Thursday, April 25, 2013

Veilbook

NHS privatisation gains its craggy taloned footing on a crucial moss-slicked stepping stone this week, in the context of under-reporting and misleading reporting by both the BBC and fully marketised news media.

"Yet, for some individual or group, the mainstream media blackout was not enough; they wiped Facebook clean of our dissent too. We must take this as a note of caution and a reminder that Facebook and other social media sites are not free spaces, they are owned by corporations. If someone came and clasped their hand over your mouth in the street, there would be avenues for redress. If Facebook does the same, options are limited" (Scriptonite).

Does Facebook censor political content? 

Yes and no is my best guess. It's worth trying to imagine, in concrete detail, how censorship might be embedded in Facebook's operations. Not that there aren't Evil Corporations (there basically are), but we shouldn't lose sight of the various org hierarchies and processes and codes of conduct of that Evil, how that Evil is embodied in the lives of various individuals committing various individual acts, according to narratives which let them sleep at night. Like, "All that is necessary for evil to triumph is for evil people to do nothing."

(a) Say there's always somebody in a Facebook office ready to field calls from Key Stakeholders about their publicity concerns. Maybe you can get on that list by holding a lot of shares, buying a lot of FB ad space, getting a personal recommendation from senior management, etc. So there would be a phone call from time to time, raising concerns that a particular viral item is without factual basis, is libellous, or violates copyright. No proof would be necessary: just a plausible enough complaint to lead to a temporary (i.e. permanent) take-down. I can just about credit such an arrangement existing. It doesn't seem a particularly good fit for the events Scriptonite describes though. Who would have made that call? What would the pretence have been? I wouldn't quite rule it out, but ...

(b) More likely, we the public censored the article when creeps among us clicked "Report Story or Spam." We abused that function, because some of us, for a variety of reasons, get enraged by the wrong things. It is possible that Facebook employees then undertook some sort of evaluation as to whether the content violated the Terms of Service. But I wouldn't be surprised if the take-down kicked in automatically, once some reporting threshold was reached.
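
To make the automatic version concrete, here's a minimal Python sketch of what a report-threshold take-down might look like. None of this is Facebook's actual code: the threshold figure, the class, and the function names are all invented for illustration.

```python
# Hypothetical sketch of a report-threshold take-down. Nothing here describes
# Facebook's real systems; the names and the figure of 50 are invented.

from dataclasses import dataclass

REPORT_THRESHOLD = 50   # invented figure: reports needed before the item vanishes

@dataclass
class Item:
    url: str
    reports: int = 0
    visible: bool = True

def report_story_or_spam(item: Item) -> None:
    """Each 'Report Story or Spam' click bumps a counter; once the threshold
    is crossed the item disappears with no human evaluation in the loop."""
    item.reports += 1
    if item.reports >= REPORT_THRESHOLD:
        item.visible = False   # the temporary (i.e. permanent) take-down
```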

We could try to generalise, BTW, about what kinds of item tend to get zotzed in this way. Something which is in principle incredibly divisive and vitriol-a-genic could rattle around safely inside a stovepiped network for ages, only being shared among people who can tolerate it. In other words, an exemplary risk would be an item which angers your friends, not just your foes. (I do find it difficult to get inside the psychology that wrathfully marks as spam rather than de-friends, a hint that I might not have this quite right.) I wonder if there were a lot of take-downs during the 2011 riots? Also: I suspect these processes are fairly resistant to monetisation, but not impervious to it. Malicious reporting could be part of someone's job.
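
If you wanted a "signature" of the at-risk item, here's a toy heuristic under my own assumptions: the likeliest candidates for zotzing are items whose report rate climbs as they cross out of the sympathetic network that first shared them. The function, its parameters, and all the numbers below are made up.

```python
# A toy heuristic for the kind of item that tends to get zotzed: one that is
# reported heavily relative to how far it has travelled beyond the network
# that can tolerate it. All names and numbers are invented for illustration.

def zotz_risk(shares_inside: int, shares_outside: int, reports: int) -> float:
    """Crude score: report rate weighted by how far the item has crossed
    network boundaries (0 = fully stovepiped, 1 = fully crossed over)."""
    total_shares = shares_inside + shares_outside
    if total_shares == 0:
        return 0.0
    boundary_crossing = shares_outside / total_shares
    report_rate = reports / total_shares
    return report_rate * boundary_crossing

# An item rattling safely inside a stovepiped network:
print(zotz_risk(shares_inside=900, shares_outside=100, reports=20))    # low
# The same item after it escapes into less tolerant feeds:
print(zotz_risk(shares_inside=900, shares_outside=2000, reports=400))  # much higher
```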

Anyway, the key thing is: it's still Facebook's responsibility to stop this from happening. But it's not a case-by-case responsibility (or not only a case-by-case responsibility). It's a matter of systems design. Facebook needs to work out the signature of political and/or divisive speech being misreported as spam, and find ways to protect it. Their systems no doubt put a lot of weight on a list of stipulated trusted sources (The Guardian, The Mail). That's papering over the cracks. In fact it's not even paper. It's some kind of nang guacamole. Instead, Facebook needs to design an architecture within which we can correctly crowdsource the initial judgment as to whether a particular item is legitimate, regardless of its domain. One complementary possibility is a transitional status for an item suspected of being spam, letting the self-identified digitally literate make up their own minds. If something is taken down, there needs to be obvious, effective appeal functionality, and there need to be swift, bold humans evaluating those appeals and giving reasons for their decisions. Of course in a crisis, Facebook is completely unreliable. But for day-to-day stuff, it's an amphitheatre worth fighting for. It's a Great Space.
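
To pin down that transitional-status idea, here's a minimal sketch of the lifecycle I have in mind: a reported item gets demoted to a "suspected" state rather than deleted, the digitally literate can still opt in to see it, and a take-down can be appealed to a human who must give reasons. The states and transitions are my own assumptions, not a description of any real system, Facebook's or otherwise.

```python
# A minimal sketch of the proposed transitional status, NOT any real system.

from enum import Enum, auto

class Status(Enum):
    ACTIVE = auto()
    SUSPECTED = auto()   # flagged, but still viewable behind an interstitial
    REMOVED = auto()
    APPEALED = auto()
    RESTORED = auto()

def on_report_threshold(status: Status) -> Status:
    # Crossing the report threshold demotes the item rather than deleting it
    return Status.SUSPECTED if status == Status.ACTIVE else status

def on_appeal(status: Status) -> Status:
    # Obvious, effective appeal functionality for anything taken down
    return Status.APPEALED if status == Status.REMOVED else status

def on_review(status: Status, reviewer_finds_spam: bool) -> Status:
    # A swift, bold human (who gives reasons) resolves suspected or appealed items
    if status in (Status.SUSPECTED, Status.APPEALED):
        return Status.REMOVED if reviewer_finds_spam else Status.RESTORED
    return status
```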

(c) It's some kind of weird glitch which affected that article at random. Maybe stuff disappears all the time and we tend to notice more when we can attribute it to the actions of a vigilant antagonist. I think it's perfectly possible to prefer this option to option (a) whilst still having my distrust of corporate communications, corporate activities and corporate ideology and culture set to maximum.

UPDATE: Horrible Telegraph article by Willard Foxton makes queasy common cause with me: "a small amount of code that Facebook's anti-spam algorithms recognised as spam embedded in her site; hence, people received a warning that the link was potentially dangerous by clicking on it. When enough people clicked "report spam", the post was automatically taken down."
