Web Mega-Coalition to Create 'Terror Fingerprint' Countermeasures
In practice, the plan will require moderators from each company to flag content they believe falls into the category of 'the most extreme and egregious terrorist images and videos'. Every piece of image-based content on these sites already has what's known as a "hash", which essentially acts as a "digital fingerprint" for that specific video or image. Until now, if YouTube caught a video it deemed unsuitable, it would either remove it or intervene by some other means. Under the new arrangement, however, the four companies can also choose to put certain information about that content - including its hash - into their coalition database, making it possible for the others to scan for the same unique fingerprint on their own platforms and remove it too if they wish.
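To make the mechanism concrete, here is a minimal sketch of how hash-based matching could work in principle. It is purely illustrative: a plain SHA-256 of the file bytes stands in for whatever fingerprinting technique each company actually uses (production systems typically rely on perceptual hashes, so that re-encoded or lightly edited copies still match), and the function and database names are invented, not taken from the coalition's announcement.

```python
import hashlib

# Illustrative shared database: fingerprints contributed by any of the four companies.
shared_hash_db = set()


def fingerprint(path: str) -> str:
    """Compute a 'digital fingerprint' for a media file.

    A plain SHA-256 of the raw bytes is used here only for illustration;
    real systems use perceptual hashing so near-duplicates also match.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def flag_content(path: str) -> None:
    """A moderator flags extreme content: its hash joins the shared pool."""
    shared_hash_db.add(fingerprint(path))


def matches_known_content(path: str) -> bool:
    """Another platform checks a new upload against the shared fingerprints."""
    return fingerprint(path) in shared_hash_db
```

The point of the scheme is exactly this asymmetry: one company does the costly human judgement once, and the others get a cheap lookup against the resulting fingerprint.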
The intention of this new initiative, at least in part, is almost certainly to create a strong firewall against the circulation of such content as the video of the beheading by Daesh of journalist James Foley, which circulated widely in 2014 after being uploaded to various social media websites.
One way of pinning down what it actually means, however, is to describe the coalition as a huge exercise in information-sharing, with a view to keeping the public sphere free from extreme content. It's also worth noting that these particular data-sharers are some of the world's largest technology companies, whose combined user figures run upward of 4.3 billion people. While many of those people surely use more than one of the four platforms (so the number of distinct individuals is smaller), we are nonetheless talking about some huge data-sifting.
But it's worth bearing in mind that such an endeavour comes with certain fundamental problems. For example, we still have to confront the chronic lack of accountability with which such companies operate, and will continue to operate under this new deal. In 2012, for instance, Facebook was estimated to have about 1,000 moderators, whom the Telegraph described at the time as 'outsourced' and 'unvetted' workers paid about $1 an hour. YouTube and Facebook have also lately moved towards automated removal of extremist content, but the exact algorithms remain sealed in a 'secret sauce' jar.
Moreover, critics have long warned of the ramifications of handing editorial power to private entities that have limited obligations of transparency about their methodologies and, more specifically, their selection criteria when editing content. For now, we can probably expect the companies involved to edit along fairly normative lines; but what the future holds (and, indeed, whether normative is good) is very hard to figure out. Facebook, for one, has previously promised that 'any content celebrating terrorism is removed.' But if we ask whether the precedent set by our acquiescence to such a measure opens the door to less clear-cut measures (or, indeed, whether it already has elsewhere), we'll soon see that it's a tricky question to answer.
Finally, the four companies joined in this deal are likely to find they have conflicting or diverging notions when it comes to defining 'extremism' or, indeed, when deciding whether certain images are worth sharing to their combined database. Just this week, Twitter claimed it could delete Donald Trump's tweets if they fell within the boundaries of 'hate speech', whereas Facebook promised to keep all of his posts available on its platform on grounds of newsworthiness. The application of community standards varies in stringency between sites, and none fully adheres even to its own. As Monday's press release puts it, 'each company will independently determine what image and video hashes to contribute to the shared database.' In other words, they are not required to upload everything they deem inappropriate to the coalition database. It would be interesting to see just how far the companies are of a like mind...but of course, we can't really discern this; after all, they won't let us look through their windows.
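That discretion is easy to picture as a filter each company applies before anything reaches the shared pool. The sketch below is hypothetical: the labels, policies and the `contribute` helper are invented for illustration and are not drawn from the press release.

```python
# Hypothetical per-company contribution step: each platform applies its own
# policy before any of its flagged hashes reach the shared database.
from typing import Callable, Dict, Set

shared_hash_db: Set[str] = set()


def contribute(flagged: Dict[str, str], policy: Callable[[str], bool]) -> None:
    """Add only the flagged hashes that pass this company's own policy.

    `flagged` maps a content hash to the company's internal label for it;
    `policy` is that company's private test for what counts as 'the most
    extreme and egregious' material.
    """
    for content_hash, label in flagged.items():
        if policy(label):
            shared_hash_db.add(content_hash)


# Two companies flag overlapping content but share it on different terms.
contribute({"a1b2": "execution video", "c3d4": "hate speech"},
           policy=lambda label: label == "execution video")
contribute({"c3d4": "hate speech"},
           policy=lambda label: True)
```

Under a scheme like this, the shared database only ever reflects the intersection of what gets flagged and what each company is willing to pass on, which is precisely why the degree of like-mindedness matters and why it is so hard to observe from outside.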
Problems aside, however, the plan does look to be a net positive, at least in the short term. In the fight against global terror, the internet has become as much a front line as the physical spaces destroyed by such conflicts. Persuasion and propaganda are the bedrocks of radical ideologies, however one defines them, and it's generally agreed that the best way to counteract organisations like Daesh is to curtail their public voice. Facebook, Twitter, YouTube and Microsoft are charting a course through some very choppy waters; but charting a course they are nonetheless.
James has a Bachelor’s degree in History and wrote his dissertation on beef and protest. His heroes list ranges from Adele to Noam Chomsky: inspirations he’ll be invoking next year when he begins a Master’s degree in London. Follow him @Songbird_James
Contact us on Twitter, on Facebook, or leave your comments below. To find out about social media training or management, why not take a look at our website for more info: TheSMFGroup.com