Instagram Have Introduced Machine Learning Filters to Moderate Comments
Via Digital Trends
Previously, Instagram's comment filters could only recognise offensive language that appeared on a predetermined list, and although users could add their own words and phrases to the list (applicable only to comments on their own posts), it was a pretty basic system. With this upgrade, the filters will continue to learn new offensive words and phrases, and will judge what's offensive from context as well as from the language itself.
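Instagram hasn't published the details of how the new system works, but the shift is essentially from a fixed blocklist lookup to a learned classifier that scores the whole comment. A minimal Python sketch of that difference, in which the blocklist, the model object (assumed to expose a scikit-learn-style predict_proba) and the threshold are all invented for illustration:

```python
# Illustrative sketch only; Instagram's actual system is not public.

# Old approach: flag a comment if it contains any blocklisted term.
HYPOTHETICAL_BLOCKLIST = {"badword1", "badword2"}  # stand-in for the predetermined list

def old_style_filter(comment: str) -> bool:
    words = comment.lower().split()
    return any(word in HYPOTHETICAL_BLOCKLIST for word in words)

# New approach: a trained text classifier scores the whole comment,
# so surrounding context influences the decision, not just single terms.
def new_style_filter(comment: str, model, threshold: float = 0.8) -> bool:
    # `model` is assumed to expose a scikit-learn-style predict_proba()
    score = model.predict_proba([comment])[0][1]  # estimated P(offensive)
    return score >= threshold
```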
Additionally, the moderation acts as a kind of closed circuit for the perpetrators - if someone's comment gets removed, they'll still be able to see it, but nobody else will. In that sense, commenters will have no idea that their comment has been removed, and will simply assume that nobody is acknowledging it. Twitter have been playing around with a similar method for a while now.
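The visibility rule itself is simple to picture: a comment hidden by the filter is dropped from everyone's view except its author's. A minimal sketch of that behaviour, using an invented data model:

```python
from dataclasses import dataclass

# Invented data model: a comment flagged by the filter stays visible
# only to its own author, so they never know it was hidden.
@dataclass
class Comment:
    author_id: int
    text: str
    hidden_by_filter: bool = False

def visible_comments(comments, viewer_id):
    """Return the comments a given viewer should see under this rule."""
    return [
        c for c in comments
        if not c.hidden_by_filter or c.author_id == viewer_id
    ]
```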
On top of that, the filter is being changed from an opt-in feature to a default one, something which may spark a certain amount of controversy. It can still be switched off in settings, and the custom blocking list is still available, though for obvious reasons it can't make use of the new, more sophisticated system. Currently the filter only works in English, but Instagram are in the process of bringing it to other languages.
The spam filter has actually been up and running in a more simplified guise for about eight months now, with little fanfare. To improve it, Instagram tasked a human team with sifting through reams of spam comments to create a comprehensive database of key words, phrases and other tells. Instagram have yet to find an effective way of dealing with spam accounts, but this is certainly a step in the right direction.
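One plausible way to turn that human-labelled pile of comments into a keyword database is simply to keep the terms that recur across confirmed spam. The sketch below assumes a simple (text, is_spam) labelling format, which is an invention for illustration rather than Instagram's actual pipeline:

```python
from collections import Counter

def build_spam_keyword_db(labelled_comments, min_count=50):
    """labelled_comments: iterable of (text, is_spam) pairs from human reviewers."""
    counts = Counter()
    for text, is_spam in labelled_comments:
        if is_spam:
            counts.update(text.lower().split())
    # Keep only terms that recur often enough across confirmed spam.
    return {term for term, n in counts.items() if n >= min_count}
```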
With offensive comments, one of the main aims is to reduce false positives. Both Instagram and Facebook have been called out in the past for flagging innocent content as inappropriate, and in many cases that was because the algorithms weren't clever enough to differentiate between something that's genuinely offensive and something which merely contains a term that would be offensive in another context. The same applies to images. For this reason, the new AI watchdog is built to operate with a 1% margin of error, but we won't know exactly how aggressive or lax the system really is until it launches properly.
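How aggressive the system feels in practice comes down to where the decision threshold sits. A rough sketch of picking a threshold so that about 1% of known-innocent comments would be flagged, using placeholder scores and labels rather than anything from Instagram:

```python
import numpy as np

def threshold_for_target_fpr(scores, labels, target_fpr=0.01):
    """Pick a score cut-off so roughly `target_fpr` of innocent comments get flagged.

    scores: model scores for a held-out set of comments (numpy array).
    labels: 0 for known-innocent, 1 for known-offensive (numpy array).
    """
    innocent_scores = scores[labels == 0]
    # Flag only comments scoring above the (1 - target_fpr) quantile of
    # innocent comments, so about 1% of them would be wrongly flagged.
    return float(np.quantile(innocent_scores, 1 - target_fpr))
```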
Callum is a film school graduate who is now making a name for himself as a journalist and content writer. His vices include flat whites and 90s hip-hop. Follow him @Songbird_Callum
Contact us on Twitter, on Facebook, or leave your comments below. To find out about social media training or management, why not take a look at our website for more info: TheSMFGroup.com