“Don’t read the comments section” is a modern rule of journalism, but increasingly it applies to readers as well. Imagine trying to discuss the latest developments in domestic politics, foreign affairs or sports over coffee with a friend — except that every two minutes strangers interrupted your conversation with personal insults or offensive slurs. Toxic comments are frustrating for news providers and readers alike, and the problem is constantly getting worse.
Jigsaw is a part of Alphabet, the parent company of Google. It works on tools to combat online extremism, end human trafficking and track cybercrime. One of its latest projects is Perspective, a new tool designed to make the online comments section safe again.
“The Internet is proving to be one of the most powerful amplifiers of speech ever invented,” Vint Cerf wrote for the Internet Society back in 1999. “It invites and facilitates multiple points of view and dialog in ways unimplementable by the traditional, one-way, mass media.” He was absolutely correct: the sheer volume and range of media available online is breathtaking compared to the pre-internet world, as is the option to comment on what you’ve just read.
But he also foresaw the downside. The internet won’t be for everyone, he warned, if we are not “mindful of the rights of others.” For those who abuse the privileges brought about by new technology, he wrote, “let us dedicate ourselves to developing the necessary tools to combat the abuse and punish the abuser.”
Easier said than done. The Observer’s Readers’ Editor bemoaned the problem, beginning one column last year with a lament that “holocaust denial, rampant misogyny and deeply questionable attitudes to underage sex all made brief, unwelcome appearances” in the comments section. The New York Times reviews comments manually, meaning only about 10% of their articles each day are open for readers’ responses. And Vice.com simply gave up and closed online commenting after watching web comments “devolve into racist, misogynistic maelstroms where the loudest, most offensive, and stupidest opinions get pushed to the top.”
The statistics match the anecdotes: according to the Data & Society Research Institute, 72% of American internet users have witnessed harassment online and nearly half have personally experienced it. Almost a third self-censor what they post online, fearing retribution. That’s not an internet for everyone — so the developers at Jigsaw decided it was time for some perspective.
Perspective is an API — an interface that publishers can plug into their existing content systems — that uses machine learning to spot abuse and harassment online, helping to keep debate free of hate speech. The program analyses comments and gives each one a score based on how similar it is to comments people have said were “toxic” or likely to make someone leave a conversation. To learn how to do this, Perspective examined hundreds of thousands of comments that had been labelled as offensive by human comment section moderators. And it keeps learning: as it sees new examples of potentially toxic comments, or is corrected by users, it applies that knowledge to future below-the-line comments.
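For readers curious what a request to such a service looks like in practice, here is a minimal sketch. The field names follow the shape of Perspective’s published API (a comment text in, a 0-to-1 toxicity probability out), but the response values below are illustrative, not real output — consult the current documentation before relying on any of this.

```python
import json

def build_request(comment_text):
    """Build the JSON body for a toxicity analysis request,
    shaped like Perspective's comments:analyze endpoint expects."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_score(response):
    """Pull the 0-to-1 toxicity probability out of a response body."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# An illustrative response in the API's documented shape;
# the score here is made up for the example.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}

body = build_request("your a stupid idiot")
print(json.dumps(body))
print(extract_score(sample_response))
```

The score is a probability, not a verdict: it tells a publisher how similar a comment is to ones humans have flagged, leaving the decision about what to do with it to the publisher.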
“This technology is still developing,” wrote Jigsaw President Jared Cohen in a blog post. “When Perspective is in the hands of publishers, it will be exposed to more comments and develop a better understanding of what makes certain comments toxic.”
Publishers can add Perspective to their web pages or apps and then choose what to do with the information. They could flag comments for their own human moderators to review and decide whether to include them in a conversation. Or they could enable it to work in real time, so an internet user typing “your a stupid idiot” into the comment box would be alerted to the potential toxicity of what they are writing. Publishers could also opt to leave all comments displayed and allow readers to sort them by toxicity themselves, making the cream of the online discussion rise to the top.
Jigsaw’s team have been working with no less a publication than the New York Times to trial Perspective. The paper’s manual review system means that even with human moderators working all day worldwide, they can only open comments on around 10% of articles. Using Perspective’s AI, they can cluster similar comments together and review them in groups; the next stage is testing, with a plan to share open-source results of the collaboration by the end of this year.
So it’s just the beginning. Perspective is starting by using AI to identify the pernicious influence of toxic comments, but it will grow as it learns. “Our first model is designed to spot toxic language, but over the next year we’re keen to partner and deliver new models that work in languages other than English as well as models that can identify other perspectives, such as when comments are unsubstantial or off-topic,” wrote Cohen. “In the long run… we hope we can help improve conversations online.”