Google Thwarts Toxic Online Trolls With ‘Perspective’ AI Tool
Perspective was developed by Jigsaw and Google’s Counter Abuse Technology team. The ultimate goal of Perspective is to determine “the perceived impact a comment might have on a conversation”. Google hopes that Perspective will give commenters feedback on what they write, help readers find the most relevant content, and help moderators do their jobs. It is only the first of Google’s promised machine learning models.
Publishers will ultimately decide how to use the toxicity scale. Some websites may pair it with crowdsourced flagging of offensive content, while others may use it to clean up their own comment sections outright. Some critics, on the other hand, have expressed concern that the tool may violate the First Amendment. Jared Cohen, president of Jigsaw, responded that the tool’s purpose is merely to eliminate “low-hanging fruit”.
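For readers curious how a publisher might actually wire the score into a moderation workflow, here is a minimal sketch against Perspective’s public Comment Analyzer endpoint. The endpoint path and request shape reflect the publicly documented API as best we can tell, and the API key placeholder and the 0.9 flagging threshold are assumptions made for this example, not anything prescribed by Jigsaw or Google.

```python
import requests

# Assumed endpoint and request shape for Perspective's Comment Analyzer API.
# A Google Cloud API key is required; the value below is a placeholder.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

FLAG_THRESHOLD = 0.9  # illustrative cutoff chosen for this sketch


def toxicity_score(text: str) -> float:
    """Return Perspective's 0-1 TOXICITY summary score for a comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def moderate(comment: str) -> str:
    """Flag comments above the threshold for human review; publish the rest."""
    score = toxicity_score(comment)
    return "flag for review" if score >= FLAG_THRESHOLD else "publish"


if __name__ == "__main__":
    print(moderate("Thanks for the thoughtful reply!"))
```

A site leaning on crowdsourced flagging might lower the threshold and route flagged comments to volunteers, while a site cleaning up its comment section outright might hide anything above the cutoff automatically.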
How did Jigsaw and Google determine which comments were “toxic”? They define a toxic comment as “a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion.” The team mined millions of comments from Wikipedia and the New York Times and hired several thousand people to rate them on a scale from “Very toxic” to “Very healthy”.
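The article does not spell out how those human ratings become a single score. One plausible aggregation, shown purely for illustration, is to map each rating to a value between 0 and 1 and average across raters; the intermediate labels and weights below are assumptions, not Jigsaw’s documented methodology.

```python
# Hypothetical aggregation of crowdsourced ratings into one toxicity score.
# The intermediate labels and weights are assumptions for illustration only.
RATING_WEIGHTS = {
    "Very toxic": 1.0,
    "Toxic": 0.75,
    "Neutral": 0.5,
    "Healthy": 0.25,
    "Very healthy": 0.0,
}


def aggregate_toxicity(ratings: list[str]) -> float:
    """Average the per-rater weights to produce a 0-1 toxicity score."""
    return sum(RATING_WEIGHTS[r] for r in ratings) / len(ratings)


# Example: five raters split on a borderline comment.
print(aggregate_toxicity(["Very toxic", "Toxic", "Neutral", "Healthy", "Toxic"]))  # 0.65
```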
Participants were also asked to decide whether each comment was a personal attack. The Jigsaw/Google team ultimately eliminated the “personal attack” category because its parameters fostered too much disagreement: some people interpret “I disagree with you” as an attack, while others are offended only by statements like “your mother was a hamster and your father smelt of elderberries.”
Anyone who is interested can go to the Perspective website and test how toxic their own comments are. The phrase “f*** o**” unsurprisingly scored 99% on the toxic scale; a blank space, however, scored a confusing 12%. As the creators put it, “It’s still early days and we will get a lot of things wrong.”