The rise of abusive remarks in online comments sections has prompted some publishers to do away with the comments feature altogether. But Google said its technology incubator has developed an alternative solution: a troll filter dubbed Perspective that's powered by machine learning.

Launched today by Google Jigsaw, Perspective is an application programming interface (API) built on human reviewer ratings of hundreds of thousands of online comments. Starting from this human-determined baseline of what makes comments "toxic," Perspective learns to automatically identify similar comments so that online publishers can flag them.
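In practice, a publisher's server sends each comment to the API and gets back a score between 0 and 1. The Python sketch below assumes the v1alpha1 REST endpoint and the TOXICITY attribute documented at launch; the API key is a placeholder, and the exact request shape should be checked against Jigsaw's current reference rather than treated as definitive.

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder; real keys come from the Google API Console
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + API_KEY)

    def toxicity_score(text):
        """Ask Perspective how likely a comment is to be perceived as toxic (0.0-1.0)."""
        body = {
            "comment": {"text": text},
            "requestedAttributes": {"TOXICITY": {}},  # the launch model described above
        }
        resp = requests.post(URL, json=body)
        resp.raise_for_status()
        data = resp.json()
        return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

    # Higher scores mean human reviewers would be more likely to call the comment toxic.
    print(toxicity_score("What a thoughtful article, thanks for writing it."))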

Sites can use the Jigsaw tool in a variety of ways. For instance, Perspective can flag questionable comments so human moderators can review them and decide whether to publish them. Google said publishers can also use Perspective to let readers see the potential impact of their comments as they write them, or to sort comments by toxicity to give preference to less abusive remarks.
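As a rough illustration of the first and third of those uses, the sketch below builds on the hypothetical toxicity_score() helper above: it routes high-scoring comments to a human-review queue and sorts the rest so less abusive remarks surface first. The threshold is an assumption that each publisher would tune, not a value Jigsaw prescribes.

    REVIEW_THRESHOLD = 0.8  # assumed cutoff, not a Jigsaw recommendation

    def triage(comments):
        """Split comments into an auto-publish list and a human-review queue."""
        publish, review = [], []
        for text in comments:
            score = toxicity_score(text)
            (review if score >= REVIEW_THRESHOLD else publish).append((score, text))
        # Sorting by ascending score gives preference to less abusive remarks.
        publish.sort(key=lambda pair: pair[0])
        return publish, review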

Human Moderation Takes 'Money, Labor, Time'

Online harassment is a widespread problem, with nearly half of Internet users in the U.S. having personally experienced such treatment, Jigsaw president Jared Cohen wrote in a blog post today. Citing a 2016 study by the Data & Society Research Institute, Cohen noted that almost one-third of Americans reported that they had self-censored their online comments to avoid abusive responses.

"This problem doesn't just impact online readers," Cohen said. "News organizations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labor, and time. As a result, many sites have shut down comments altogether. But they tell us that isn't the solution they want. We think technology can help."

Perspective, currently an "early-stage technology," was developed by Jigsaw in partnership with Wikipedia, The New York Times, The Economist and The Guardian. Cohen said the technology is already being tested by The New York Times, which now employs an entire team of human moderators to review an average of 11,000 online comments a day.

Open Sourcing Models and Data

Last year, another Jigsaw partner, the U.K.'s Guardian newspaper, analyzed 70 million comments that had been made in response to online articles published between January 1999 and March 2016. The paper found that 1.4 million of those -- about 2 percent -- were blocked for being abusive or "so off-topic that they derail the conversation."

The Guardian analysis also found that the opinion writers most likely to encounter online abuse were women or people of color. Comments designated as abusive included those involving threats of physical harm, ad hominem attacks and demeaning or insulting speech.

In his blog post today, Cohen said that Jigsaw will continue working on Perspective to expand its capabilities. "Our first model is designed to spot toxic language but over the next year we're keen to partner and deliver new models that work in languages other than English as well as models that can identify other perspectives, such as when comments are unsubstantial or off-topic."

Jigsaw is also making Perspective's experiments, models and research data open source to "explore the strengths and weaknesses (e.g., potential unintended biases) of using machine learning as a tool for online discussion."