Friday, 24 February 2017

Google Launches Robo-Tool to Flag Hate Speech Online

Imagine taking part in a conversation online and being called an "idiot." That is practically equivalent to first-degree murder these days, so Google has stepped in with a solution to spare people the hurt feelings and the suicidal thoughts that apparently follow. "Perspective" works by scoring comments on how "toxic" they are and flagging them accordingly, so sensitive types can be warned about discussions that dare to exhibit brutal honesty, and so publishers can route abusive comments to human moderators for review more quickly.

The algorithm was trained on hundreds of thousands of user comments from sites such as Wikipedia and The New York Times that human reviewers had labelled as "toxic." It scores a new comment by how similar it is to comments already tagged as "toxic," that is, comments likely to make someone leave a conversation. "All of us are familiar with increased toxicity around comments in online conversations," said Jared Cohen, president of Jigsaw, the Alphabet unit behind the tool. "People are leaving conversations because of this, and we want to empower publications to get those people back."
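
For anyone curious how a publisher would actually plug this in, Perspective is exposed as a web API (the Comment Analyzer endpoint). The sketch below is a minimal, illustrative example of requesting a TOXICITY score for a single comment; it assumes you have your own API key and that the v1alpha1 endpoint URL and JSON request shape shown here still match the service, so treat it as a rough guide rather than gospel.

import json
import urllib.request

# Assumption: endpoint and payload follow the v1alpha1 Comment Analyzer API.
# Substitute a real API key before running.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY probability (0.0 to 1.0) for one comment."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    req = urllib.request.Request(
        URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    for comment in ["You are an idiot.", "I see your point, but I disagree."]:
        score = toxicity_score(comment)
        # A publisher might hold anything above, say, 0.8 for human review.
        print(f"{score:.2f}  {comment}")

The 0.8 cutoff in the example is a placeholder, not anything Google prescribes; the whole point of handing back a score instead of a verdict is that each publication gets to decide how much "brutal honesty" its comment section will tolerate before a human moderator steps in.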

Read the full article here by [H]ardOCP News/Article Feed
