New York: Scientists have developed new machine learning algorithms that can identify bullies and aggressors on Twitter with 90 per cent accuracy. Effective tools for detecting harmful actions on social media are scarce, as this type of behaviour is often ambiguous and exhibited via seemingly superficial comments and criticisms, said researchers from Binghamton University in the US.
The study analysed the behavioural patterns exhibited by abusive Twitter users and how they differ from other Twitter users. “We built crawlers — programmes that collect data from Twitter via a variety of mechanisms,” said Binghamton University computer scientist Jeremy Blackburn. “We gathered tweets of Twitter users, their profiles, as well as (social) network-related things, like who they follow and who follows them,” Blackburn said.
The researchers then performed natural language processing and sentiment analysis on the tweets themselves, as well as a variety of social network analyses on the connections between users. They developed algorithms to automatically classify two specific types of offensive online behaviour: cyberbullying and cyberaggression. The algorithms were able to identify abusive users on Twitter with 90 per cent accuracy, the researchers said. These are users who engage in harassing behaviour, e.g. those who send death threats or make racist remarks to other users.
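The study does not publish its pipeline, but the general approach it describes — turning a user's tweet text and network statistics into numeric features that a classifier can consume — might be sketched along these lines. The word list, feature names, and the follower-ratio heuristic below are illustrative assumptions, not the study's actual features:

```python
import re
from collections import Counter

# Hypothetical lexicon of abusive terms; the real study applied far richer
# natural language processing and sentiment analysis than a simple word list.
ABUSIVE_WORDS = {"idiot", "loser", "die", "stupid"}

def extract_features(tweets, follower_count, following_count):
    """Turn one user's tweets and network stats into a feature vector (a dict)."""
    words = Counter()
    for tweet in tweets:
        words.update(re.findall(r"[a-z']+", tweet.lower()))
    total_words = sum(words.values()) or 1
    return {
        # Text feature: fraction of the user's words drawn from the abusive lexicon.
        "abusive_word_ratio": sum(words[w] for w in ABUSIVE_WORDS) / total_words,
        # Simple stylistic feature: average tweet length in characters.
        "avg_tweet_length": sum(len(t) for t in tweets) / max(len(tweets), 1),
        # Network feature (illustrative assumption): how many accounts follow
        # the user relative to how many the user follows.
        "follower_following_ratio": follower_count / max(following_count, 1),
    }
```

A downstream classifier would then be trained on such vectors computed for many labelled users, combining the textual and network signals the article mentions.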
“In a nutshell, the algorithms ‘learn’ how to tell the difference between bullies and typical users by weighing certain features as they are shown more examples,” said Blackburn.
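Blackburn's description — weighing certain features more or less heavily as the model is shown more examples — is essentially how a linear classifier trains. A minimal from-scratch sketch of that idea (a perceptron over toy features, not the study's actual model) looks like this:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn one weight per feature from labelled examples.

    examples: list of (feature_dict, label) pairs, label 1 = bully, 0 = typical.
    Each mistake nudges the weights toward the correct answer, so features
    that distinguish bullies from typical users gradually gain influence.
    """
    weights, bias = {}, 0.0
    for _ in range(epochs):
        for features, label in examples:
            score = bias + sum(weights.get(f, 0.0) * v for f, v in features.items())
            predicted = 1 if score > 0 else 0
            error = label - predicted
            if error:  # wrong prediction: adjust the weights of active features
                for f, v in features.items():
                    weights[f] = weights.get(f, 0.0) + lr * error * v
                bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    """Classify a user from their feature vector using the learned weights."""
    score = bias + sum(weights.get(f, 0.0) * v for f, v in features.items())
    return 1 if score > 0 else 0
```

The researchers' actual algorithms are more sophisticated, but the training loop captures the quoted intuition: the model is never told which features matter; it discovers their weights by being corrected on examples.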