New AI algorithms can spot online trolls: Study
Team Udayavani, Jan 9, 2020, 3:15 PM IST
Los Angeles: New artificial intelligence (AI) algorithms can monitor online social media conversations as they evolve, which could lead to an effective and automated way to spot online trolling in the future, according to researchers, including those of Indian origin.
Prevention of online harassment requires rapid detection of offensive, harassing, and negative social media posts, which in turn requires monitoring online interactions.
Current methods to obtain such social media data are either fully automated and not interpretable, or rely on a static set of keywords, which can quickly become outdated.
Neither method is very effective, according to Maya Srikanth of the California Institute of Technology (Caltech) in the US.
“It isn’t scalable to have humans try to do this work by hand, and those humans are potentially biased,” Srikanth said.
“On the other hand, keyword searching suffers from the speed at which online conversations evolve. New terms crop up and old terms change meaning, so a keyword that was used sincerely one day might be meant sarcastically the next,” she said.
The team, including Anima Anandkumar from Caltech, used the GloVe (Global Vectors for Word Representation) model, which uses machine-learning algorithms to discover new and relevant keywords.
Machine learning is an application of AI that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.
GloVe is a word-embedding model, meaning that it represents words in a vector space, where the “distance” between two words is a measure of their linguistic or semantic similarity.
Starting with one keyword, this model can be used to find others that are closely related to that word to reveal clusters of relevant terms that are actually in use.
For example, searching Twitter for uses of “MeToo” in conversations yielded clusters of related hashtags like “SupportSurvivors,” “ImWithHer,” and “NotSilent.”
This approach gives researchers a dynamic and ever-evolving keyword set to search.
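As a rough illustration of the idea (a simplified sketch, not the team's actual code), a publicly available set of pretrained GloVe vectors can be queried for a seed keyword's nearest neighbours in the embedding space; the model name "glove-twitter-25" and the seed word below are assumptions made for the example.

```python
# Illustrative sketch only: expand one seed keyword into a cluster of
# related terms using publicly available pretrained GloVe vectors.
import gensim.downloader as api

# Load 25-dimensional GloVe vectors trained on tweets.
vectors = api.load("glove-twitter-25")

# Starting from one seed keyword, pull its nearest neighbours in the
# embedding space: a cluster of related terms that are actually in use.
seed = "metoo"  # assumed seed; the hashtag may not be in this older vocabulary
if seed in vectors:
    for term, similarity in vectors.most_similar(seed, topn=10):
        print(f"{term}\t{similarity:.3f}")
else:
    print(f"'{seed}' is not in this model's vocabulary")
```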
However, it is not enough just to know whether a certain conversation is related to the topic of interest; context matters, the researchers said.
For that, GloVe shows the extent to which certain keywords are related, providing input on how they are being used.
For example, in an online Reddit forum dedicated to misogyny, the word “female” was used in close association with the words “sexual,” “negative,” and “intercourse.”
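The degree of association can be read off as the similarity between two word vectors. The short sketch below is again only illustrative, using an assumed pretrained model and assumed word pairs rather than the Reddit data the researchers analysed.

```python
# Illustrative sketch only: measure how closely pairs of terms are
# associated in a word-embedding space.
import gensim.downloader as api

vectors = api.load("glove-twitter-25")

# Cosine similarity between word vectors: higher values indicate the terms
# tend to appear in similar contexts in the training corpus.
pairs = [("female", "sexual"), ("female", "negative"), ("female", "intercourse")]
for a, b in pairs:
    if a in vectors and b in vectors:
        print(f"{a} / {b}: {vectors.similarity(a, b):.3f}")
```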
The project was a proof-of-concept aimed at one day giving social media platforms a more powerful tool to spot online harassment, the researchers said.
“The field of AI research is becoming more inclusive, but there are always people who resist change,” said Anandkumar.
“Hopefully, the tools we’re developing now will help fight all kinds of harassment in the future,” she said.
The research was presented on December 14, 2019, at the AI for Social Good workshop at the Conference on Neural Information Processing Systems in Vancouver, Canada.