AI System Detects Disinformation With 96 Percent Precision, Can Even Trace Its Source

A team at MIT Lincoln Laboratory’s Artificial Intelligence Software Architectures and Algorithms Group set out to better understand disinformation campaigns and to build a system that can detect them. The Reconnaissance of Influence Operations (RIO) programme also aims to identify the accounts spreading that disinformation on social media platforms. The team published a paper on the work earlier this year in the Proceedings of the National Academy of Sciences and was honoured with an R&D 100 award as well.

Work on the project began in 2014, when the team noticed increased and unusual activity in social media data from accounts that appeared to be pushing pro-Russian narratives. Steve Smith, a staff member at the lab and a member of the team, told MIT News that they were “kind of scratching our heads.”

Then, just before the 2017 French elections, the team launched the programme to check whether similar techniques would be put to use. In the 30 days leading up to the vote, the RIO team collected real-time social media data to analyse the spread of disinformation, compiling a total of 28 million tweets from 1 million Twitter accounts. Using RIO, the team was able to detect disinformation accounts with 96 percent precision.
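For reference, precision here means that of all the accounts RIO flagged as disinformation accounts, 96 percent actually were. A minimal sketch of how that metric is computed, using entirely hypothetical account labels rather than data from the study:

```python
# Precision = true positives / (true positives + false positives):
# of all accounts the system flags, what fraction are truly disinformation accounts.

def precision(flagged: set, actual: set) -> float:
    """Fraction of flagged accounts that are genuinely disinformation accounts."""
    if not flagged:
        return 0.0
    true_positives = len(flagged & actual)
    return true_positives / len(flagged)

# Hypothetical example: the system flags 25 accounts, 24 of which are real
# disinformation accounts, giving precision = 24 / 25 = 0.96.
flagged_accounts = {f"acct_{i}" for i in range(25)}
actual_disinfo = {f"acct_{i}" for i in range(24)} | {"acct_999"}
print(precision(flagged_accounts, actual_disinfo))  # 0.96
```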

The system also combines multiple analytics techniques to create a comprehensive view of where and how disinformation is spreading.

Edward Kao, another member of the research team, said that in the past, if people wanted to know which accounts were most influential, they simply looked at activity counts. “What we found is that in many cases this is not sufficient. It doesn’t actually tell you the impact of the accounts on the social network,” MIT News quoted Kao as saying.

Kao developed a statistical approach, now used in RIO, to detect not only whether a social media account is spreading disinformation but also how much the account causes the network as a whole to change and amplify the message.
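The article doesn’t detail the statistics behind Kao’s approach, but the gap between raw activity counts and network impact can be illustrated with a toy example. The sketch below uses PageRank on a hypothetical retweet graph as a stand-in measure (an assumption for illustration, not RIO’s actual model) to show how an account that is retweeted only once can still outrank accounts with higher raw counts:

```python
# Sketch: activity counts vs. network influence on a hypothetical retweet graph.
# PageRank here illustrates the general idea only, not RIO's statistical model.
import networkx as nx

# Directed edges point from the retweeter to the account being amplified.
retweets = [
    ("a", "seed"), ("b", "seed"), ("c", "seed"),  # "seed" is widely amplified
    ("seed", "origin"),                           # but "seed" itself amplifies "origin"
    ("d", "noisy"), ("noisy", "noisy2"),          # a low-reach side chain
]
g = nx.DiGraph(retweets)

# Naive measure: how often an account gets retweeted (in-degree).
activity = dict(g.in_degree())

# Network measure: PageRank credits accounts whose amplifiers are themselves
# amplified, so "origin" (retweeted once, but by the well-amplified "seed")
# ends up ranked above "seed" despite a much lower raw count.
influence = nx.pagerank(g)

for node in g.nodes:
    print(node, activity[node], round(influence[node], 3))
```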

Another research team member, Erika Mackin, applied a new machine learning approach that helps RIO classify these accounts by looking at behavioural data, focussing on factors such as an account’s interactions with foreign media and the languages it uses. This leads to one of RIO’s most distinctive strengths: it detects and quantifies the impact of accounts operated by both bots and humans, whereas most other detection systems focus on bots alone.
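As a rough illustration of that kind of behaviour-based classification, here is a sketch using logistic regression over made-up features; the feature set, labels, and model choice are illustrative assumptions, not Mackin’s actual approach:

```python
# Sketch of a behaviour-based account classifier in the spirit described above.
# Features, labels, and model are hypothetical, not RIO's actual feature set.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-account features:
#   [shares of foreign state media, number of languages used, posts per day]
X = np.array([
    [40, 3, 120],   # heavy foreign-media amplification, multilingual, very active
    [35, 4, 200],
    [ 1, 1,   5],   # ordinary accounts
    [ 0, 1,   8],
    [ 2, 2,  12],
])
y = np.array([1, 1, 0, 0, 0])  # 1 = influence-operation account (hypothetical labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new account. Because the classifier judges behaviour rather than
# whether the account is automated, it applies to human-operated accounts too.
new_account = np.array([[30, 3, 90]])
print(model.predict(new_account), model.predict_proba(new_account))
```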

The team at the MIT lab hopes RIO will be used by government, industry, social media platforms, and conventional media outlets such as newspapers and TV. “Defending against disinformation is not only a matter of national security but also about protecting democracy,” Kao said.

