
New AI Can Distinguish True Conspiracies From Conspiracy Theories

Source: Forbes

The audio on the otherwise shaky body-camera footage is unusually clear. As cops search a handcuffed man who moments before fired a shot inside a pizza parlour, an officer asks him why he was there.

The man says he came to research a pedophile ring. The officer asks again. Another officer chimes in: “Pizzagate. He’s talking about Pizzagate.”

In that brief interaction in 2016, it became clear that conspiracy theories, long relegated to the fringes of society, had moved into the real world in a very dangerous way.

Conspiracy theories, which have the potential to cause significant harm, have found a welcome home on social media, where forums free from moderation allow like-minded individuals to converse. There they may develop their theories and propose actions to counteract the threats they “uncover.”

But how can you tell whether an emerging narrative on social media is an unfounded conspiracy theory? It turns out that it’s possible to distinguish conspiracy theories from true conspiracies by using machine learning tools to graph the elements and connections of a narrative. These tools could form the basis of an early warning system that alerts authorities to online narratives posing a threat in the real world.

The culture analytics group at the University of California, led by Timothy R. Tangherlini and Vwani Roychowdhury, has developed an automated approach to determining when conversations on social media reflect the telltale signs of conspiracy theorizing.

They have applied these methods successfully to the study of Pizzagate, the COVID-19 pandemic and anti-vaccination movements, and they are currently using them to study QAnon.

Collaboratively constructed, fast to form

Actual conspiracies are deliberately hidden: real-life actions of people working together for their own malign purposes. In contrast, conspiracy theories are collaboratively constructed and develop in the open.

Conspiracy theories are deliberately complex and reflect an all-encompassing worldview. Rather than trying to explain one thing, a conspiracy theory tries to explain everything, discovering connections across domains of human interaction that are otherwise hidden, mostly because they don’t exist.

While the popular image of the conspiracy theorist is of a loner piecing together puzzling connections with photographs and red string, that image no longer applies in the age of social media. Conspiracy theorizing has moved online and is now the end product of collective storytelling. The participants work out the parameters of a narrative framework: the people, places and things of a story and their relationships.

The online nature of conspiracy theorizing gives researchers a chance to trace the development of these theories from their origins as a series of often disjointed rumors and story pieces to a comprehensive narrative. For their work, Pizzagate presented an ideal subject.

Pizzagate began to develop in late October 2016, during the run-up to the presidential election. Within a month, it was fully formed, with an entire cast of characters drawn from a series of otherwise unlinked domains: Democratic politics, the private lives of the Podesta brothers, casual family dining and satanic pedophilic trafficking.

The connecting narrative thread among these otherwise disparate domains was a fanciful interpretation of the leaked emails of the Democratic National Committee, dumped by WikiLeaks in the final week of October 2016.

AI narrative analysis

They developed a model, a set of machine learning tools that can identify narratives based on sets of people, places and things and their relationships. The machine learning algorithms process a huge amount of data to determine the categories of things in the data, then identify which categories particular things belong to.

They analyzed 17,498 posts from April 2016 through February 2018 on the Reddit and 4chan forums where Pizzagate was discussed. The model treats every post as a fragment of a hidden story and sets out to uncover the narrative. The software identifies the people, places and things in the posts and determines which are major elements, which are minor elements and how they are all connected.

The model determines the main layers of the narrative (in the case of Pizzagate: Democratic politics, the Podesta brothers, casual dining, satanism and WikiLeaks) and how the layers come together to form the narrative as a whole.
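A narrative framework graph of this kind can be illustrated in miniature. The entities, relationships and the degree-based notion of a "major" element below are illustrative assumptions for the sketch, not the researchers' actual model or data:

```python
from collections import defaultdict

# Toy narrative framework: nodes are people, places and things; edges are
# relationships extracted from posts (illustrative data, not the study's).
relationships = [
    ("WikiLeaks", "Podesta emails"),
    ("Podesta emails", "Democratic politics"),
    ("Podesta emails", "casual dining"),
    ("casual dining", "pizza parlour"),
    ("Podesta emails", "satanism"),
]

# Build an undirected graph as an adjacency map.
graph = defaultdict(set)
for a, b in relationships:
    graph[a].add(b)
    graph[b].add(a)

# Treat well-connected (high-degree) nodes as "major" narrative elements.
degree = {node: len(neighbors) for node, neighbors in graph.items()}
major = sorted(n for n, d in degree.items() if d >= 2)
print(major)  # the hubs that tie the narrative layers together
```

A real pipeline would extract the entities and relationships from free text with natural language processing rather than listing them by hand; the graph structure and the major/minor distinction are the part sketched here.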

To ensure that their methods produced accurate output, they compared the narrative framework graph produced by their model with illustrations published in The New York Times. Their graph aligned with those illustrations and also offered finer levels of detail about the people, places and things and their relationships.

Sturdy truth, fragile fiction

To see whether they could distinguish between a conspiracy theory and an actual conspiracy, they examined Bridgegate, a political payback operation launched by staff members of Republican Governor Chris Christie’s administration against the Democratic mayor of Fort Lee, New Jersey.

As they compared the results of their machine learning system on the two separate collections, two distinguishing features of a conspiracy theory’s narrative framework stood out.

First, while the narrative graph for Bridgegate took from 2013 to 2020 to develop, Pizzagate’s graph was fully formed and stable within a month. Second, Bridgegate’s graph survived having elements removed: New Jersey politics would continue as a single, connected network even if key figures and relationships from the scandal were deleted.

The Pizzagate graph, in contrast, fractured easily into smaller subgraphs. Once they removed the people, places, things and relationships that came directly from the interpretations of the WikiLeaks emails, the graph fell apart into what in reality were the unconnected domains of politics, casual dining, the private lives of the Podestas and the odd world of satanism.
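This robustness test can be sketched as a connected-components check: delete the bridging nodes and see whether the graph stays in one piece. The edge list and node names here are an illustrative toy, not the study's actual datasets:

```python
from collections import defaultdict

def components(edges, removed=frozenset()):
    """Count connected components after deleting the `removed` nodes."""
    nodes = {n for edge in edges for n in edge} - removed
    graph = defaultdict(set)
    for a, b in edges:
        if a not in removed and b not in removed:
            graph[a].add(b)
            graph[b].add(a)
    seen, count = set(), 0
    for start in nodes:
        if start in seen:
            continue
        count += 1
        stack = [start]          # depth-first flood fill from each new node
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(graph[node] - seen)
    return count

# Toy Pizzagate-like structure: separate domains held together only by
# interpretations of the leaked emails (illustrative, not the real graph).
edges = [
    ("emails", "Democratic politics"),
    ("emails", "casual dining"),
    ("emails", "Podestas"),
    ("emails", "satanism"),
    ("Democratic politics", "DNC"),
    ("casual dining", "pizza parlour"),
]

print(components(edges))                      # 1: one connected story
print(components(edges, removed={"emails"}))  # 4: falls apart into domains
```

A conspiracy-theory-like graph depends on a few interpretive hub nodes, so removing them shatters it; a graph built from an actual conspiracy, like Bridgegate, keeps a single large component under the same deletions.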

Early warning system?

Their work raises clear ethical challenges. Their methods, for example, could be used to generate additional posts for a conspiracy theory discussion that fit the narrative framework at the root of the discussion. Similarly, given any set of domains, someone could use the tool to develop an entirely new conspiracy theory.

However, this weaponization of storytelling is already occurring without automated methods, as their study of social media forums makes clear. There is a role for the research community in helping others understand how that weaponization occurs and in developing tools for people and organizations who protect public safety and democratic institutions.

Developing an early warning system that tracks the emergence and alignment of conspiracy theory narratives could alert researchers and authorities to real-world actions people might take based on these narratives.

Perhaps with such a system in place, the arresting officer in the Pizzagate case wouldn’t have been baffled by the gunman’s response when asked why he’d shown up at a pizza parlour armed with an AR-15 rifle.
