September 7, 2021
Written by Jenny Applequist, Coordinated Science Lab
There’s no end to the variety of bizarre rumors circulating about COVID-19 vaccines: recipients’ bodies become magnetized, perhaps, or connected to 5G signals. Many assume that such tales are cooked up by eccentrics, but some of the rumor-mongering has more sinister origins. In August 2021, for example, Facebook uncovered a huge, Russia-based anti-vaccination campaign, in which hundreds of fake accounts were working in coordination to spread the belief that people who received the AstraZeneca vaccine were being tainted by injected chimpanzee tissue.
Such misinformation campaigns have become a worrisome feature of the modern threat landscape, and the Defense Advanced Research Projects Agency (DARPA) has just awarded $5.8 million to a team, led by CSL's Tarek Abdelzaher, that will work to characterize how different foreign populations respond to influence campaigns, as a first step toward developing effective countermeasures. The work is part of DARPA's INfluence Campaign Awareness and Sensemaking (INCAS) program.
Abdelzaher, who is a Sohaib and Sara Abbasi Professor and a Willett Faculty Scholar in Computer Science at the University of Illinois, explains that humanity has a lot of hard-won knowledge of how physical weapons work, and how to employ countermeasures against them. “But today,” he says, “information is weaponized. People create narratives that are divisive. They are intended to polarize, radicalize, whatever... But we don’t understand the impact of that weapon on the population, the way we understand the impact of a bomb or a lightning strike.”
The team will work to develop tools that can, first, detect when an attack is happening in the information space and, second, understand the attack's impact. What is it trying to accomplish, who is its audience, and what influence technique is it using? How will it affect the population, or specific subsets of the population? How will they respond?
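By way of illustration only (the article does not describe the team's actual methods), those two stages can be caricatured in a few lines of code: flag message texts that many distinct accounts post verbatim within a short window, then tally which themes the flagged messages invoke. Every field, threshold, and keyword list below is hypothetical.

```python
# Toy two-stage pipeline mirroring the stated goals: (1) detect coordinated
# posting that may signal an influence campaign, then (2) roughly
# characterize what the flagged messages are pushing. Not the project's
# method; all names and numbers here are invented for illustration.
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account: str
    text: str
    minute: int  # posting time, in minutes since some epoch

def detect_coordination(posts, min_accounts=3, window=10):
    """Flag message texts repeated verbatim by many distinct accounts
    within a short time window, a crude coordination signal."""
    by_text = defaultdict(list)
    for p in posts:
        by_text[p.text].append(p)
    flagged = []
    for text, group in by_text.items():
        accounts = {p.account for p in group}
        span = max(p.minute for p in group) - min(p.minute for p in group)
        if len(accounts) >= min_accounts and span <= window:
            flagged.append(text)
    return flagged

def characterize(flagged_texts):
    """Rough characterization: count which (hypothetical) influence
    themes the flagged messages invoke."""
    themes = {
        "fear": ["tainted", "dangerous", "poison"],
        "division": ["elites", "traitors", "invaders"],
    }
    counts = Counter()
    for text in flagged_texts:
        lowered = text.lower()
        for theme, cues in themes.items():
            if any(cue in lowered for cue in cues):
                counts[theme] += 1
    return counts

if __name__ == "__main__":
    posts = [Post(f"acct{i}", "The vaccine is tainted!", minute=i) for i in range(4)]
    posts.append(Post("acct9", "Lovely weather today.", minute=2))
    flagged = detect_coordination(posts)
    print(flagged)                # ['The vaccine is tainted!']
    print(characterize(flagged))  # Counter({'fear': 1})
```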
Abdelzaher points out that modeling of social systems has challenges beyond those involved in modeling physical systems. “Let’s say you have an autonomous car or a drone, and you want to predict its trajectory. We have theory behind it; we have laws of nature behind it.” But what “laws” can describe social systems?
That’s where the project’s experts on social psychology will step in. They will advise on the psychological rules that govern human behavior and develop answers to questions about how a population characterized by certain psychological traits, or certain moral values, is likely to respond to a particular type of campaign.
Jesse Graham, who is the George S. Eccles Chair in Business Ethics at the University of Utah, is one of those experts. He’s one of the originators of “moral foundations theory,” which describes moral reasoning in terms of the interplay among a small set of foundational values, such as loyalty and fairness. That theory will be one of the starting points for the new project. He explains, “We’re trying to have some sort of taxonomy, or some sort of model, of what are the kinds of moral intuitions people have, [and] why do we have them.”
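To make the taxonomy idea concrete: when applied to text, moral foundations theory is often operationalized by scoring messages against word lists tied to each foundation (real studies use resources such as the Moral Foundations Dictionary and far richer NLP). A minimal sketch, with a mini-lexicon invented purely for illustration:

```python
# Score a message against word lists associated with each moral foundation.
# The five foundations come from moral foundations theory; this tiny lexicon
# is hypothetical and stands in for real resources like the Moral
# Foundations Dictionary.
from collections import Counter
import re

FOUNDATION_LEXICON = {
    "care/harm":            {"protect", "harm", "suffer", "safe"},
    "fairness/cheating":    {"fair", "justice", "cheat", "equal"},
    "loyalty/betrayal":     {"loyal", "betray", "nation", "traitor"},
    "authority/subversion": {"obey", "tradition", "rebel", "law"},
    "sanctity/degradation": {"pure", "tainted", "disgust", "sacred"},
}

def moral_profile(text):
    """Count how often each foundation's cue words appear in the text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    return {foundation: sum(counts[w] for w in cues)
            for foundation, cues in FOUNDATION_LEXICON.items()}

print(moral_profile("Their tainted vaccine will harm the nation."))
# {'care/harm': 1, 'fairness/cheating': 0, 'loyalty/betrayal': 1,
#  'authority/subversion': 0, 'sanctity/degradation': 1}
```

A profile like this is one plausible way to ask which moral intuitions a campaign is targeting, which is the kind of question the theory gives the project a vocabulary for.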
“It feels like [this is] a really important topic, understanding what kinds of influence campaigns are happening out there in the world of social media, and trying to predict how people will respond to them,” Graham says. “It’s a very, very big problem. But I think it’s a tractable one, that we can actually get some useful information out of.”
The new DARPA project is entitled “Analytics of Information Influence: Effect Characterization.” In addition to Abdelzaher and Graham, the team includes faculty members Jiawei Han, an expert in data mining; Heng Ji, an expert in natural language processing; and Hanghang Tong, an AI expert (all of Computer Science, UIUC); Boleslaw Szymanski (Computer Science and Physics, Rensselaer Polytechnic Institute); and Morteza Dehghani (Psychology and Computer Science, the University of Southern California).