Disinformation, Misinformation, and Malinformation

Please find below:

  • a bipartisan report by the Senate Intelligence Committee;
  • an op-ed on how Italy has become a haven for Russian disinformation and propaganda;
  • a profile of Mykhailo Fedorov, Ukraine’s Minister of Digital Transformation;
  • a reading list;
  • an introduction to the Hamilton 2.0 Dashboard, a project of the Alliance for Securing Democracy at the German Marshall Fund of the United States;
  • a link to “Drowning in Disinformation: How homegrown state-sponsored disinformation threatens EU democracy”, from the Brussels-based Heinrich Böll Stiftung;
  • an introduction to the EU Special Committee on Foreign Interference;
  • some information about Russia’s “Ministry of Truth”;
  • an article about the shady “reputation management industry”; and
  • resources to help you combat the challenge of disinformation, misinformation, and malinformation, gain greater awareness of the media ecosystem, and become a savvy information media consumer.

Disinformation: Fabricated or deliberately manipulated audio/visual content, or intentionally created conspiracy theories or rumors. It is motivated by three distinct factors: to make money; to gain political influence, either foreign or domestic; or to cause trouble for its own sake. When disinformation is shared, it often turns into misinformation.

Misinformation: Unintentional mistakes such as inaccurate photo captions, dates, statistics, or translations, or satire taken seriously. The person sharing doesn’t realize that it is false or misleading. Often a piece of disinformation is picked up by someone who doesn’t realize it’s false and shares it with their networks, believing that they are helping. The sharing of misinformation is driven by socio-psychological factors. Online, people perform their identities: they want to feel connected to their “tribe”, whether that means members of the same political party, parents who don’t vaccinate their children, activists concerned about climate change, or those who belong to a certain religion, race, or ethnic group.

Malinformation: Deliberate publication of private information for personal or corporate rather than public interest, such as revenge porn, or deliberate change of the context, date, or time of genuine content with an intent to cause harm.



U.S. counterintelligence efforts are failing to keep pace with espionage, hacking and disinformation threats, according to a Senate report released on September 20, 2022. The bipartisan report by the Senate Intelligence Committee says that U.S. spy agencies are poorly equipped to combat threats from major powers such as China, transnational criminal organizations and ideologically motivated groups. These varied groups target not just U.S. national security agencies, but also other government departments, the private sector and academia in search of secret or sensitive data. The report, which is partially redacted, focuses on the little-known National Counterintelligence and Security Center, whose mission is to lead counterintelligence across the U.S. government. The center doesn’t have sufficient funding or authority, nor a clear mission, the Senate report says. 



Italy has become a haven for Russian disinformation and propaganda since the invasion of Ukraine, according to a September 14, 2022 op-ed published by the American Enterprise Institute (AEI). The author notes that despite bans on the Russian channels RT and Sputnik, Russia’s propagandists have made a home for themselves in Italian media and significantly influenced public opinion in that country. Twitter, likewise, has proven to be an amplifying force for pro-Russian propaganda in Italy. Even well-known apologists for the Kremlin serve in the European Parliament. With elections slated for September 25, the ground in Italy is clearly fertile for an even greater swing toward Moscow.



Since Russia invaded his country, Ukrainian president Volodymyr Zelensky has used social media to speak directly to his citizens and to those abroad, using the small screen to forge connection and intimacy, building sympathy and solidarity. It’s the same approach he took to campaigning for the presidency in 2019 — instead of press conferences and rallies, Zelensky tapped Instagram and Facebook. The mastermind behind Zelensky’s social media approach was Mykhailo Fedorov, a young digital marketing expert who is now Ukraine’s minister of digital transformation, tasked with using technology to unblock Ukraine’s famously inefficient post-Soviet bureaucracy. With his typical duties upended by the war, he’s become Ukraine’s ambassador to the tech community, pressuring big tech to cut Russia off, convincing Elon Musk to help keep the country online, and rallying volunteers for cyber defense.
Rest of World set out to find 100 of the most influential, innovative, and trailblazing personalities in fintech, e-commerce, policy, digital infrastructure, and a range of other sectors that intersect with and influence technology for their inaugural RoW100: Global Tech’s Changemakers. Mykhailo Fedorov is one of them.



Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics
By Yochai Benkler, Robert Faris and Hal Roberts 

The Reality Game
By Samuel Woolley 

Information Wars: How We Lost the Global Battle Against Disinformation and What We Can Do About It
By Richard Stengel 

This Is Not Propaganda: Adventures in the War Against Reality
By Peter Pomerantsev

I Am a Troll: Inside the Secret World of the BJP’s Digital Army
By Swati Chaturvedi

Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations and Political Operatives
By Philip N Howard 

Active Measures: The Secret History of Disinformation and Political Warfare
By Thomas Rid

Disinformation, Misinformation and Fake News in Social Media: Emerging Research Challenges and Opportunities
By Kai Shu, Suhang Wang, Dongwon Lee, and Huan Liu

Verification Handbook for Disinformation and Media Manipulation
Various contributors. Available for free.

LikeWar: The Weaponisation of Social Media
By PW Singer and Emerson T Brooking

Striking Back: Overt and Covert Options to Combat Russian Disinformation
By Thomas Kent 



The Hamilton 2.0 dashboard, a project of the Alliance for Securing Democracy at the German Marshall Fund of the United States, provides a summary analysis of the narratives and topics promoted by Russian, Chinese, and Iranian government officials and state-funded media on Twitter, YouTube, state-sponsored news websites, and via official diplomatic statements at the United Nations. (NOTE — there currently are no UN statements or YouTube data for Iran). The dashboard also includes a small collection of global media Twitter accounts for comparative purposes. 


How homegrown state-sponsored disinformation threatens EU democracy

According to Heinrich Böll Stiftung’s dossier, disinformation can potentially reach a vast audience, largely due to social networks’ recommendation algorithms. Discussions of disinformation in the EU often focus on external state actors such as Russia and China, or on conspiracy theorists and hate groups. Domestic sources of disinformation, such as political parties and national governments, are often overlooked, however. Increasingly centralised state-media empires threaten European media pluralism, as do attacks against independent media. A resilient democracy needs a free flow of information and multiple competing narratives, so it is imperative to address trends contrary to this principle.
With a mosaic of insights from civil society actors in different EU Member States, we will shed light on various facets of the phenomenon of state-sponsored disinformation emanating from within the EU itself. Our goal is to encourage debate and offer concrete ideas for addressing the problem, particularly considering the updated EU Code of Practice on Disinformation and the Digital Services Act. Disinformation has the power to promote negative narratives about certain groups and civil society in general. It discredits the work of NGOs and academia, which are frequent targets of online attacks and campaigns that spread false information.



Russia’s “Ministry of Truth” / The Russian government has seen fake news and rumors as an imminent threat to be eliminated by the State for much of its history. From 1935 to 1953, Soviet citizens were severely punished for spreading rumors; only after Stalin’s death did those prosecutions mostly stop. Now, almost 70 years later, this practice has been reborn in a different way. At the beginning of the COVID-19 pandemic, Russian authorities started to use two new laws against people who spread fake news about the epidemic. But who was punished, and why does one type of rumor become a matter of investigation while others, like those about anti-vaccination, do not? George F. Kennan scholar Alexandra Arkhipova examined this selective policy against misinformation in her lecture (Wilson Center event on January 24, 2022: “A ‘Ministry of Truth’ in 2021: Fighting Fake News the Old-Fashioned Way”).



SOURCE: CSET (the Center for Security and Emerging Technology) / APRIL 21, 2022

Ukrainian officials are reportedly using facial recognition technology to identify dead Russian soldiers. As we covered last month, Clearview AI — the U.S. facial recognition company that has cultivated a vast database of more than 10 billion facial images, many of them scraped from social media sites — offered its tool to the Ukrainian government, free of charge. More than 340 Ukrainian officials across five government agencies have reportedly used Clearview’s tool to run more than 8,600 facial recognition searches. According to The Washington Post’s Drew Harwell, those searches have been used to identify more than 580 dead Russian soldiers, enabling volunteers in the Ukrainian “IT Army” to notify the families of the deceased and, Ukrainian officials say, dispel the Russian narrative of a limited “special operation.” The practice has unnerved some observers — London-based researcher Stephanie Hare told Harwell that contacting soldiers’ families was “classic psychological warfare.” That would track with AI’s use throughout the war — concerns abound about potential “killer robots,” but AI has proven more potent as a psychological tool.



At the start of the Digital Forensic Research Lab’s (DFRLab) 2022 360/Open Summit conference in Brussels on June 6 and 7, members of the DFRLab previewed an upcoming research report on Ukraine that involved reviewing more than three thousand fact checks and five hundred pro-Kremlin Telegram channels to gain a deeper understanding of how the Russian information space operates. DFRLab Research Associate Roman Osadchuk spoke about the “avalanche of disinformation” coming from Russia that seeks to dehumanize Ukrainians. Alongside Russia’s invasion of Ukraine, a torrent of false narratives was used to justify the war, including falsely equating Ukrainians with Nazis, accusing Ukraine of having “biolabs” capable of building “dirty nuclear bombs,” and blaming the United States and NATO for provoking the war. More in the Atlantic Council’s 360/OS coverage (June 7, 2022).



Pro-Kremlin narratives constantly contradict each other, but most of their consumers don’t care, according to CEPA. The Center for European Policy Analysis is based in Washington, DC, and its mission is to ensure a strong and enduring transatlantic alliance rooted in democratic values and principles with strategic vision, foresight, and policy impact. Through cutting-edge research, analysis, and programs, it provides fresh insight on energy, security, and defense to government officials and agencies.

Poster below: “The defence of the motherland is a sacred duty of every citizen of the USSR.” This Soviet poster, published in 1961 and designed by R. Dementiev, shows an officer passing a Kalashnikov to a younger soldier. Source: CEPA



An article, about a state-sponsored, Covid-19-related influence campaign in Canada, directed by the Russian government to increase sales of Russian vaccines, was published in The Strand on January 19, 2022. The Strand has been the newspaper of record for Victoria University since 1953. It is published 12 times a year with a circulation of 1200 and is distributed in Victoria University buildings and across the University of Toronto’s St. George campus. The Strand flagrantly enjoys its editorial autonomy and is committed to acting as an agent of constructive social change.


War has many forms; it is not manifested only by the presence of enemy troops on foreign territory. This conflict is also taking place in Slovakia, in public space and on social networks, where the public has been massively and systematically massaged by pro-Russian propaganda. A Senior Research Fellow at the Centre for Democracy & Resilience writes (in Slovak) about what Russian aggression really means in Slovakia. Within the Democracy & Resilience Programme, the author analyses malign efforts to undermine democratic societies, as well as the strategies, tools and actors involved in influence operations. She has led research projects focusing on the impact of disinformation campaigns on electoral processes in Europe, public opinion poll surveys and societal vulnerabilities to information manipulation. Among her further interests are strategic communication and the regulation of digital platforms. She is a review board member of the konspiratori.sk project (a public database of websites that provide dubious, deceptive, fraudulent, conspiratorial or propaganda content, which advocates for a more transparent information environment by defunding disinformation sites).


Russian state-funded and state-directed media outlets RT and Sputnik are critical elements in Russia’s disinformation and propaganda ecosystem. In an August 2020 report, the Global Engagement Center outlined the five pillars of Russia’s disinformation and propaganda ecosystem. RT (the Kremlin-funded international broadcasting network Russia Today) and Sputnik (the main foreign-facing project of Rossiya Segodnya, an international news agency created in late 2013 by a presidential executive order to restructure Russian state media) are key state-funded and directed global messengers within this ecosystem, using the guise of conventional international media outlets to provide disinformation and propaganda support for the Kremlin’s foreign policy objectives. RT and Sputnik also interact with other pillars of the ecosystem by amplifying content from Kremlin and Kremlin-aligned proxy sites (some of which are connected to Russian intelligence), weaponizing social media, and promoting cyber-enabled disinformation.
RT and Sputnik’s role as disinformation and propaganda outlets is most obvious when they report on issues of political importance to the Kremlin. A prevalent example is Russia’s use of RT and Sputnik to attempt to change public opinions about Ukraine in Europe, the United States, and as far away as Latin America. When factual reporting on major foreign policy priorities is not favorable, Russia uses state-funded international media outlets to inject pro-Kremlin disinformation and propaganda into the information environment.
The US Department of State’s Global Engagement Center published a Special Report in January 2022: Kremlin-Funded Media: RT and Sputnik’s Role in Russia’s Disinformation and Propaganda Ecosystem.


Russian authorities passed new restrictive legislation and blocked local and international media outlets and social media platforms in order to censor information about the war. The move plunges Russia into “an information dark age,” said CPJ Executive Director Robert Mahoney. A local journalist told Mahoney, “The Russian media is dead.” Watch Brian Stelter’s interview with Mahoney on CNN’s Reliable Sources show here. CPJ spoke to journalists covering the war about the situation in both Ukraine and Russia. Sevgil Musaieva, chief editor of the independent Ukrainian news site Ukrayinska Pravda, spoke to CPJ about the risks she and her team in Ukraine face covering the invasion; Dozhd TV editor Tikhon Dzyadko shared why he fled Russia and shut his broadcaster down; and Russian journalist Irina Borogan spoke to CPJ from exile about Moscow’s censorship of the Ukraine invasion. At least 150 Russian journalists are believed to have left the country in the face of Moscow’s crackdown on independent Russian media.


The fast-paced online coverage of the Russian invasion of Ukraine on Wednesday followed a pattern that’s become familiar in other recent crises around the world. Photos, videos, and posts are shared across platforms much faster than they can be verified. The result is that falsehoods are mistaken for truth and amplified, even by well-intentioned people. This can help bad actors to terrorize innocent civilians or advance disturbing ideologies, causing real harm. Already, bad information about the Russian invasion has found large audiences on platforms fundamentally designed to promote content that gets engagement.
What can you do to avoid becoming part of the problem? Abby Ohlheiser, a senior editor at MIT Technology Review, has some really useful tips.



On January 25, 2022, MEPs finalized 18 months of inquiry by the Special Committee on Foreign Interference (INGE) and adopted its final recommendations. The committee adopted the report with 25 votes in favour, eight against and one abstention. The European public and government officials are “overwhelmingly” unaware of the severity of the threat posed by foreign autocratic regimes, in particular Russia and China, MEPs say in the text. Insufficient defence has made it easier for malicious actors to take over critical infrastructure, carry out cyber-attacks, recruit former senior politicians and propagate polarisation in the public debate. This is exacerbated by loopholes in legislation and insufficient coordination between EU countries.

Counteraction / To counter the threats, INGE members urge the EU to raise public awareness through training for people in sensitive functions and general information campaigns. In addition, the EU should beef up its capabilities and build a sanctions regime against disinformation. Rules on social media platforms, which serve as vehicles for foreign interference, have to be tightened, too.
In addition, the committee recommended the following:

  • support broadly distributed, pluralistic media and fact-checkers;
  • make online platforms invest in language skills to be able to act on illegal and harmful content in all EU languages;
  • treat digital election infrastructure as critical;
  • provide financing alternatives to Chinese foreign direct investment used as geopolitical tool;
  • clarify “highly inappropriate” relations between certain European political parties and Russia;
  • ban foreign funding of European political parties;
  • urgently improve cybersecurity, classify and register surveillance software such as Pegasus as illegal and ban their use; and
  • make it harder for foreign actors to recruit former top politicians too soon after they have left their job.

See also EUvsDisinfo. EUvsDisinfo is the flagship project of the European External Action Service’s East StratCom Task Force. It was established in 2015 to better forecast, address, and respond to the Russian Federation’s ongoing disinformation campaigns affecting the European Union, its Member States, and countries in the shared neighbourhood. EUvsDisinfo’s core objective is to increase public awareness and understanding of the Kremlin’s disinformation operations, and to help citizens in Europe and beyond develop resistance to digital information and media manipulation.



Illustration by Daniel Zender for Rest of the World

An excellent article (“Exposed documents reveal how the powerful clean up their digital past using a reputation laundering firm”), published by Rest of World, an international nonprofit journalism organization, about how the wealthy scrub their past from the internet, sheds light on the “reputation management industry”, revealing how Eliminalia and companies like it use spurious copyright claims and fake legal notices to remove and obscure articles linking clients to allegations of tax avoidance, corruption, and drug trafficking.

One of these fake emails was sent to KPN and XS4ALL in the Netherlands in February 2021. The mail was sent to their abuse departments from the domain abuse-report{.}eu, claiming to be from the Legal Department of the Brussels EU Commission:

The mail was sent by “Raul Soto” and included the address of a Regus office in Brussels. The office name is “Regus Brussels EU Commission”, as it happens to be in front of one of the EU Commission’s buildings (sneaky!). The most interesting part of this e-mail is the information that can be extracted from the mail headers, which include the path that the mail took from sender to receiver. The headers show that the fake mail was sent from a Ukrainian IP address, 62.244.51{.}52, using a server from OVH in France.
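To make the header analysis concrete, Received headers can be read programmatically to recover the originating relay. This is a minimal sketch using a fabricated sample message: the sender address and IPs below are documentation-range placeholders, not the actual Eliminalia mail.

```python
import email.parser
import re

# Fabricated sample message for illustration; the IPs are from documentation
# ranges. Each relay prepends its own "Received" header, so the chain is read
# bottom-up to find the hop closest to the real sender.
RAW_MESSAGE = """\
Received: from mx.example.net (mx.example.net [198.51.100.7])
\tby mail.example.org; Mon, 1 Feb 2021 10:00:00 +0000
Received: from sender-host (unknown [203.0.113.5])
\tby mx.example.net; Mon, 1 Feb 2021 09:59:58 +0000
From: "Raul Soto" <legal@abuse-report.example>
Subject: Copyright complaint

Please remove the listed URLs.
"""

IP_RE = re.compile(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]")

def originating_ip(raw):
    """Return the IP found in the earliest (last-listed) Received header."""
    msg = email.parser.Parser().parsestr(raw)
    for hop in reversed(msg.get_all("Received") or []):
        match = IP_RE.search(hop)
        if match:
            return match.group(1)
    return None

print(originating_ip(RAW_MESSAGE))  # prints 203.0.113.5
```

A caveat worth noting: a sender can forge the earlier Received headers, so only the hops added by servers you trust are reliable evidence.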
Related articles are here, here, and here, and the link to the Lumen database (which collects and analyzes legal complaints and requests for removal of online materials, helping Internet users to know their rights and understand the law) is here.



The Centre for Information Resilience (CIR)
CIR is an independent, non-profit social enterprise dedicated to identifying, countering and exposing influence operations. Their Russia-Ukraine Monitor Map is a crowdsourced effort by CIR and the wider open source community to map, document and verify significant incidents during the conflict in Ukraine.

The Australian Strategic Policy Institute (ASPI)
The Australian Strategic Policy Institute’s International Cyber Policy Centre (ICPC) released its ‘Understanding Global Disinformation and Information Operations’ website and companion paper. This project, led by ASPI ICPC’s Information Operations and Disinformation team, provides a visual breakdown of publicly available data from state-linked information operations on social media. Data sets from Twitter’s Information Operations Archive were analyzed to see how each state’s willingness, capability and intent have evolved over time. By making these complex data sets available in an accessible form, the project helps broaden meaningful engagement with the challenge of state-actor information operations and disinformation campaigns for policymakers, civil society and the international research community. Policymakers and researchers can now consistently compare the activity, techniques and narratives across operations, and see what states do differently from each other and how their activities change over time.
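A first pass over data sets like these can be sketched in a few lines. The field names and the tiny in-memory sample below are assumptions for illustration; the real Twitter Information Operations Archive ships as large CSV releases whose schemas vary.

```python
import re
from collections import Counter

# Hypothetical miniature stand-in for rows from a state-linked
# information-operations data set; the field names are assumptions.
sample_tweets = [
    {"state": "A", "tweet_text": "Elections are rigged #fraud"},
    {"state": "A", "tweet_text": "Do not trust the media #fraud #fake"},
    {"state": "B", "tweet_text": "Great progress on the economy #growth"},
]

HASHTAG_RE = re.compile(r"#(\w+)")

def hashtags_by_state(rows):
    """Count hashtag usage per attributed state, a first step toward
    comparing narratives across operations."""
    counts = {}
    for row in rows:
        tags = HASHTAG_RE.findall(row["tweet_text"].lower())
        counts.setdefault(row["state"], Counter()).update(tags)
    return counts

result = hashtags_by_state(sample_tweets)
print(result["A"].most_common())  # prints [('fraud', 2), ('fake', 1)]
```

Aggregations like this, run per release and per state actor, are what make the cross-operation comparisons described above possible.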

The RAND Corporation (RAND)
The rise of the internet and the advent of social media have fundamentally changed the information ecosystem, giving the public direct access to more information than ever before. But it’s often nearly impossible to distinguish accurate information from low-quality or false content. This means that disinformation—false or intentionally misleading information that aims to achieve an economic or political goal—can become rampant, spreading further and faster online than it ever could in another format.
As part of its Countering Truth Decay Initiative, and with support from the Hewlett Foundation, RAND is responding to this urgent problem. Their researchers identified and characterized the universe of online tools developed by nonprofits and civil society organizations to target online disinformation. These tools were created to help information consumers, researchers, and journalists navigate today’s challenging information environment.
Fueled in part by the spread of disinformation, Truth Decay is the term RAND is using to refer to the diminishing role of facts, data, and analysis in political and civil discourse and the policymaking process. Truth Decay is characterized by four trends: increasing disagreement about facts and data, blurring of the line between opinion and fact, increasing relative volume of opinion compared to fact, and declining trust in institutions that used to be looked to as authoritative sources of factual information. Truth Decay poses a threat to democracy, to policymaking, and to the very notion of civic discourse.
See also RAND’s July 2022 Perspective:
Artificial Intelligence, Deepfakes, and Disinformation. The purpose of this Perspective is to provide policymakers an overview of the deepfake threat. It first reviews the technology undergirding deepfakes and associated artificial intelligence (AI)–driven technologies that provide the foundation for deepfake videos, voice cloning, deepfake images, and generative text. It highlights the threats deepfakes pose, as well as factors that could mitigate such threats. The paper then reviews the ongoing efforts to detect and counter deepfakes and concludes with an overview of recommendations for policymakers. This Perspective is based on a review of published literature on deepfake- and AI-disinformation technologies. Moreover, leading experts in the disinformation field contributed valuable insights that helped shape the work.

The European Association for Viewers Interests (EAVI)
The European Association for Viewers Interests is an international non-profit organization registered in Brussels which advocates media literacy and full citizenship. EAVI supports the adoption of initiatives that enable citizens to read, write and participate in public life through the media. They work in Europe (and beyond) to empower individuals to be active, engaged citizens in today’s increasingly challenging media environment. EAVI has been collecting resources for teaching and learning, including films and videos, lesson plans, academic studies, articles, infographics, statistics and lists. Their “Beyond Fake News” infographic identifies the 10 types of potentially misleading news.

Media Bias/Fact Check (MBFC)
MBFC, founded in 2015, is an independent online media outlet dedicated to educating the public on media bias and deceptive news practices. Its aim is to inspire action and a rejection of overtly biased media and a return to an era of straightforward news reporting, and it follows a strict methodology for determining the biases of sources. MBFC also provides occasional fact checks, original articles on media bias, and breaking/important news stories, especially as they relate to U.S. politics.
A questionable source exhibits one or more of the following: extreme bias, consistent promotion of propaganda/conspiracies, poor or no sourcing to credible information, a complete lack of transparency and/or is fake news. Fake News is the deliberate attempt to publish hoaxes and/or disinformation for the purpose of profit or influence. Sources listed in the Questionable Category may be very untrustworthy and should be fact checked on a per article basis. Please note, sources on this list are not considered fake news unless specifically written in the reasoning section for that source.

Cambridge Strategic Interest Group
A new Strategic Interest Group has been formed as part of the Trust & Technology Initiative. Its Disinformation and Media Literacy research is being carried out by scholars and practitioners committed to exploring and advancing proactive and creative interventions against disinformation and ‘fake news’. The Disinformation and Media Literacy cluster has emerged out of work and outreach conducted by the Cambridge Ukrainian Studies Centre, founded by Dr Finnin, and the Cambridge Social Decision-Making Lab, directed by Dr van der Linden. Drawing together experts from a range of disciplines, it seeks to bring cutting-edge research to bear on contemporary policy discussions and to foster closer collaboration in the sphere of information literacy between scholars, policymakers and stakeholders in journalism, education, business and diplomacy.
An example of the interdisciplinary collaboration has been the work of the Dutch media collective DROG and Jon Roozenbeek in the development of a popular online ‘fake news’ game (‘Bad News’), which seeks to ‘inoculate’ players against online disinformation. The cluster is keen to grow an active network of researchers from across social science, humanities, and technical disciplines.

The United States Agency for Global Media (USAGM)
The United States Agency for Global Media is an independent federal agency overseeing public service media networks that provide unbiased news and information in countries where the press is restricted. USAGM entities include the Voice of America, Radio Free Europe/Radio Liberty, the Office of Cuba Broadcasting (Radio and TV Marti), Radio Free Asia, the Middle East Broadcasting Networks (Alhurra TV and Radio Sawa) and the Open Technology Fund. USAGM programming has a measured audience of 354 million in more than 100 countries and in 62 languages. USAGM continued to reach large audiences in countries of importance to U.S. national security and foreign affairs interests, including Russia (7.9 million), China (65.4 million), and Iran (12.2 million). Audience growth also occurred in several key markets – including Turkey (up 287 percent since the agency’s last survey there), Burma (up 132 percent since the last survey), and Vietnam (up 241 percent since the last survey) – as well as in previously unsurveyed markets, including India (29.4 million) and the Philippines (5.0 million).
The measured weekly audience for USAGM programming grew by 11 percent to reach an unprecedented 394 million people in fiscal year 2021, according to the agency’s Performance and Accountability Report recently submitted to Congress.

Reboot Foundation
Fake news is a problem that threatens the very roots of modern democracy. Yet there’s no easy or simple solution, according to the Reboot Foundation, an organization promoting critical thinking skills in young people and parents. There are far too many incentives to spread false information. Nations often promote fake news as part of disinformation campaigns to further political agendas. Profiteers generate fake news material and distribute it widely for advertising revenues. The design of online social media platforms enables — and often encourages — the widespread distribution of fake news. The result is significant amounts of disinformation cutting across the world’s media landscape. Research on the impact of fake news is sparse. Although research on propaganda goes back decades, modern fake news is typically spread through online social networks and comes more clearly under the guise of “news,” making it different from state propaganda. More about fighting fake news can be found here.

NewsGuard
NewsGuard’s white paper, “Fighting Misinformation with Journalism, not Algorithms,” outlines independent research on the effect of using human-curated news reliability ratings to mitigate false news, some of which has been conducted by leading academic institutions and other top scholars using NewsGuard’s Reliability Ratings dataset. As researchers, media experts, and policy analysts redouble their efforts to assess the impact of misinformation and test effective ways to counter the spread of misinformation online, NewsGuard’s human-powered ratings of all the news and information sources that account for 95% of engagement online have become an essential benchmark. Since NewsGuard’s founding in 2018, researchers at 25 academic, nonprofit, and corporate institutions have licensed NewsGuard’s data for their work. To read the results of their work and those of others, click here.
NewsGuard is a tool that shows you trust ratings for more than 7,000 news and information websites as you browse the internet. It is installed as a browser extension that works within your internet browser as you search for news and information online. NewsGuard ratings are also available as a mobile app and are integrated directly into products such as the Microsoft Edge mobile app through licensing partnerships. For businesses, NewsGuard offers the BrandGuard tool, which helps advertisers avoid having their advertisements appear on misinformation and hoax sites, and the Misinformation Fingerprints, which enable artificial intelligence tools to find all examples of a hoax anywhere on the internet. NewsGuard was created by a team of journalists who assess the credibility and transparency of news and information websites based on nine journalistic criteria, such as whether a site repeatedly publishes false content, whether it discloses who owns and finances it, and whether it corrects factual errors when they occur. Based on these nine basic, apolitical criteria, each site receives an overall rating of green (generally trustworthy) or red (generally not trustworthy), a trust score of 0-100 points, and a detailed “Nutrition Label” explaining who is behind the site, why it received its rating and score, and specific examples of any trust issues NewsGuard’s team found.
NewsGuard’s Misinformation Monitor of March 2022: “WarTok: TikTok is feeding war disinformation to new users within minutes — even if they don’t search for Ukraine-related content”

First Draft First Draft’s mission is to protect communities from harmful misinformation. The organization works to empower society with the knowledge (thinking), understanding (training), and tools (tackling and tracking) needed to outsmart false and misleading information.

London School of Economics With the explosion of the internet and social media, it has become incredibly easy to disseminate unfounded rumors, “fake news,” and other questionable information. In “Fighting Epistemic Pollution,” the authors argue that this genuine epistemic pollution, which undermines democracy and the trust necessary for the proper conduct of business, should be taken as seriously as attacks on the environment and on the social climate. The notion of corporate social responsibility could thus be complemented by a specific notion of epistemic responsibility for companies.

Aspen Institute On May 5, 2022, the Aspen Tech Policy Hub and the Commission on Information Disorder announced that Alterea, Inc. had been awarded the $75,000 Information Disorder prize. The team will use the prize to develop “Agents of Influence” and make meaningful progress toward ending information disorder. The team members are Anahita Dalmia, Jasper McEvoy, and Alex Walter. “Agents of Influence” is a video game that teaches middle- and high-schoolers to recognize misinformation, think critically, and make more responsible decisions. Through interactive narratives and games that teach counter-misinformation best practices, students save the fictional Virginia Hall High School from the plots of Harbinger, an evil spy organization using misinformation to manipulate the student body.
The award is the culmination of the Information Disorder Prize Competition. The competition, launched in November 2021, asked applicants for unique and innovative projects aimed at combating mis- and disinformation in direct connection with one or more of the 15 newly announced recommendations of the Aspen Institute’s Commission on Information Disorder. The competition concluded with a live pitch event on May 3, which also featured pitches from the semi-finalist teams. You can view a recording of the final event here:

The semi-finalist teams that presented were:

Alterea, Inc.: “Agents of Influence,” a spy-themed media literacy video game for middle- and high-school students. Team members: Anahita Dalmia, Jasper McEvoy, Alex Walter

RadiTube: “RadiTube: Narrative Detection and Analysis for YouTube,” an automatic detection and tracing tool for harmful narratives spread through YouTube videos. Team members: Cameron Ballard, Bernhard Lenger, Erik Van Zummeren

Ranking Digital Rights at New America: “Treating Information Disorder by Making Online Ads Accountable,” a set of sample policies toward making the online ad economy more transparent and accountable. Team members: Nathalie Maréchal, Anna Lee Nabors, Zak Rogoff, Alex Rochefort

Observatory on Social Media at Indiana University: “Detection Tool for Misinformation Superspreaders,” an open-source public tool to identify misinformation superspreaders. Team members: Filippo Menczer, Rachith Aiyappa, John Bryden

On August 18, 2022, the Aspen Tech Policy Hub announced the launch of its fifth Policy 101, a briefing on mis- and disinformation. This Policy 101 provides an overview of what distinguishes misinformation from disinformation, why the spread of such information matters, and what policymakers and platforms are doing to combat its negative effects. The brief is the latest installment of the Hub’s ongoing “Policy 101” series, written by its staff and alumni, which aims to give quick, to-the-point overviews of key tech policy issues of the day. Check out the series here.