Understanding Information Disorder
We assist companies and individuals struggling with disinformation, misinformation, and malinformation campaigns that cause reputational and financial harm. We investigate (using OSINT, social media, and Deep Web/Dark Net research), report our findings and recommendations, and provide strategic advice.
But what are disinformation, misinformation, and malinformation?
- Disinformation: Fabricated or deliberately manipulated audio/visual content. Intentionally created conspiracy theories or rumors. It is motivated by three distinct factors: to make money; to have political influence, either foreign or domestic; or to cause trouble for the sake of it. When disinformation is shared it often turns into misinformation.
- Misinformation: Unintentional mistakes such as inaccurate photo captions, dates, statistics, or translations, or satire taken seriously. The person sharing doesn’t realize that it is false or misleading. Often a piece of disinformation is picked up by someone who doesn’t realize it’s false and shares it with their networks, believing that they are helping. The sharing of misinformation is driven by socio-psychological factors. Online, people perform their identities. They want to feel connected to their “tribe”, whether that means members of the same political party, parents who don’t vaccinate their children, activists concerned about climate change, or those who belong to a certain religion, race or ethnic group.
- Malinformation: Deliberate publication of private information for personal or corporate rather than public interest, such as revenge porn. Deliberate change of context, date or time of genuine content with an intent to cause harm.
Please find below a Reading List, Articles and Reports (2023), and some Resources that can help you combat disinformation, misinformation, and malinformation, gain greater awareness of the media ecosystem, and become a savvy media consumer.
The phenomenon has undermined our trust in electoral systems, in vaccines, in what’s happening in Ukraine, in what happened at the U.S. Capitol on January 6, 2021, and more. In addition to the Verification Handbook For Disinformation And Media Manipulation, which equips journalists with the knowledge to investigate social media accounts, bots, private messaging apps, information operations, deep fakes, and other forms of disinformation and media manipulation, please find below 25 books on its history, techniques and effects:
- Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity / By Sander van der Linden (2023)
- Cheap Speech: How Disinformation Poisons Our Politics — and How to Cure It / By Richard L. Hasen (2022)
- The Revenge of Power: How Autocrats Are Reinventing Politics for the 21st Century / By Moisés Naím (2022)
- This Is Not Propaganda: Adventures in the War Against Reality / By Peter Pomerantsev (2020)
- The Reality Game: How the Next Wave of Technology Will Break the Truth / By Samuel Woolley (2020)
- I Am a Troll: Inside the Secret World of the BJP’s Digital Army / By Swati Chaturvedi (2020)
- Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations and Political Operatives / By Philip N Howard (2020)
- Active Measures: The Secret History of Disinformation and Political Warfare / By Thomas Rid (2020)
- Disinformation, Misinformation and Fake News in Social Media: Emerging Research Challenges and Opportunities / Edited by Kai Shu, Suhang Wang, Dongwon Lee and Huan Liu (2020)
- Striking Back: Overt and Covert Options to Combat Russian Disinformation / By Thomas Kent (2020)
- Fake News: Understanding Media and Misinformation in the Digital Age / Edited by Melissa Zimdars and Kembrew McLeod (2020)
- Ten Arguments for Deleting Your Social Media Accounts / By Jaron Lanier (2019)
- The Misinformation Age: How False Beliefs Spread / By Cailin O’Connor and James Owen Weatherall (2019)
- Information Wars: How We Lost the Global Battle Against Disinformation and What We Can Do About It / By Richard Stengel (2019)
- LikeWar: The Weaponisation of Social Media / By PW Singer and Emerson T Brooking (2019)
- Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics / By Yochai Benkler, Robert Faris and Hal Roberts (2018)
- Why Learn History (When It’s Already on Your Phone) / By Sam Wineburg (2018)
- Post-Truth / By Lee McIntyre (2017)
- Weaponized Lies: How to Think Critically in the Post-Truth Era / By Daniel J. Levitin (2017)
- Too Big To Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room / By David Weinberger (2014)
- Trust Me, I’m Lying: Confessions of a Media Manipulator / By Ryan Holiday (2012)
- The Filter Bubble: How the Personalized Web Is Changing What We Read and How We Think / By Eli Pariser (2012)
- True Enough: Learning to Live in a Post-Fact Society / By Farhad Manjoo (2008)
- unSpun: Finding Facts in a World of Disinformation / By Brooks Jackson and Kathleen Hall Jamieson (2007)
- How to Lie With Statistics / By Darrell Huff (1993)
ARTICLES AND REPORTS (2023)
USAGM FY24 budget responds to growing threats to information (March 2023)
The President released his Fiscal Year 2024 Budget Request for the U.S. Agency for Global Media (USAGM), outlining $944.0 million in resources to fund critical international broadcasting efforts. This is an increase of $59.3 million or 6.7 percent above the FY 2023 Enacted level. “This new budget was carefully formulated with a global, long-term vision, and in response to growing threats to the freedom of information,” said USAGM CEO Amanda Bennett. “USAGM remains committed to combatting propaganda and information manipulation, and this funding level supports investments that are critical to modernizing USAGM operations and remaining competitive in markets where we are most needed.”
Briefing: EU sanctions on Russia: Overview, impact, challenges (March 2023)
Excerpt: Sanctions, disinformation and divergent global perceptions: One of the ways the Kremlin has fought international sanctions has been to mobilise its disinformation ecosystem, including in diplomatic fora, seeking to convince global audiences that international sanctions imposed on Russia are to blame for the surge in food and fuel prices, besides being unfair. Such messages have been circulated in multiple languages, in Europe, its neighbourhood and beyond, and have been echoed by African and Chinese media, boosting their global reach. Kremlin disinformation has also been spread in relation to the negotiations on the Black Sea Grain Initiative, with false allegations (such as that Ukraine is transporting grain to the EU, to pay the West for weapons supplies), a claim debunked by Ukrainian fact-checkers. The EU and its allies have activated diplomatic and information channels to clarify that international sanctions imposed on Russia target the Kremlin’s ability to finance its military aggression, which remains the main cause of the looming food crisis, and do not affect agricultural products. A number of experts, however, point out that the secondary effect of massive sanctions on global supply chains is still negatively affecting food security in vulnerable countries, and EU-Africa relations are being impacted by the spill-over effects of Russia’s invasion of Ukraine. After over one year, and despite the resounding condemnation of Russia’s aggression in several United Nations resolutions, the war has reshaped global geopolitics, with a growing rift between the Global South and developed economies; albeit fuelled by targeted disinformation, the rift is also grounded in genuinely divergent perceptions of the stakes and priorities ahead.
The Disinformation Game: Finding New Ways To Fight ‘Fake News’ (March 2023)
CREST (the Centre for Research and Evidence on Security Threats) is the UK’s hub for behavioral and social science research into security threats. According to CREST, innovating effective new ways to tackle false information in the media has never been more important, with ‘fake news’ being disseminated globally online at an unprecedented rate. CREST Security Review brings together articles that shine a behavioral and social science lens on innovation and security threats. One of the articles, ‘The Disinformation Game: Finding New Ways To Fight “Fake News”’, is by a postgraduate researcher within the Privacy, Security and Trust research group at the University of East Anglia in Norwich. See also the Disinformation page on the CREST website.
The Office of the Director of National Intelligence’s 2023 Annual Threat Assessment of the Intelligence Community (February 2023)
Excerpt: Trends in Digital Authoritarianism and Malign Influence: Globally, foreign states’ malicious use of digital information and communication technologies will become more pervasive, automated, targeted, and complex during the next few years, further threatening to distort publicly available information and probably will outpace efforts to protect digital freedoms. The exploitation of U.S. citizens’ sensitive data and illegitimate use of technology, including commercial spyware and surveillance technology, probably will continue to threaten U.S. interests. Authoritarian governments usually are the principal offenders of digital repression, but some democratic states have engaged in similar approaches, contributing to democratic backsliding and erosion. Many foreign governments have become adept at the tools of digital repression, employing censorship, misinformation and disinformation, mass surveillance, and invasive spyware to suppress freedom. During the next several years, governments are likely to grow more sophisticated in their use of existing technologies, and learn quickly how to exploit new and more intrusive technologies for repression, particularly automated surveillance and identity resolution techniques.
First EEAS Report on Foreign Information Manipulation and Interference Threats (February 2023)
The first edition of the EEAS report on Foreign Information Manipulation and Interference (FIMI) threats, published on February 7, 2023, is informed by the work of the European External Action Service’s (EEAS) Stratcom division in 2022. Based on a first sample of specific FIMI cases, it outlines how building on shared taxonomies and standards can fuel our collective understanding of the threat and help inform appropriate countermeasures in the short to long term.
From Fake News to Fake Views: New Challenges Posed by ChatGPT-Like AI (January 2023)
“From Fake News to Fake Views: New Challenges Posed by ChatGPT-Like AI” is a January 20, 2023 article in Lawfare. The authors considered the likelihood that generative artificial intelligence – like ChatGPT – will amplify harmful content online, and introduced avenues to mitigate this risk. They also discussed the inherent limitations of neural networks, distinguishing human- from machine-generated content, the role of social media platforms, and more.
Please contact us for articles and reports published in 2018, 2019, 2020, 2021, and 2022 (such as: “Russian propaganda finds a home in Italian media”, “Mykhailo Fedorov, Ukraine’s Vice Prime Minister and Minister of Digital Transformation”, “Drowning in disinformation: how homegrown state-sponsored disinformation threatens EU democracy”, “Russia’s ‘Ministry of Truth'”, “Ukraine uses AI to identify dead Russian soldiers”, “Ukraine at 360/OS: how Russian disinformation is fueling the war”, “Russian Propaganda: words no longer matter”, “The EU Special Committee of Foreign Interference”, “The (shady) reputation management industry”)
Alliance for Securing Democracy at the German Marshall Fund of the United States
The Hamilton 2.0 dashboard, a project of the Alliance for Securing Democracy at the German Marshall Fund of the United States, provides a summary analysis of the narratives and topics promoted by Russian, Chinese, and Iranian government officials and state-funded media on Twitter, YouTube, state-sponsored news websites, and via official diplomatic statements at the United Nations. (Note: there are currently no UN statements or YouTube data for Iran.) The dashboard also includes a small collection of global media Twitter accounts for comparative purposes.
The Centre for Information Resilience (CIR)
CIR is an independent, non-profit social enterprise dedicated to identifying, countering and exposing influence operations. Their Russia-Ukraine Monitor Map is a crowdsourced effort by CIR and the wider open source community to map, document and verify significant incidents during the conflict in Ukraine.
The Australian Strategic Policy Institute (ASPI)
ASPI’s International Cyber Policy Centre (ICPC) released its ‘Understanding Global Disinformation and Information Operations’ website and companion paper. This project, led by ASPI ICPC’s Information Operations and Disinformation team, provides a visual breakdown of publicly available data from state-linked information operations on social media. Data sets from Twitter’s Information Operations Archive were analyzed to see how each state’s willingness, capability and intent has evolved over time. By making these complex data sets available in an accessible form, the project helps broaden meaningful engagement on the challenge of state-actor information operations and disinformation campaigns for policymakers, civil society and the international research community. Policymakers and researchers can now consistently compare the activity, techniques and narratives across each operation, and see what states do differently from each other and how their activities change over time.
The RAND Corporation (RAND)
The rise of the internet and the advent of social media have fundamentally changed the information ecosystem, giving the public direct access to more information than ever before. But it’s often nearly impossible to distinguish accurate information from low-quality or false content. This means that disinformation—false or intentionally misleading information that aims to achieve an economic or political goal—can become rampant, spreading further and faster online than it ever could in another format.
As part of its Countering Truth Decay Initiative, and with support from the Hewlett Foundation, RAND is responding to this urgent problem. Their researchers identified and characterized the universe of online tools developed by nonprofits and civil society organizations to target online disinformation. These tools were created to help information consumers, researchers, and journalists navigate today’s challenging information environment.
Fueled in part by the spread of disinformation, Truth Decay is the term RAND is using to refer to the diminishing role of facts, data, and analysis in political and civil discourse and the policymaking process. Truth Decay is characterized by four trends: increasing disagreement about facts and data, blurring of the line between opinion and fact, increasing relative volume of opinion compared to fact, and declining trust in institutions that used to be looked to as authoritative sources of factual information. Truth Decay poses a threat to democracy, to policymaking, and to the very notion of civic discourse.
See also RAND’s July 2022 Perspective: Artificial Intelligence, Deepfakes, and Disinformation. The purpose of this Perspective is to provide policymakers an overview of the deepfake threat. It first reviews the technology undergirding deepfakes and associated artificial intelligence (AI)–driven technologies that provide the foundation for deepfake videos, voice cloning, deepfake images, and generative text. It highlights the threats deepfakes pose, as well as factors that could mitigate such threats. The paper then reviews the ongoing efforts to detect and counter deepfakes and concludes with an overview of recommendations for policymakers. This Perspective is based on a review of published literature on deepfake- and AI-disinformation technologies. Moreover, leading experts in the disinformation field contributed valuable insights that helped shape the work.
The European Association for Viewers Interests (EAVI)
The European Association for Viewers Interests is an international non-profit organization registered in Brussels which advocates media literacy and full citizenship. EAVI supports the adoption of initiatives that enable citizens to read, write and participate in public life through the media. They work in Europe (and beyond) to empower individuals to be active, engaged citizens in today’s increasingly challenging media environment. EAVI has been collecting resources for teaching and learning, including films and videos, lesson plans, academic studies, articles, infographics, statistics and lists. Their “Beyond Fake News” infographic identifies the 10 types of potentially misleading news.
Media Bias/Fact Check (MBFC)
MBFC, founded in 2015, is an independent online media outlet. MBFC is dedicated to educating the public on media bias and deceptive news practices; its aim is to inspire action and a rejection of overtly biased media. MBFC wants to return to an era of straightforward news reporting and follows a strict methodology for determining the biases of sources. MBFC also provides occasional fact checks, original articles on media bias, and breaking/important news stories, especially as they relate to U.S. politics.
A questionable source exhibits one or more of the following: extreme bias, consistent promotion of propaganda/conspiracies, poor or no sourcing to credible information, a complete lack of transparency, and/or is fake news. Fake news is the deliberate attempt to publish hoaxes and/or disinformation for the purpose of profit or influence. Sources listed in the Questionable Category may be very untrustworthy and should be fact-checked on a per-article basis. Please note, sources on this list are not considered fake news unless specifically stated in the reasoning section for that source.
Cambridge Strategic Interest Group
A new Strategic Interest Group has been formed as part of the Trust & Technology Initiative. The Disinformation and Media Literacy research is carried out by scholars and practitioners committed to exploring and advancing proactive and creative interventions against disinformation and ‘fake news’. The Disinformation and Media Literacy cluster has emerged out of work and outreach conducted by the Cambridge Ukrainian Studies Centre, founded by Dr Finnin, and the Cambridge Social Decision-Making Lab, directed by Dr van der Linden. Drawing together experts across a range of disciplines, it seeks to bring cutting-edge research to bear on contemporary policy discussions and to foster closer collaboration in the sphere of information literacy between scholars, policymakers and stakeholders in journalism, education, business and diplomacy.
An example of this interdisciplinary collaboration is the work of the Dutch media collective DROG and Jon Roozenbeek in developing a popular online ‘fake news’ game (‘Bad News’), which seeks to ‘inoculate’ players against online disinformation. The cluster is keen to grow an active network of researchers from across social science, humanities, and technical disciplines.
The United States Agency for Global Media (USAGM)
The United States Agency for Global Media is an independent federal agency overseeing public service media networks that provide unbiased news and information in countries where the press is restricted. USAGM entities include the Voice of America, Radio Free Europe/Radio Liberty, the Office of Cuba Broadcasting (Radio and TV Martí), Radio Free Asia, the Middle East Broadcasting Networks (Alhurra TV and Radio Sawa) and the Open Technology Fund. USAGM programming has a measured audience of 354 million in more than 100 countries and in 62 languages. USAGM continues to reach large audiences in countries of importance to U.S. national security and foreign affairs interests, including Russia (7.9 million), China (65.4 million), and Iran (12.2 million). Audience growth also occurred in several key markets – including Turkey (up 287 percent since the agency’s last survey there), Burma (up 132 percent) and Vietnam (up 241 percent) – as well as in previously unsurveyed markets, including India (29.4 million) and the Philippines (5.0 million).
The measured weekly audience for USAGM programming grew by 11 percent to reach an unprecedented 394 million people in fiscal year 2021, according to the agency’s Performance and Accountability Report recently submitted to Congress.
Reboot Foundation
Fake news is a problem that threatens the very roots of modern democracy. Yet there’s no easy or simple solution, according to the Reboot Foundation, an organization promoting critical thinking skills in young people and parents. There are far too many incentives to spread false information. Nations often promote fake news as part of disinformation campaigns to further political agendas. Profiteers generate fake news material and distribute it widely for advertising revenues. The design of online social media platforms enables — and often encourages — the widespread distribution of fake news. The result is significant amounts of disinformation cutting across the world’s media landscape. Research on the impact of fake news is sparse. Although research on propaganda goes back decades, modern fake news is typically spread through online social networks and comes more clearly under the guise of “news,” making it different from state propaganda. More about fighting fake news can be found here.
NewsGuard
NewsGuard’s white paper, “Fighting Misinformation with Journalism, not Algorithms,” outlines independent research on the effect of using human-curated news reliability ratings to mitigate false news, some of which has been conducted by leading academic institutions and other top scholars using NewsGuard’s Reliability Ratings dataset. As researchers, media experts, and policy analysts redouble their efforts to assess the impact of misinformation and test effective ways to counter its spread online, NewsGuard’s human-powered ratings of the news and information sources that account for 95% of engagement online have become an essential benchmark. Since NewsGuard’s founding in 2018, researchers at 25 academic, nonprofit, and corporate institutions have licensed NewsGuard’s data for their work. To read the results of their work and those of others, click here.
NewsGuard is a tool that shows you trust ratings for more than 7,000 news and information websites as you browse the internet. It is installed as a browser extension that works within your internet browser as you search for news and information online. NewsGuard ratings are also available as an app on mobile devices and are integrated directly into products such as the Microsoft Edge mobile app through licensing partnerships. For businesses, NewsGuard offers the BrandGuard tool, which helps advertisers avoid having their advertisements appear on misinformation and hoax sites, and Misinformation Fingerprints, which enables artificial intelligence tools to find all examples of a hoax anywhere on the internet. NewsGuard was created by a team of journalists who assess the credibility and transparency of news and information websites based on nine journalistic criteria, such as whether the site repeatedly publishes false content, whether it discloses who owns and finances it, and whether it corrects factual errors when they occur. Based on the nine basic, apolitical criteria, each site gets an overall rating of green (generally trustworthy) or red (generally not trustworthy), a trust score of 0-100 points, and a detailed “Nutrition Label” explaining who is behind the site, why it received its rating and score, and specific examples of any trust issues its team found.
NewsGuard’s Misinformation Monitor of March 2022: WarTok: TikTok is feeding war disinformation to new users within minutes — even if they don’t search for Ukraine-related content
First Draft
At First Draft, their mission is to protect communities from harmful misinformation. They work to empower society with the knowledge (thinking), understanding (training), and tools (tackling and tracking) needed to outsmart false and misleading information.
London School of Economics
With the explosion of the internet and social media, it has become incredibly easy to disseminate unfounded rumors, “fake news” and other questionable information. In “Fighting Epistemic Pollution”, the authors write that this genuine epistemic pollution, which undermines democracy and the trust necessary for the proper conduct of business, should be taken into account just like attacks on the environment and on the social climate. The notion of corporate social responsibility could thus be extended by introducing a specific notion of epistemic responsibility for companies.