Fake news is a huge problem in the modern digital age, with disinformation and misinformation spreading across social media and private chat groups.

In our hyper-connected world, information is created, collected, and streamed faster than ever before. Corporations, governments, and online platforms can collect data on their citizens and customers with ease, thanks to the rise of mandatory mobile apps and the proliferation of internet-of-things (IoT) devices. Bulk data collection enables powerful analytics, providing a wealth of data points for advertising targeting and audience segmentation. A side effect of massive data generation and analytics is the ability to create and spread misinformation and disinformation, and to conduct cyber information warfare, at an unprecedented level.

Unfortunately, not all information generated online is accurate; some can be deliberately weaponized for political gain. Understanding the distinctions between misinformation and disinformation is crucial in navigating the often murky waters of online political discourse. You’re not alone if you’ve wondered what misinformation or disinformation is – or how they’re different.

Before we dive deep into each topic, it’s vital to understand that misinformation and disinformation can come in many forms and from many sources. Not all disinformation originates from nation-states such as Russia, from military units, or from malicious actors with deep financial pockets.

Misinformation, like disinformation, can originate within a country’s own borders, targeting a particular demographic or like-minded community (think hivemind groupthink or mob mentality). Left unchallenged by the group, misinformation spreads virally, creating false panic or a false understanding of a (usually) controversial topic.

What is Misinformation?

Misinformation is false or inaccurate information that is shared, often unwittingly. It could be a news article with factual errors, a misinterpreted statistic, or even a well-meaning but inaccurate post on social media. While the intent behind misinformation may not be malicious, its cumulative effect can be significant, sowing confusion and eroding trust in reliable sources.

Misinformation includes manipulated content (e.g., genuine information distorted into “clickbait”) or misleading content (e.g., opinion presented as fact) that proliferates across online blogs or media.

Fake news most closely aligns with misinformation rather than disinformation, as it presents false or distorted information as news. Fake news usually lacks verifiable or unbiased sources, accurate facts, or quotes.

Misinformation Example: Vitamin C and COVID-19

Over the past several years, we’ve witnessed misinformation spread as individuals, government officials, and global governments circulated false and misleading theories about COVID-19.

In April 2020, social media posts claiming that Vitamin C could “cure COVID-19” spread across Facebook groups, Twitter, and TikTok as many sought treatment for the virus. For context, the United States was approximately a month into lockdowns at the time, and no vaccines yet existed. However, the National Institutes of Health (NIH), the Centers for Disease Control and Prevention (CDC), and the World Health Organization (WHO) all confirmed that while Vitamin C may help support the immune system, there is no scientific evidence that it can “cure” or rid the body of a COVID-19 infection.

While these health organizations set the record straight, it’s unknown how many individuals continued to believe that vitamin C could cure COVID-19. Suppose someone unnecessarily put themselves at risk of catching COVID-19, believing that vitamin C would cure them. In that case, the misinformation has fundamentally changed that person’s behavior and life decisions.

And that is how misinformation can lead to unfortunate outcomes.

The Center for Security and Emerging Technology (CSET) at Georgetown’s Walsh School of Foreign Service hosted an informative AI, Elections, and Disinformation webinar. (source: YouTube)

What is Disinformation?

This is where things take a darker turn. Disinformation goes beyond mere inaccuracy; it’s the deliberate creation and dissemination of false or misleading information to achieve a specific objective. This could involve fabricating stories, manipulating images or videos, or creating fake social media accounts to spread propaganda (usually at scale across a targeted demographic or region). The aim is to sway public opinion, sow discord, and ultimately, manipulate outcomes.

Disinformation is most commonly used to achieve political objectives, distribute propaganda, and advance authoritarian government agendas.

Cyber disinformation is considered a form of cyber information warfare in a military or government context. According to the United Nations, selective, repetitive, and frequent exposure to disinformation and fake news helps shape, reinforce, and confirm what is being communicated as valid for information warfare.

Disinformation in the Russia-Ukraine War

Countries such as Russia have used disinformation for decades to sway public sentiment and gain support, most recently with the ongoing war between Russia and Ukraine. The U.S. Department of State released a report in 2020 outlining five pillars of Russia’s disinformation and propaganda ecosystem. The five pillars include:

  1. Official Russian government communications
  2. State-funded global messaging
  3. Cultivation of proxy sources
  4. Weaponization of social media
  5. Cyber-enabled disinformation

While the report was released before the start of the Ukraine-Russia War, its tactics, techniques, and procedures align with what experts have observed from Russia since.

Early in the war, footage from a video game was falsely presented as video of real combat between Russia and Ukraine and drew millions of views. A video posted by a Russian-operated TikTok account went viral after falsely implying that Russian paratroopers were invading parts of Ukraine while other incursions were underway. The footage was years old, yet it amassed over 22 million views and spread across other social media platforms, according to the Associated Press.

“We see a paratrooper, he’s speaking Russian, and so we don’t take the time to question it. If we see a piece of information that’s new to us, we have this compulsion to share it with others.”

John Silva, Senior Director of News Literacy Project

A February 2023 report by Google’s Threat Analysis Group (TAG) outlined the extensive cyberattack and disinformation campaigns Russia has conducted against Ukraine. The report revealed an extensive, multi-pronged cyberattack structure used by Kremlin-tied hackers for cyber espionage, disruption, and sustaining footholds in prominent Ukrainian military and government targets.

Russian hacking and election interference in the United States

In 2018, a 60 Minutes investigative report on Russian disinformation and election interference targeting the 2016 U.S. Presidential Election outlined how easily Russian hackers can operate and how much damage they can inflict. The report described defending against Russian cyberattacks as “bows and arrows against the lightning” – U.S. state IT and security teams were no match for the relentless, orchestrated nature of the attacks.

Steve Sandvoss, Executive Director of the State of Illinois Board of Elections, showed 60 Minutes how Russian hackers had penetrated, undetected for three weeks, state servers hosting voter information for 7.5 million Illinois citizens. The hackers exfiltrated 90,000 partial voter records and 3,500 complete voter records without resistance. They then bombarded the servers with a denial-of-service attack, in what Sandvoss and the FBI characterized as an attempt to sow discord and distrust in the integrity of United States elections.

A 60 Minutes investigative report from 2018 on Russian disinformation and hacking as information and cyber warfare against the 2016 Presidential Elections in the United States. (source: YouTube)

How are social media and cyber disinformation weaponized?

Digital media provides fertile ground for weaponizing misinformation and disinformation. Viral content is king, algorithms often prioritize engagement over accuracy, and anonymity online emboldens bad actors. Advancements in AI and generative AI have made it easier than ever for disinformation to be created and spread at scale.

  • Fake news websites and social media bots: These platforms churn out fabricated content and amplify it through coordinated campaigns, creating the illusion of legitimacy and widespread support.
  • Deepfakes and manipulated media: Sophisticated editing techniques can be used to create fake videos or audio recordings of politicians saying or doing things they never did, making them extremely difficult to debunk. We’ve already witnessed deepfake damage impacting U.S. voter participation in the 2024 primaries and teens creating deepfake porn of their female classmates.
  • Echo chambers and filter bubbles: Algorithms can trap users in online communities that only expose them to information confirming their biases (known as confirmation bias), making them more susceptible to believing and spreading misinformation.
  • Targeted campaigns: Disinformation campaigns can be tailored to specific demographics or locations, exploiting existing anxieties and divisions to maximize their impact. They can also be used to support gerrymandering.

How to detect misinformation or disinformation online

Misinformation and disinformation can become incredibly convincing when executed by well-financed entities with extensive resources. Left to operate and flourish in private corners of the internet (e.g., Slack channels, Facebook private groups, Discord), a viral story can quickly gain support and spread to major media unless enough fact-checking and source verification occurs.

The fight against weaponized online lies requires a multi-pronged approach:

  • Media literacy education: Equipping individuals with the skills to identify, evaluate, and verify information online is crucial in building a more resilient information ecosystem.
  • Fact-checking and journalistic integrity: Supporting credible news organizations and fact-checking initiatives is essential in debunking false narratives and holding bad actors accountable.
  • Algorithmic transparency and accountability: Holding social media platforms accountable for the content they host and the algorithms that shape user experiences is vital in curbing the spread of disinformation.

You don’t need to be a journalist or author to benefit from fact-checking skills. We recommend this 10-tip whitepaper for fighting fake news and misinformation from LexisNexis. Cybersecurity expert Joe Carrigan, co-host of the Hacking Humans podcast, also provides helpful tips on a Johns Hopkins University blog.

In the digital age, critical thinking and a healthy dose of skepticism are our most powerful weapons against weaponized lies. If a story sounds too extreme or hard to believe, it probably is. If you can’t find reputable, credible sources, it’s almost certainly misinformation or disinformation.
