
Combatting the Rise of Disinformation from New AI Technologies

What exactly is an AI Chatbot? It is a computer program that uses artificial intelligence to simulate conversation with human users, typically through a messaging interface. AI chatbots are powered by natural language processing (NLP) algorithms, which allow them to understand and interpret human language inputs and respond appropriately[1].

That definition was provided by a generative AI chatbot, ChatGPT, describing itself[2].

While far from perfect, these tools are poised to create massive changes in almost every industry by reducing the time – and sometimes the skills – required to complete tasks. Though many herald them as a considerable boon for productivity[3], these tools and their underlying technologies can also present serious risks to businesses.

Disinformation at the Speed of Generative AI

Chatbots and their underlying generative AI technology pose a risk to businesses and other entities by enabling the misrepresentation of businesses, individuals, technologies or products, whether through passive misinformation or more active disinformation.

  • Passive Misinformation – The generative AI technology that powers chatbots categorizes a multitude of sources at scale, selects the most “valuable” information and then presents an “authoritative” summary or recommendation in response to the question a user submits. If the AI selects incorrect information in response to a prompt, a user may become misinformed as a result, believing he or she has the definitive answer when the truth is anything but definitive. Just as a high-ranking site with incorrect information can lead a user’s search journey away from a business’s owned content, an AI chatbot with incorrect information can misinform the viewpoint of a valuable stakeholder.
  • Active Disinformation – The ability of generative AI technologies to automate tasks such as developing social media or advertising copy has created efficiencies for smaller and larger businesses alike. In the hands of bad actors, however, that same automation means “disinformation at scale” is no longer in the realm of the theoretical: what used to require a “troll farm” or coordinated negative campaign can now be done with smart prompts to an AI chatbot churning out copy and code.

Mitigation Strategies When Faced with Generative AI Misinformation

While each of the threats noted above requires a nuanced mitigation strategy, a high-level approach to protecting against such threats includes:

  • Digital Conversation Analysis and Reporting – Regular conversation analysis and reporting allows for the early detection of new narratives, or of jumps in the volume of mentions about your company, that can indicate a potential attack. Tools are also likely to be introduced to meet this demand, much like those already available to academia for spotting AI-generated essays.
  • Intelligence and Rapid Response – Conversation analyses can help set benchmarks to augment a rapid response program focused on countering disinformation and misinformation. Identifying when conversation volumes around key topics change rapidly can help communications activate rapid response protocols designed to counter that misinformation.
  • Scenario and Narrative Planning – Anticipating likely narratives that might proliferate, informed by an analysis of conversations about the company on both the clear and dark web, enables communicators to craft a strategic response in a timely manner.
  • Digital Channel Development – Robust owned channels, such as social profiles and websites, enable rapid response by countering misinformation directly. An engaged audience that knows where to find official information allows companies to correct the record quickly and visibly while conducting direct outreach to high-value stakeholders.
  • Third-Party Company Advocacy – Cultivating third-party supporters both broadcasts a response more widely and lends credibility when refuting and mitigating prevailing negative narratives.
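The volume-monitoring idea behind the first two strategies above can be sketched in code. The snippet below is a minimal, illustrative example – not a product recommendation – of flagging days where mention counts about a company jump sharply against a trailing baseline, using a simple z-score check. The function name, window size and threshold are illustrative assumptions; a real monitoring program would account for seasonality, topic clustering and sentiment.

```python
from statistics import mean, stdev

def detect_spikes(daily_mentions, window=7, threshold=3.0):
    """Flag days whose mention count deviates sharply from the
    trailing baseline, using a simple z-score over a rolling window."""
    spikes = []
    for i in range(window, len(daily_mentions)):
        baseline = daily_mentions[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero on a flat baseline
        z = (daily_mentions[i] - mu) / sigma
        if z >= threshold:
            spikes.append((i, daily_mentions[i], round(z, 1)))
    return spikes

# Two weeks of daily mention counts; day 12 shows a sudden surge
counts = [40, 42, 38, 41, 39, 43, 40, 41, 44, 39, 42, 40, 180, 45]
print(detect_spikes(counts))  # flags day 12 as an anomaly
```

A flagged day would then trigger the rapid-response protocols described above: analysts review the underlying conversation to determine whether the surge reflects organic interest or a coordinated narrative.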

The Future of Generative AI and Disinformation

The companies and developers behind generative AI and chatbots like ChatGPT are aware of these potential use cases, and they are no doubt working hard to keep individuals and companies safe from disinformation and other forms of AI misuse.

However, threat actors are likely to continue spreading disinformation as a tactic to damage companies’ reputations with stakeholders such as investors and media. To that end, watch for more reporting from FTI that dives into specific threats companies should anticipate and plan to mitigate.

[1] https://chat.openai.com/chat

[2] https://chat.openai.com/chat

[3] Shakked Noy and Whitney Zhang, “Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence,” MIT Working Paper (not peer reviewed) (March 2, 2023), https://economics.mit.edu/sites/default/files/inline-files/Noy_Zhang_1.pdf.

The views expressed in this article are those of the author(s) and not necessarily the views of FTI Consulting, its management, its subsidiaries, its affiliates, or its other professionals.

©2023 FTI Consulting, Inc. All rights reserved. www.fticonsulting.com
