October 17, 2022
The Threat of AI-Powered Disinformation
Not too long ago I shared my thoughts on the use of artificial intelligence to create images.
More specifically, I pondered OpenAI’s decision to allow users to upload and edit faces, and reviewed all the ways it could backfire on the company. Today, though, we will go in a slightly different direction and talk about how AI can be used to create disinformation.
In a recent Verdict article, I stated that ‘All AI art generators have been advancing quickly, outpacing the ability of the AI business and tech industry to establish the rules, as well as avert harmful results.’
As of now, moderation in most AI image generators relies on a word filter based on classifications and context, along with instructions telling users not to upload human faces without the subjects’ consent. While these safeguards are designed to filter out explicit sexual and violent content, such censorship tools may struggle to identify misinformation fed to their algorithms.
With Microsoft bringing AI art generation to Office and Bing, the risk only grows: huge platforms used extensively by millions of people will develop more ‘cracks’ in the filters, both inadvertently and intentionally. And I can’t state it enough: covert action taken with the intention of disseminating false or misleading information has historically been a uniquely human trait.
Malicious actors have taken advantage of machine learning technologies to accurately target audiences, sway public opinion around the world, and create societal unrest. As an example of the potent threat of AI, recall the deepfake in which President Zelenskyy was depicted telling his soldiers to “surrender” and “give up their arms”.
No one was fooled by such an obvious deepfake, but AI capabilities continue to improve, and they will become increasingly effective tools for manipulating and fooling humans. The more skilled threat actors already have access to an expanding range of persuasive, customized, and hard-to-detect messaging capabilities.
Sure, machine learning techniques can be used to fight disinformation too, but they probably won’t be enough to offset the growing ranks of digital mercenaries. And while we don’t need to hide from an imminent cyborg apocalypse right now, we certainly do need to prepare and strategize for a new era of AI-enabled disinformation.
Because unless we develop counter-disinformation strategies, AI-enhanced disinformation operations will further exacerbate political polarization, erode citizen trust in institutions, and blur the lines between truth and lies.

If you would like to know more about how technology can alter the political climate, be sure to check out my article “Information Operations and Synthetic Media Versus the 2020 US Elections”.