August 28, 2020
Information Operations and Synthetic Media Versus the 2020 US Elections
At the beginning of the month, a number of Black Hat virtual presentations concentrated on the security of the US presidential elections coming this November.
Apart from the scaling problems of mail-in voting and the logistical challenges posed by the ongoing COVID-19 pandemic, there are real concerns about foreign interference.
The US Department of State is so concerned about such meddling that it has offered a reward of up to $10 million for information identifying anyone who threatens the integrity of the election process.
While that is a hefty reward, money alone will not fix this problem or rule out the risk of outside interference from politically motivated Advanced Persistent Threat (APT) groups. Just recently, Google reported two separate phishing campaigns targeting staff of the Biden and Trump campaigns, attributed to APT groups from China and Iran, respectively.
Phishing is a constant threat and a time-proven way to gain an initial foothold in a candidate’s camp. Still, APT groups have subtler and more powerful means of meddling with the vote without compromising the voting systems or the candidates directly.
In this post, I will focus on two such risk factors: influence campaigns and synthetic media attacks.
“Fake News” Is Not Just a Catchphrase, It’s a Reality
“Fake news” is not only a favorite phrase of Donald Trump; it is also a dangerous weapon that can swing momentum or completely derail a target’s campaign.
And wouldn’t you know it, it doesn’t take much skill or knowledge to generate those weapons automatically. Open-source artificial intelligence is exploding, making life easier for online criminals who create, amplify, and distribute malicious media through social media platforms.
FireEye researchers showcased how easily open-source tools can be used to produce synthetic content ‒ artificial audio and video files, as well as simple images, that cast the target in a defamatory light. Synthetic media is a potent threat not only because it dictates the narrative in the format of “visual evidence,” but also because it spreads faster and can go viral organically.
“Fine tuning for generative impersonation in the text, image, and audio domains can be performed by nonexperts… [and] can be weaponized for offensive social media-driven information operations,” shared Philip Tully and Lee Foster.
Media-driven information operations play out on the biggest platforms we visit: Twitter, Facebook, YouTube, and Instagram. They are undeniably useful to attackers, as discussed in my blog post “Why Youtube Is a Good “Crutch” For Carrying Out Cyberattacks.”
This is not a case of deepfake videos the naked eye could detect, but of more believable content, like StyleGAN2-generated photos (none of the people you see in the linked video exist) or SV2TTS-produced audio files that can be generated from just 30 seconds of voice samples.
Credit to FireEye for the image.
It is not a problem to find photos or soundbites of a public figure who gives speeches on a daily basis. Nor is it a problem to feed lots of slightly cropped and reformatted photos to the algorithm to get the best possible results. Fine-tuning the audio is trickier, as the results can come out slightly robotic, but... there is already a case of a successful attack.
Even though the target grew suspicious about the legitimacy of the boss’s voice, the attackers reproduced the accent and timbre of the supposed boss well enough to get away with a $243,000 haul. It is scary that someone could generate a believable enough voice to request such a large transfer, and we are left wondering whether algorithms tuned by more skilled and politically motivated hackers could cause similar or greater damage.
What makes synthetic media so potent is that it doesn’t have to be perfect to have the desired effect, just good enough to influence people who are not in the habit of deeper inspection, no pun intended.
The success of an information operation largely comes down to how well the content will evade detection, how much emotion it will instigate, and how many real people will “buy into it.”
Fancy Bear Outclasses Deep Panda?
To better understand how fake content is weaponized, we need to break a typical information operation down into a few steps.
First of all, the groups create and compromise thousands of fake accounts. In some cases there are also overnight transformations of accounts that were popular for unrelated reasons: an account that was posting K-Pop GIFs just yesterday could be spreading highly political messages tomorrow, and the same applies to other popular channels.
The generated content and profiles then come together to spread the core message, sometimes successfully, sometimes not. Fake stories are posted to bogus websites and shared across this huge synthetic network, garnering enough attention to go viral and make it into mass-media news.
Platform users do the rest of the job for the attackers, spreading the information organically. It really doesn’t take much to get people talking.
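The amplification dynamic described above can be illustrated with a toy simulation (all numbers are hypothetical and not modeled on any real campaign): a small core of synthetic accounts seeds a message, and every exposed organic user reshares it with some small probability. Even a modest reshare rate, multiplied by audience size, is enough to keep the message growing on its own.

```python
import random

def simulate_amplification(seed_accounts, followers_per_account,
                           reshare_prob, rounds, rng):
    """Toy cascade model: each round, every account that shared the
    message exposes its followers, and each exposed follower reshares
    with a fixed probability. Returns the sharer count per round."""
    sharers = seed_accounts
    history = [sharers]
    for _ in range(rounds):
        exposed = sharers * followers_per_account
        sharers = sum(1 for _ in range(exposed)
                      if rng.random() < reshare_prob)
        history.append(sharers)
    return history

rng = random.Random(42)  # fixed seed for reproducibility
# 50 fake accounts, ~100 followers each, a 3% organic reshare rate
spread = simulate_amplification(50, 100, 0.03, 4, rng)
print(spread)
```

Because each sharer reaches 100 followers and 3% of them reshare, every round multiplies the sharer count by roughly three ‒ which is why the organic “last mile” does most of the attackers’ work for them.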
Analysis of multiple campaigns showed that nation-state actors can make obvious mistakes too. For example, the Chinese state’s attempts to downplay and dismiss the spread of COVID-19 were not very successful. The state prioritized the number of fake social media accounts over their quality and failed to provoke the organic human reaction the campaign was designed for.
The Stanford Internet Observatory also disclosed that 92% of fake accounts tied to Chinese influence campaigns had fewer than ten followers. You can read more about China’s information operations in their new white paper.
There have also been reports of Twitter taking down the Dracula botnet, composed of around 3,000 fake accounts that were innocuous on their own but were used to amplify the political messages of the Chinese state and push predetermined topics into the trends.
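One well-known signal platforms use against this kind of amplification botnet is coordinated inauthentic behavior: many distinct accounts posting the same text within a narrow time window. A minimal sketch of that heuristic (the function name, window size, and threshold are illustrative assumptions, not Twitter’s actual detection logic):

```python
from collections import defaultdict

def flag_coordinated_posts(posts, window_seconds=300, min_accounts=20):
    """posts: iterable of (account_id, timestamp, text) tuples.
    Flags texts posted by many distinct accounts inside the same
    time bucket -- a crude coordination signal."""
    buckets = defaultdict(set)
    for account_id, timestamp, text in posts:
        bucket = (text, int(timestamp) // window_seconds)
        buckets[bucket].add(account_id)
    return {text for (text, _), accounts in buckets.items()
            if len(accounts) >= min_accounts}

# 30 bot accounts pushing one slogan within a five-minute window,
# plus scattered organic posts that should not be flagged
posts = [(f"bot{i}", 1000 + i, "Vote topic X!") for i in range(30)]
posts += [("user1", 1000, "lunch pics"), ("user2", 5000, "lunch pics")]
print(flag_coordinated_posts(posts))  # flags only the bot slogan
```

Real detection pipelines are far more elaborate (near-duplicate text, shared infrastructure, account age), but even this crude bucketing shows why low-quality botnets like Dracula are comparatively easy to take down.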
The Russian state, represented by the Fancy Bear APT, did a much better job in its own campaigns, combining hacking skills with targeted profiling of impassioned groups. Another engagement-maximizing trick is abusing recommendation algorithms so that dubious messages reach larger groups of people at the best possible time. Something for Deep Panda’s leaders to consider, perhaps?
A well-orchestrated disinformation campaign is not rare, and it can passively sustain itself for a long time. As you can imagine, platforms can do little to prevent them, and they can also be distracted by the more pressing issues I wrote about a few weeks ago.
Twitter just showcased how social engineering can enable attackers to hijack the identities of high-profile personalities and take their stage to spread any message they want. Twitter was lucky that the fraudsters wanted money rather than an international conflict; nothing would have stopped them from attempting the latter.
Perhaps there is a silver lining for Twitter in that hack: they will hopefully be better prepared for account takeovers in the future. But platform users overall are unlikely to become more cautious when they see something popular coming from verified accounts. Something as insignificant as a blue tick still acts as a mark of authority, and it can sway mass opinion, divide groups, and damage targets’ reputations.
Social media is a big thing, and when mismanaged or manipulated, it can lead to serious consequences. When it is manipulated by skilled, well-protected hackers, those consequences scale into nation-wide problems. So, to quote a classic: “Believe only half of what you see and nothing that you hear.”