
How your business could be affected, and what you should do

Written by Rema Deo | June 15, 2021

In a haunting song aptly titled “Magic,” Bruce Springsteen describes various classic illusions, from rabbits in hats and palmed coins to underwater escapes from chained boxes. The lyrics caution us to “Trust none of what you hear, and less of what you see.”

Released in 2007, Magic eerily anticipated the dark magic of the deepfake. A decade later, deepfakes have attempted to ruin reputations through malicious impersonations, spread disinformation and fake news, and engender confusion, distrust, and skepticism. In some cases, they have succeeded. On the plus side, deepfakes will take the film entertainment industry to the next level, in much the same way that computer-generated imagery (CGI) has done.

Only by learning more about deepfakes can we recognize a deepfake exploit when we see one and accept that we cannot always trust what we see. Experts agree that the more of us who understand the deepfake, the better for all of us.

What is a Deepfake?

In its simplest definition, a deepfake occurs when an image or voice of an individual in a picture, video, or film is replaced with the image or voice of a different individual – hence the term “fake.”

Deepfakes employ very modern technologies, including machine learning and artificial intelligence, “to manipulate or create visual and audio content with a high potential to deceive,” according to Wikipedia.

Another resource, Tessian, defines a deepfake as “a fraudulent piece of content (video or audio recording) that has been manipulated or created using artificial intelligence.”

In both definitions, the words “manipulate,” “deceive,” and “fraudulent” speak to the typically malicious intent of the deepfake.

How Did the Deepfake Originate?

1997 – Video Rewrite Program

One of the earliest initiatives utilizing technology to alter reality was the Video Rewrite program, introduced in an Interval Research Corporation paper in 1997. 

Taking old film footage of an actor speaking a part, Video Rewrite automatically inserted new dialog, using machine learning techniques to make minor manipulations of the muscles of the mouth and jaw so that the actor appeared to be speaking the new words. Video Rewrite was the first software program to automate facial reanimation for this purpose.

According to the research paper, “modifying and reassembling such footage in a smart way and synchronizing it to the new soundtrack leads to final footage of realistic quality, without labor-intensive interaction.” The movie industry clearly had much to gain from this and similar software applications in bringing films to market more efficiently.

2017 – Synthesizing Obama Program

Fast forward to 2017, when a software program dubbed Synthesizing Obama manipulated video footage of the former president to put words in his mouth from a separate soundtrack. Again, the mouth and jaw areas were modified so that he appeared to be speaking the new dialog. Posted on YouTube, this example helped alert a wider audience to the fraudulent potential of the deepfake, especially in politics and government.

In the two decades since Video Rewrite, advances in machine learning and artificial intelligence have produced deepfakes that are increasingly realistic, and thus more believable and potentially more damaging.

What Damage Can a Deepfake Do?

Celebrity Deepfakes

One of the most popular deepfakes, perhaps because it is one of the easiest to create, is the attachment of celebrity faces to the bodies of other individuals, usually porn stars. On the Internet, deepfake pornographic videos increased 84% in the first half of 2019, according to Amsterdam-based cybersecurity company Deeptrace.

This particular type of deepfake is defamatory and salacious, and in Virginia and California, it is now a crime. In July 2019, Virginia enacted criminal penalties against distributors of non-consensual deepfake pornography. In October 2019, California enacted a new law enabling victims of non-consensual sexually explicit deepfakes to bring legal action against the content creators.

Political Deepfakes

Politicians and presidents are other popular targets of the deepfake. Joe Biden has been spoofed several times already in 2021, although those deepfake efforts have been widely criticized for their amateurish quality. Donald Trump, Vladimir Putin, Angela Merkel, and other political figures have been victimized as well. Deepfake exploits targeting politicians have the potential to inflict serious damage on the individuals, their constituents, and the electoral process by creating phony narratives to influence public opinion, including fake campaign messages delivered by broadcast phone calls.

In September 2019, Texas enacted a law banning the creation and distribution of deepfake videos intended to harm political candidates or influence elections. In October 2019, California enacted similar legislation.

Corporate Deepfakes

Business executives have been victimized by deepfakes as well, primarily for criminal financial gain. In August 2019, the Wall Street Journal published an article describing how cybercriminals had used artificial intelligence (AI) software to impersonate an executive’s voice. Several months earlier, in March, a caller posing as the CEO of a German company had telephoned the CEO of its UK subsidiary to direct the immediate and urgent transfer of €220,000 ($243,000) to a Hungarian supplier.

The subsidiary CEO believed he was speaking to the parent CEO because he recognized the slight German accent and the individual’s unique voice modulation. As it turned out, it was the first known case of an audio-only deepfake, or deepfake voice phishing. And it worked, at least the first time. Funds wired to Hungary were quickly moved to other locations by the cybercriminals. The subsidiary CEO became suspicious when the parent CEO made an additional transfer request, and he refused to make the second transfer. This ground-breaking deepfake was a social engineering exploit, a category that promises to be the next frontier for cybercriminals seeking quick financial gains.

Healthcare Deepfakes

The healthcare industry has long been a favorite target of cybercriminals, and the implications of deepfakes in healthcare are disturbing. In one notable example, a hacker was able to remove evidence of lung cancer from a patient’s digital CT scan; the altered, or faked, scan deceived three radiologists and an advanced AI-driven lung cancer detection program. Fortunately, this was an experiment conducted at a hospital by white-hat hackers, but the message is no less scary.

In their presentation to the 2019 USENIX Security Symposium, the authors of this experiment painted a grim picture. “An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market,” they said. “An attacker can use deep-learning to add or remove evidence of medical conditions from volumetric (3D) medical scans. An attacker may perform this action in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder.”

While Virginia, Texas, and California have enacted legislation specific to deepfakes, other states have incorporated deepfake exploits into their overall anti-cybercrime laws.

Federal Law Related to Deepfake Exploits

2019 was a watershed year for the deepfake, with countless exploits proving that it had become an alarming emerging threat on many fronts. The obvious potential for foreign deepfakes to disrupt U.S. national elections and national defense is even more disturbing than foreign ransomware hacks of U.S. infrastructure, most recently affecting Colonial Pipeline and JBS.

In December 2019, President Trump signed into law the National Defense Authorization Act for FY 2020 (NDAA), a $738 billion defense bill that includes significant deepfake components. For example, NDAA Section 5709 requires reporting of intelligence on foreign weaponization of deepfakes, including disinformation campaigns targeting U.S. elections and other political processes. NDAA Section 5724 creates a competition encouraging the development of technologies and tools that will enable effective deepfake detection with the goal of deterring deepfake exploits.

Ultimately, the detection of deepfakes will help to reduce the number of deepfake exploits and their damaging consequences. In the meantime, there are actions that businesses, industries, and government entities can take to begin to shore up their defenses against deepfake exploits.

Security Steps To Take Immediately

Deepfake exploits will continue to evolve for malicious purposes, including fraud, disruption, defamation, and theft, in much the same way that cybercrimes have continued to evolve. Employee awareness, authentication protocols, response plans, insurance, and detection are key components of a current security program for any organization.

Employee training

As the weakest link in the security chain, employees need to be educated about what deepfakes are, how they can be used to fool people, and how they can hurt the business. Examples should be provided, including video and audio deepfakes, so that employees can learn what clues to look for. With security awareness training, employees will become more suspicious and thus better able to spot a deepfake. Training must include the actions employees should take if they are suspicious or believe they have been victimized. The sooner an exploit is reported, the faster the organization can manage it.

Authentication protocols

Many deepfakes in business are expected to be social engineering schemes that attempt to fool employees into sharing information they shouldn’t. Develop a protocol for employees to verify a suspicious call or email through a second medium. If an employee receives a suspicious phone call claiming to be from the company human resources officer, for example, the employee should be able to email the officer, or call the officer back on a known number, to validate the original call. It’s the same principle as requiring multifactor authentication for access to systems, networks, or applications.
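As an illustration of the principle, here is a minimal Python sketch of such an out-of-band check. Every name in it (the Request structure, the VERIFIED_CONTACTS directory, the confirm callback) is hypothetical; in practice the second channel is a human calling or emailing contact details already on file, never details supplied by the suspicious message itself.

```python
# Hypothetical sketch of an out-of-band verification rule. The request
# structure, contact directory, and confirm() callback are illustrative
# placeholders, not a real library.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Request:
    claimed_identity: str   # e.g., "HR officer" or "parent-company CEO"
    arrival_channel: str    # channel the request came in on, e.g., "phone"
    summary: str            # what is being asked for

# Contact details verified in advance and kept on file, never taken
# from the suspicious message itself.
VERIFIED_CONTACTS = {
    "parent-company CEO": {"phone": "+49-000-0000", "email": "ceo@parent.example"},
}

def approve(request: Request,
            confirm: Callable[[str, str, Request], bool]) -> bool:
    """Approve only after the claimed sender confirms the request on a
    different channel than the one it arrived on."""
    contact = VERIFIED_CONTACTS.get(request.claimed_identity)
    if contact is None:
        return False  # unknown identity: escalate, don't comply
    # Try every known channel except the one the request used.
    return any(
        confirm(channel, address, request)
        for channel, address in contact.items()
        if channel != request.arrival_channel
    )

# Example: a "CEO" phone call demanding an urgent wire transfer is held
# until someone emails or calls back the CEO at the contact on file.
wire = Request("parent-company CEO", "phone", "urgent EUR 220,000 transfer")
print(approve(wire, confirm=lambda ch, addr, req: False))  # False: unconfirmed
```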

Cyber insurance

Today, many insurance companies offer cyber insurance against the loss of data, brand reputation, and other damages resulting from a cyberattack or security breach. Existing policies should be updated to include losses from deepfake exploits. Organizations without cyber insurance should seriously consider obtaining a policy that includes emerging threats, such as deepfakes.

Incident response plan

Every organization requires a documented plan for responding to and recovering from cybercrimes, such as ransomware attacks and database hacks. An incident response plan now needs to include potential deepfake exploits. The plan should include internal and external message management and damage control, as well as actions assigned to individuals and teams, and escalation hierarchies. Business continuity and recovery are the primary goals of any incident response plan.

Detection software

Although not yet part of mainstream solutions, AI software is being used by Microsoft and Facebook, to name just two examples, to detect deepfake videos on their platforms so they can be removed and studied. Machine learning techniques have evolved to identify phishing emails by detecting suspicious anomalies, and machine learning is also being applied to help detect audio phishing exploits. In response to this emerging threat, it is only a matter of time before standard information technology and security tools incorporate deepfake detection and prevention functionality.
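As a concrete illustration of one common shape such detection software can take, here is a minimal Python sketch, assuming PyTorch, torchvision, and OpenCV: a binary image classifier scores sampled video frames as real or fake, and the scores are averaged. The weights file named in the comment is hypothetical, and this is not a representation of Microsoft’s or Facebook’s actual detectors.

```python
# Illustrative sketch only: a frame-level real-vs-fake video classifier.
# The fine-tuned weights file is hypothetical; an untrained two-class
# head produces meaningless scores until trained on labeled data.

import cv2                                  # pip install opencv-python
import torch
from torchvision import models, transforms

# Standard ImageNet preprocessing for the ResNet backbone.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ResNet-18 backbone with a two-class head: index 0 = real, 1 = fake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
# model.load_state_dict(torch.load("deepfake_classifier.pt"))  # hypothetical weights
model.eval()

def fake_score(video_path: str, every_n: int = 30) -> float:
    """Average the model's 'fake' probability over sampled frames."""
    cap = cv2.VideoCapture(video_path)
    scores, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:                # sample roughly one frame per second
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(x), dim=1)
            scores.append(probs[0, 1].item())
        i += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

# A score near 1.0 would flag the clip for human review, e.g.:
# print(fake_score("suspicious_clip.mp4"))
```

Production detectors also locate faces before classification and examine temporal artifacts across frames, but the overall pipeline, scoring frames and aggregating the results, is similar.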

Summary

Deepfakes are an emerging and very real threat to government, industry, and business. By manipulating audio and visual content to make it appear that an individual is saying or doing something that he or she did not say or do, deepfake creators can ruin reputations through malicious impersonations, spread disinformation and fake news, and engender confusion and distrust.

Several states have already enacted laws criminalizing deepfakes and their creators. At the federal level, the National Defense Authorization Act (NDAA) for FY 2020 requires the reporting of intelligence concerning foreign deepfakes that potentially threaten U.S. political processes, elections, or national defense. The NDAA also encourages the research and development of deepfake detection tools. Deepfakes utilize artificial intelligence and machine learning, and these same technologies are likely to become important weapons in the detection and exposure of malicious deepfakes. In the meantime, there are several immediate actions organizations can take to strengthen their own defenses against the dark magic of the deepfake.