Deep Fakes vs. the Truth: Putting Brands at Risk
Published: June 2, 2021
Michelle Gorton, Gorton IP, Sydney, Australia (Brands and Innovation Committee, Content Subcommittee)
Maggie Yang, Corner Stone & Partners, Beijing, China (Brands and Innovation Committee, Content Subcommittee)
Simone Villaça, Remer Villaça & Nogueira, São Paulo, Brazil (Brands and Innovation Committee, Content Subcommittee)
With deep fakes, seeing is no longer believing! Imagine a website where identities are created for people who do not exist. These fabricated identities can then be programmed to do and say things with the intent to commit a crime or to harm the reputation of a well-known person or business. Such websites exist, and their numbers are growing rapidly.
So too is the number of deep fake videos. Given our reliance on the Internet, and especially during the ongoing COVID-19 pandemic when face-to-face meetings are no longer the norm, seeing an image of a person on a screen giving you information or directing you to do something is common and can be easily believable.
The emergence and proliferation of deep fakes is putting brand owners and the public at risk. Given this trend, the U.S. Federal Bureau of Investigation (FBI) recently warned that deep fakes are the next big global cyber threat. Of even greater concern, deep fakes may in the future be undetectable to the general public.
Artificial intelligence (AI) technology is used to create deep fake videos, often combining the likeness of one person with another in digital or video images—for instance, putting a face on another body or even impersonating a famous person using voice clones to mimic that person’s voice.
A generative adversarial network (GAN) is another way a deep fake may be created, using two sets of algorithms that train against each other. The first, known as the generator, turns random noise into an image. The second, known as the discriminator, compares the synthetic images with real images and tries to tell them apart. Essentially, one algorithm creates the fakes while the other's feedback teaches the generator to improve the quality of the fakes. This process ultimately allows the generator to create a realistic image.
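The adversarial loop described above can be sketched on a toy problem. This is a minimal 1-D illustration, not any production deep fake system: the "real data" is a Gaussian distribution, the generator and discriminator are single linear units, and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a Gaussian the generator must learn to imitate.
REAL_MEAN, REAL_STD = 4.0, 1.25

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b turns random noise z into a synthetic sample.
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.1, 0.0

lr, batch = 0.01, 64
for step in range(2000):
    z = rng.standard_normal(batch)
    x_real = rng.normal(REAL_MEAN, REAL_STD, batch)
    x_fake = a * z + b

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: push d(fake) toward 1, i.e., fool the discriminator.
    d_fake = sigmoid(w * (a * z + b) + c)
    g_out = -(1 - d_fake) * w       # gradient of -log d(fake) w.r.t. g(z)
    a -= lr * np.mean(g_out * z)
    b -= lr * np.mean(g_out)

samples = a * rng.standard_normal(1000) + b
print(f"generated mean ~ {samples.mean():.2f} (target {REAL_MEAN})")
```

The same push-and-pull structure, scaled up to deep convolutional networks and image data, is what makes GAN-produced fakes increasingly realistic.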
It has become easier to make increasingly believable deep fakes, while at the same time, deep fakes are getting more difficult to detect.
Impact on Brand Owners
The use of a trademark in a deep fake video puts brand owners in peril. Combining the trademark with potentially negative sentiment communicated in the video stands to damage the brand’s reputation in the marketplace.
For example, a competitor could disparage the functionality of another brand owner’s product or show a celebrity criticizing a brand. Such use may give rise to a trademark infringement action by the affected brand owner, provided the owner can in fact locate the source of the deep fake. The takeaway for brand owners is to remain vigilant in the marketplace and to act quickly.
Behind the Deep Fakes
The list of those using deep fakes continues to increase and will expand as the technology grows. So far, the leading users are amateurs, researchers, porn producers, visual effect studios, and governments. The latter, perhaps a surprising member of the list, use deep fakes to combat cybercrime or extremist groups.
One of the more famous examples of a deep fake video is of former U.S. President Barack Obama speaking about the dangers of false information and fake news. When it appeared in 2017, the video was so life-like that it caused great debate about the dangers of AI technology. Another example is a deep fake video of Facebook Founder and CEO Mark Zuckerberg posted on Instagram, claiming that Facebook owns its users.
Deep fake detection usually relies on the following benchmarks, which, interestingly, can be assessed by either a trained human eye or sophisticated detection software.
- Unnatural eye movement and facial expressions (for example, humans typically blink their eyes every two to eight seconds);
- Awkward facial feature positioning or body language; and
- Hair or teeth that do not look real.
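The blink-rate benchmark in the first bullet can be turned into a simple rule. The sketch below assumes blink timestamps have already been extracted from the footage by some upstream detector (that step, and the function name, are hypothetical, not part of any named product):

```python
def suspicious_blink_pattern(blink_times, duration_s, low=2.0, high=8.0):
    """Flag footage whose blinking falls outside the typical 2-8 second
    interval cited for natural human blinking.

    blink_times: sorted timestamps (seconds) of detected blinks.
    duration_s:  total length of the analyzed footage, in seconds.
    Returns True when the pattern looks unnatural.
    """
    # No blinks at all in a long clip is itself a red flag.
    if not blink_times:
        return duration_s > high

    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    if not intervals:        # a single blink: nothing to compare
        return False
    avg = sum(intervals) / len(intervals)
    return not (low <= avg <= high)

# A natural pattern (a blink every ~4 s) vs. an unnaturally static face.
print(suspicious_blink_pattern([1.0, 5.0, 9.0, 13.0], 15.0))  # False
print(suspicious_blink_pattern([1.0, 14.0], 15.0))            # True
```

Real detectors combine many such cues; no single heuristic is decisive on its own.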
It is useful to master some skills on how to detect deep fakes, but as deep fake technologies develop, detection may finally evolve into an endless cat-and-mouse game. To protect our society and ourselves from deep fakes, it is necessary for various parties to adopt comprehensive measures. The following are four perspectives that can be considered.
Legislation and Government
Legislation is the most visible and commonly accepted mechanism to impose social control and manage public behavior. As deep fakes may infringe upon people’s privacy, copyright, and other rights, they may be regulated by a country’s existing laws, such as civil code, tort law, copyright law, or criminal law. Some countries passed targeted legislation to make some types of deep fakes illegal.
For example, in October 2019, the State of California in the United States passed Assembly Bill No. 602, making it illegal to use human image synthesis technology to produce pornography without the consent of the people depicted. It also adopted Assembly Bill No. 730, making it illegal to circulate deep fake videos, images, or audio of candidates within 60 days of an election.
In China, on November 18, 2020, the Cyberspace Administration, Ministry of Culture and Tourism, and the National Radio and Television Administration jointly issued the Administrative Provisions on Online Audiovisual Information Services. Effective January 1, 2021, the provisions require prominent labeling of any online publication of media that has been altered by using AI or virtual reality (VR) techniques.
Technology
Even as believable deep fakes become easier to make and harder to detect, technology can still be effective in stemming their generation and spread.
Some technology companies, such as tech giant Microsoft and startups Truepic and Deeptrace, have begun developing technologies to help combat deep fakes. The following are some current anti-fake technologies.
- Using AI and blockchain to register a tamper-proof digital watermark or fingerprint for authentic videos. In September 2020, Microsoft announced its new anti‒deep fake technology, called Microsoft Video Authenticator, which uses AI to determine the likelihood that a photo or video has been manipulated.
- Adding signatures inside file metadata. Microsoft is in the process of releasing a series of digital signatures that can be embedded into a video’s encoding to verify its authenticity.
- Inserting specially designed digital “artifacts” into videos to disrupt the pixel patterns that deep fake algorithms depend on. This slows the algorithms down and degrades their output, making it less likely that convincing deep fakes will be generated.
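The signature idea in the second bullet can be sketched with a keyed hash. This is a deliberately simplified stand-in: Microsoft's actual scheme embeds certificate-backed signatures in the video's encoding, whereas this example assumes a shared secret key and signs the raw bytes with an HMAC.

```python
import hmac
import hashlib

def sign_video(video_bytes: bytes, key: bytes) -> str:
    """Produce a tamper-evident signature to store alongside (or inside)
    the file's metadata."""
    return hmac.new(key, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, key: bytes, signature: str) -> bool:
    """Recompute the signature; any edit to the bytes breaks the match."""
    return hmac.compare_digest(sign_video(video_bytes, key), signature)

key = b"publisher-secret"               # hypothetical shared key
original = b"\x00\x01video-frames\x02"  # stand-in for encoded video data
sig = sign_video(original, key)

print(verify_video(original, key, sig))                # True
print(verify_video(original + b"edited", key, sig))    # False
```

The design point is the same as in the bullet: a viewer (or platform) who trusts the publisher's key can confirm that footage has not been altered since it was signed.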
Censorship of Major Tech Platforms
Creators of deep fake videos are targeting major tech platforms, like Facebook, Google, and Twitter in the United States, or Weibo and Douyin (the Chinese domestic version of TikTok) in China, as ideal places to publish their videos and draw wide public attention. Accordingly, governments may impose the responsibility for censorship on platform operators. However, there is concern that this approach places an overwhelming burden on the platforms while still not going far enough to limit the spread of deep fakes on their sites.
Information and Education
Ultimately, it is people who generate, spread, and detect deep fakes. Apart from legislation and technology, educating people about deep fakes can be an effective countermeasure. In this regard, Microsoft said in its anti‒deep fake tech launch that “education is the best strategy.”
However, detection methods will sooner or later become outdated. Some non-exhaustive suggestions to combat deep fakes include raising public awareness of deep fake issues; educating Internet users about how deep fakes work, the harm they cause, and how to spot them; and trusting quality news sources while remaining vigilant about their authenticity.
Though these suggestions may still be insufficient to prevent deception, they can, to a large extent, help brand owners and the public defend against the challenges and dangers of deep fakes.
Still, it cannot be overstated that the effects of deep fake videos on society, and the risk to brands, are profound, given the scale of harm they could inflict on entire populations. As detection becomes harder, the consequences of deep fake videos could be catastrophic. We must continue advancing detection technology to keep pace before it is too late.
Although every effort has been made to verify the accuracy of this article, readers are urged to check independently on matters of specific concern or interest.
© 2021 International Trademark Association