Don’t believe your eyes – Deepfake’s impact on the pay-TV industry

Using machine learning to undertake advanced audio and video manipulation is resulting in increasingly realistic deepfake content that the average consumer, and sometimes even an expert, believes to be genuine.

While deepfake started out as simply a geek’s toy – think of it as Photoshop for video – it has quickly evolved, and we can already predict how it might influence many aspects of our lives. Governments, lobbyists, rights owners, content providers, news providers and other D2C providers are starting to see the threats and opportunities presented by deepfake.

The shadow economy, also known as the criminal economy, is watching this space closely. Known to be early adopters of technology, cybercriminals are certain to ramp up their use of deepfake to make money, most likely through pornography and fake identities. For example, last year criminals impersonated a chief executive’s voice and demanded a fraudulent transfer of €220,000.

Here, we examine the ways video pirates are expected to use deepfake and how the video industry should respond.

Deepfake falls into two clusters of activity: in one, the end user is unaware of the deception, while in the other they actively seek it out and enjoy it.

The formation of false consciousness

The criminal minds behind video piracy obey no rules and their only objective is making money. Deepfake presents an opportunity for them to start “audience selling” to any organisation that wants to use manipulated video to promote its interests.

For example, deepfake is a powerful propaganda tool for any government or group with an interest in promoting fake news to specific national or international audiences. They can use video pirate services as a platform for distributing manipulated news: take footage from a legitimate news channel (BBC, Sky News, CNN etc.), manipulate it to serve their propaganda and disinformation purposes, and use the pirated services to reach viewers.

Government involvement in video piracy is not new, as the recent dispute between beIN and beoutQ shows, and we expect governments with no scruples to see deepfake as an opportunity to alter content to suit their political agenda.

But it is not just about governments. Other malicious actors may try to forge a new digital reality – especially when the target audience is the younger viewers who make up the bulk of pirate service subscribers. Examples range from terror organizations to racist groups such as the White Supremacist Telegram Channel.

And it is not just about the news. Soon it will be possible to manipulate movies or TV series to deliver a message that wasn’t part of the original script. For example, organizations that want to encourage xenophobia will find ways to insert such messages into movies.

Manipulating copyrighted content

The ability to decide who will deliver the news to you, who will play James Bond or the evil villain, and so on, might be seen as an attractive feature by many people. But while legitimate content providers will not do this for obvious reasons, the pirates that already promote their services as “a completely new and cool TV experience” will be there to feed that appetite. The technology is expected to become widely available and low cost in the next year or so, so any pirate will be able to do it. While some deepfake content will be of mediocre quality, some of it will prove very attractive to subscribers. The danger here is that pay-TV providers could see considerable churn as their VOD customers leave to seek out these new TV experiences.

Some groups will want to make deeper modifications to the content than face swaps or speech edits. Here we are talking about people who enjoy illegal material such as snuff movies and child pornography. They would be natural customers for dark web video pirates, who will provide them with what they want by turning wholesome PG movies into something horrible.

The end result will be that a growing number of people, especially but not limited to younger audiences, will become cord-cutters and turn instead to pirate networks – not because of money or access, but because the content on offer is perceived to be cooler and more fun. While there is a direct threat to the revenues of content and service providers, there is a far broader threat to society as a whole.

Of course, these video and audio manipulation tools will also benefit studios and other legitimate content providers – for example, for editing misspoken lines without having to re-record footage, or for creating seamless foreign-language dubs. So banning the technology is simply not an option.

Addressing the deepfake danger head on

So what can the industry do to prepare for the deepfake onslaught? Regulation and technology tools both have an important role to play in preventing, identifying and combating illegal video manipulation.

Two types of regulation are required. The first is to ensure that any manipulated content is clearly presented as such to the viewer. This could take the form of either an opening/ending statement in the video (“This video has been edited”) or an overt message on the video frames that have been manipulated. The second is to ensure that any video that contains unacknowledged deepfake content is treated in the same way as any other illegal video – meaning that ISPs shouldn’t carry it, and social networks are obligated to remove or ban it.
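To make the first idea concrete, a disclosure label could be burned directly into the manipulated frames so that it survives re-encoding and screen capture. The sketch below shows one minimal, hypothetical way to do this in Python with OpenCV; the file names and label text are illustrative assumptions, not part of any existing standard.

```python
# Minimal sketch (not a production tool): burn an overt disclosure label
# onto every frame of a manipulated clip, as the proposed regulation suggests.
# File names and the label text are hypothetical.
import cv2

SRC = "manipulated_clip.mp4"                 # hypothetical input
DST = "manipulated_clip_labelled.mp4"        # hypothetical output
LABEL = "This video has been edited"

cap = cv2.VideoCapture(SRC)
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter(DST, fourcc, fps, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Draw the disclosure text in the top-left corner of the frame.
    cv2.putText(frame, LABEL, (20, 40), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (255, 255, 255), 2, cv2.LINE_AA)
    out.write(frame)

cap.release()
out.release()
```

In practice a regulator would more likely mandate what the disclosure must say and where it must appear, and leave the rendering to the editing tools themselves, but a burned-in overlay of this kind is the simplest form that cannot be stripped by a downstream distributor.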

In parallel, the industry needs to start defining the appropriate technology tools for every part of the video distribution chain. As this is a fairly complex framework, a sensible approach would be to start by creating voluntary standards bodies that can define the tools and APIs. The tech companies can then develop tools that meet these de facto standards to ensure interoperability.

For example, one tool might focus on authenticity, estimating the probability that a video has been manipulated; another might verify that any manipulation was undertaken by the legal owner; and yet another might compare two or more similar videos to identify the original version.
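As a toy illustration of the third kind of tool, and under many simplifying assumptions, the sketch below compares two similar videos frame by frame using a simple perceptual “difference hash”: the lower the average bit difference, the closer a suspect copy is to the reference. Real forensic tools would be far more sophisticated, and the file names here are hypothetical.

```python
# Toy sketch: compare sampled frames of two videos with a perceptual
# "difference hash" and report the average Hamming distance.
# A low score means the suspect copy closely matches the reference;
# per-frame spikes would point at altered segments.
import cv2
import numpy as np

def dhash(frame, hash_size=8):
    """Difference hash: compare adjacent pixels of a shrunken greyscale
    frame and return the result as a flat boolean array."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(grey, (hash_size + 1, hash_size))
    diff = small[:, 1:] > small[:, :-1]
    return diff.flatten()

def sample_hashes(path, step=30):
    """Hash every `step`-th frame of the video at `path`."""
    cap = cv2.VideoCapture(path)
    hashes, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            hashes.append(dhash(frame))
        index += 1
    cap.release()
    return hashes

def mean_hamming_distance(path_a, path_b):
    """Average bit difference between corresponding sampled frames."""
    pairs = list(zip(sample_hashes(path_a), sample_hashes(path_b)))
    return float(np.mean([np.count_nonzero(a != b) for a, b in pairs]))

if __name__ == "__main__":
    # Hypothetical file names: an original broadcast and a suspected edit.
    print(mean_hamming_distance("original_broadcast.mp4", "suspect_copy.mp4"))
```

A production-grade version would need robust temporal alignment, resilience to cropping and re-encoding, and ideally cryptographic provenance signatures from the rights owner, which is exactly why common standards and APIs matter.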

Summary

As if the current threat from piracy were not enough to deal with, a new threat is emerging, fueled by AI-supported video manipulation technology. Now is the time to start developing a comprehensive anti-deepfake strategy: amending current regulations, building stakeholder coalitions, and creating innovative technologies and concepts that will help the video industry keep control and keep viewers safe. The industry and regulators will need time to prepare for these changes, but the sooner this starts, the sooner we can all take concerted action to outsmart the deepfake criminals.

Yossi Tsuria is security CTO, Synamedia
