Google bans deepfake-generating AI from Colab

Deepfake-related work is included on the list of prohibited projects in the amended terms of use, which were noticed over the weekend by Unite.ai and BleepingComputer.


Google has banned the training of AI systems that generate deepfakes on its Google Colaboratory platform. Deepfake-related work is included on the list of prohibited projects in the amended terms of use, which were noticed over the weekend by Unite.ai and BleepingComputer.

Colaboratory, or Colab for short, grew out of an internal Google Research initiative and launched publicly in late 2017. It is designed to let anyone write and run arbitrary Python code in a web browser, particularly code for machine learning, education and data analysis. Google makes hardware available to both free and paid Colab users, including GPUs and Google's custom-designed, AI-accelerating tensor processing units (TPUs).
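For a sense of what that looks like in practice, the sketch below is the kind of cell a user might run to confirm whether the notebook's runtime has a GPU attached. It assumes PyTorch is available (Colab preinstalls it), though nothing in the check is Colab-specific:

    # Minimal sketch: check whether this notebook runtime has a GPU attached.
    # Assumes PyTorch is installed; the check works in any Python environment.
    import torch

    if torch.cuda.is_available():
        print("GPU detected:", torch.cuda.get_device_name(0))
    else:
        print("No GPU attached to this runtime.")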

Within the AI research community, Colab has become the de facto venue for demos in recent years. Researchers who write code frequently share links to Colab pages on or alongside the GitHub repositories hosting that code. But Google hasn't always been stringent about policing Colab content, which could open the door for those who want to use the service for less-than-ethical purposes.

Users of DeepFaceLab, an open-source deepfake generator, became aware of the terms-of-use change last week, when several of them received an error message after attempting to run DeepFaceLab in Colab. The warning stated: "You may be executing code that is prohibited, which may limit your ability to utilize Colab in the future. Please go to our FAQ for a list of acts that are not permitted."

Not every piece of code triggers the warning. This reporter was able to successfully run one of the most popular deepfake Colab projects, and Reddit users report that another prominent deepfake Colab project, FaceSwap, remains operational. This suggests that enforcement will rely on blacklists rather than keywords, and that it will be up to the Colab community to report code that violates the new rule.

"While supporting our objective to offer our users access to significant resources such as TPUs and GPUs, we continually evaluate avenues for misuse in Colab that run counter to Google's AI principles. In response to our frequent evaluations of harmful behaviors, deepfakes were added to our list of activities forbidden from Colab runtimes last month," a Google representative said via email. "Preventing misuse is a never-ending game, and we can't share particular methods, since counterparties may use the information to bypass detection systems. In general, we have automated technologies that detect and prevent various sorts of misuse."

According to data from Archive.org, Google quietly updated the Colab terms in mid-May. The prior prohibitions on things like denial-of-service attacks, password cracking and torrent downloads remain in place.

Deepfakes can take various forms, but one of the most prevalent is videos in which a person's face has been convincingly superimposed onto another person's. Unlike the crude Photoshop jobs of the past, AI-generated deepfakes can in some cases replicate a person's body movements, microexpressions and skin tones better than Hollywood-produced CGI.

As several viral videos have demonstrated, deepfakes can be harmless, even entertaining. But hackers are increasingly using them to target social media users in extortion and fraud schemes. They have also been used in political propaganda, for example to create videos of Ukrainian President Volodymyr Zelenskyy giving a speech about the war in Ukraine that he never actually gave.

Rising deepfake scams

According to one report, the number of deepfakes online grew from 14,000 to 145,000 between 2019 and 2021. Forrester Research predicted that deepfake fraud scams would cost $250 million by the end of 2020.

"The issue that's most pertinent when it comes to deepfakes especially is an ethical one: dual use," Vagrant Gautam, a computational linguist at Saarland University in Germany, explained in an email. "Think of weapons, or chlorine. Chlorine has been employed as a chemical weapon as well as a cleaning agent, so we deal with it by first considering how harmful the technology is and then agreeing, for example through the Geneva Protocol, not to deploy chemical weapons against one another. Unfortunately, there are no industry-wide consistent ethical practices for machine learning and AI, but it makes sense for Google to develop its own set of conventions governing access to and the ability to create deepfakes, especially since they're frequently used to disinform and spread fake news, which is a serious problem that is only getting worse."

Os Keyes, an adjunct professor at Seattle University, also praised Google's decision to ban deepfake projects from Colab, but noted that more needs to be done on the policy front to prevent their creation and spread.

"The way that it has been done obviously exposes the poverty of depending on corporations' self-policing," Keyes said. "Deepfake creation should not be an acceptable kind of job anyplace, and it's wonderful that Google isn't joining in… The prohibition, however, does not take place in a vacuum; it takes place in an atmosphere where effective, responsible, and responsive oversight of these types of development platforms (and corporations) is missing."

Others, especially those who benefited from Colab's formerly laissez-faire approach to governance, may disagree. AI research lab OpenAI initially declined to open-source GPT-2, a language-generating model, for fear that it would be misused. This prompted groups such as EleutherAI to use tools like Colab to build and release their own language-generating models, ostensibly for research purposes.

Connor Leahy, a member of EleutherAI, said that the commoditization of AI models is part of an "inevitable trend" toward ever-cheaper production of "convincing digital content," one that will continue whether or not the code is shared. AI models and tools, he believes, should be widely available so that "low-resource" users, such as academics, can better analyze them and conduct their own safety-focused research.

"Deepfakes have the potential to go against Google's AI principles in a big way. We hope to be able to distinguish between harmful and benign deepfake patterns, and will adjust our policies as our approaches improve," the representative said. "Users who want to try out synthetic media projects in a safe fashion can speak with a Google Cloud representative to vet their use case and see whether other managed computing services in Google Cloud are a good fit."
