Leading voices in the music industry and other creative fields are lauding the introduction of a new bill in the US House of Representatives that aims to protect people from having their image and voice used in AI-generated deepfakes.
The No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act was brought forward on Wednesday (January 10) by a bipartisan group of House Representatives led by Democrat Rep. Madeleine Dean of Pennsylvania and Republican Rep. Maria Salazar of Florida.
The bill would go a long way toward establishing a “right of publicity” at the federal level in the United States.
A right of publicity is an intellectual property right that protects against the unauthorized use of a person’s likeness, voice or other aspects of their identity. Unlike many other intellectual property rights, the right of publicity isn’t consistently recognized in laws around the world, or even within the US.
Of the 50 US states, 19 have a law explicitly recognizing the right of publicity in some form, including California, New York and Florida, while another 11 states have recognized publicity rights as a matter of common law.
The No AI FRAUD Act establishes “an intellectual property right that every individual holds over their own likeness and voice, allows individuals to seek monetary damages for harmful, unauthorized uses of their likeness or voice,” and “guards against sexually exploitative deepfakes and child sexual abuse material,” according to a statement from Rep. Dean.
The bill seeks to “balance the rights against the First Amendment to safeguard speech and innovation,” according to a fact sheet on the proposed law circulated by the sponsoring representatives.
“From an AI-generated Drake/The Weeknd duet, to Johnny Cash singing Barbie Girl, to ‘new’ songs by Bad Bunny that he never recorded, to a false dental plan endorsement featuring Tom Hanks, unscrupulous businesses and individuals are hijacking professionals’ voices and images, undermining the legitimate works and aspirations of essential contributors to American culture and commerce,” the fact sheet stated.
Republican House Rep. Rob Wittman of Virginia, a co-sponsor, said the proposed legislation “is a crucial first step in safeguarding the intellectual property of our artists and creators from bad actors who may try to exploit their work.”
The proposed legislation has garnered the approval of a number of prominent voices in the music industry, including Recording Industry Association of America (RIAA) Chairman and CEO Mitch Glazier.
“The No AI FRAUD Act is a meaningful step towards building a safe, responsible and ethical AI ecosystem, and the RIAA applauds Representatives Salazar, Dean, Moran, Morelle, and Wittman for leading in this important area,” said Glazier.
“To be clear, we embrace the use of AI to offer artists and fans new creative tools that support human creativity. But putting in place guardrails like the No AI FRAUD Act is a necessary step to protect individual rights, preserve and promote the creative arts, and ensure the integrity and trustworthiness of generative AI.
“As decades of innovation have shown, when Congress establishes strong IP rights that foster market-led solutions, it results in both driving innovation and supporting human expression and partnerships that create American culture.”
The Human Artistry Campaign, a broad coalition of musicians’ and artists’ groups launched in 2023 to ensure that AI will not replace human culture and artistry, also threw its support behind the bill.
“The most unique and foundational aspects of any person’s individuality should never be misappropriated or used without consent,” Human Artistry Campaign Senior Advisor Dr. Moiya McTier said in a statement, adding that the bill is “a massive step forward in protecting people, culture, and art – while also urging other policymakers to follow their lead to shield us all from voice, image and likeness manipulation.”
The bill also gained the support of Universal Music Group (UMG) Chairman and CEO Sir Lucian Grainge.
“Universal Music Group strongly supports the ‘No AI FRAUD Act’ because no one should be permitted to steal someone else’s image, likeness or voice,” Grainge said in a statement.
“While we have an industry-leading track record of enabling AI in the service of artists and creativity, AI that uses their voice or identity without authorization is unacceptable and immoral. We call upon Congress to help put an end to nefarious deepfakes by enacting this federal right of publicity and ensuring that all Americans are protected from such harm.”
UMG, the world’s largest music rightsholder, has been among the most vocal proponents of establishing a federal right of publicity over the past year, as AI tools and capabilities have expanded at breathtaking speed.
At a hearing of the US Senate Judiciary Committee’s subcommittee on intellectual property in July 2023, Jeffrey Harleston, UMG’s General Counsel and Executive Vice President for Business and Legal Affairs, called for a federal right of publicity as one of three new policies the company would like to see enacted in light of the AI boom.
The other two are the ability of copyright owners to see what has gone into the training of AI models, and the labeling of AI-generated content – something that social media companies like YouTube and TikTok are moving towards.
The new House bill follows the release last October of a “discussion draft” of a bill in the US Senate, titled the NO FAKES (Nurture Originals, Foster Art, and Keep Entertainment Safe) Act. While aiming to achieve similar ends, that proposed bill focuses more directly on businesses involved in unauthorized deepfakes.
It would “hold individuals or companies liable if they produce an unauthorized digital replica of an individual in a performance,” and “hold platforms liable for hosting an unauthorized digital replica if the platform has knowledge of the fact that the replica was not authorized by the individual depicted.”
Like the new House bill, the Senate bill was also brought forward by a bipartisan group of lawmakers, composed of Democratic Sen. Chris Coons of Delaware, Republican Sen. Marsha Blackburn of Tennessee, Democratic Sen. Amy Klobuchar of Minnesota and Republican Sen. Thom Tillis of North Carolina.