AI Deep Fakes are no April Fool’s Joke

And so it begins. As predicted, with the rapid and largely uninhibited rise of AI, deepfakes are spreading across cyberspace, causing online turmoil in a rising swell of hijacked identities and misrepresented brands.

Paradoxically, a new reality of this revolutionary technology is the growing spread of fake content, leaving users unsure which content to trust and leaving those whose identities are stolen feeling exploited.

Celebrities and creators alike are feeling the effects, with notables like Mr. Beast, Jennifer Aniston, Tom Hanks, and Taylor Swift already falling victim to this misuse.

One creator whose likeness was used without her consent described the experience as “violating.” While on her honeymoon, Michel Janse discovered that her likeness, paired with a fake husband, was being used in an online advertisement to promote erectile dysfunction pills. “It was me, in my clothes, in my bedroom, but I didn’t do an ad there,” she said.

The ad had pulled visuals from a post she made more than a year earlier sharing heartfelt emotions about her divorce. “It was by far the most personal thing I’ve ever shared,” Janse recalled. Adding insult to injury, the ad included a link directing viewers to content Janse described as “basically pornographic.”

Rahul Titus of Ogilvy, a British marketing and PR firm, said this is a major issue for celebrities right now “because anybody can take the likeness of somebody and use it. With the way social media works and the spread of fake news, the damage potential is endless.”

Some preliminary laws exist at the state level to restrain deepfakes, as in California and Illinois. The FCC and FTC have taken steps to ban some deepfake uses, while platforms like Google, Meta, and TikTok have announced guidelines to label and, ideally, limit AI deepfakes.

Lawmakers have recently introduced federal legislation ahead of this year’s elections, but no federal regulation has yet been passed. For now, the only real recourse for those whose likeness is hijacked is to sue the perpetrator, whether a company or an individual, according to Wasim Khaled, CEO and co-founder of Blackbird.AI, a narrative and risk intelligence service.

Some companies are hiring verification services to monitor social media and identify unauthorized AI use. To protect consumers, Titus suggested requiring brands to “declare and disclose” all use of AI content, though he acknowledged that such a requirement is unlikely to be put in place quickly.

“As [AI] takes hold of influencer marketing and creative industries,” said Titus, “you’ve got two options: you either embrace it or you fight it.” How brands, creators, celebrities, and consumers act on those options will no doubt heavily shape the future of marketing and messaging.

Read more here.

Visit Marketing and Advertising Company Syndicate Strategies to Supercharge Your Sales
www.syndstrat.com