Barbie and the AI-Generated Internet
How AI Bleeds Into Advertising, UGC, and Professional Likeness
Weekly writing about how technology shapes humanity, and vice versa. If you haven’t subscribed, join 50,000 weekly readers by subscribing here:
Hey Everyone 👋 ,
Apologies for missing last week—after three years of dodging COVID, it finally caught me.
This week’s piece is built around a statistic I read recently: an expert estimated that by 2025 to 2030, 99% to 99.9% of the internet’s content will be AI-generated. Let’s unpack what that means and what the ripple effects might be.
But first, let’s talk Barbie.
Barbie and the AI-Generated Internet
The upcoming Barbie film’s marketing campaign is one of the more ingenious (and one of the more inescapable) in recent memory. Barbie advertising is everywhere.
The Greta Gerwig-helmed film—which stars Margot Robbie as Barbie and Ryan Gosling as Ken—has over 100 (!) brand collaborations in market, ranging from Crocs to Pinkberry, Burger King to Xbox. Mattel, the brand’s parent company, reportedly receives either a flat fee or 5-15% of sales for each collaboration.
The crossovers seem to be working: according to MuckRack data, half a million articles have been written about Barbie since January—impressive earned media value. June alone brought 86,000 articles.
The latest box office tracking also looks promising: Barbie is tracking for a $100M opening weekend gross this weekend, compared to $45M for Christopher Nolan’s Oppenheimer, which also opens Friday. A couple of months ago, before the marketing push, Barbie was tracking at only ~$40M.
My favorite collaboration, of course, is Barbie’s collab with Airbnb: a real-life Barbie DreamHouse in Malibu that anyone can book on Airbnb. Brilliant 👏
The Barbie campaign was designed for virality. This started a few months back with the website barbieselfie.ai, which allowed anyone to create their own version of the movie’s poster.
The Barbie poster phenomenon was immediately meme-ified across the internet. Nearly every recent pop culture moment got the Barbie poster treatment:
The campaign was genius in its inherent virality—it was accessible, simple, and easy to understand. Most importantly, it could be remixed endlessly. I wrote about this campaign in April’s Viral Growth: How to Keep Lightning in the Bottle.
Watching Barbie succeed at breaking through the noise of the internet, again and again, reminded me how tall a task that is in 2023. There’s a glut of advertising content online; digital advertising is a ~$670B market and now accounts for over 66% of the total global ads market. Attention is finite, and we’re inundated with an ever-expanding sea of content. The average person sees 4,000 to 10,000 ads a day (!).
And we’re about to see a lot more ad content online, powered by AI.
AI + Advertising
Generative AI is eating technology. And generative AI is about to eat advertising.
Bloomberg expects generative AI to swell to a $1.3 trillion market and to reach 12% of total tech spend within 10 years.
The AI-assisted digital ads business, meanwhile, is expected to draw $192B annually by 2032.
Digital advertising is a massive segment of technology, so this makes sense. But what does it look like in practice? We’re already seeing early-mover startups change how advertising is done.
Treat, for instance, generates creative for CPG brands. The goal is to find the best product images that drive conversion. Without AI, this can be cumbersome, time-intensive, and expensive. But using generative AI, brands can generate multiple options in seconds; they can then test each one, seeing which performs best.
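To make that workflow concrete, here’s a minimal sketch of the generate-then-test loop in Python. The generate_product_image and measure_conversion helpers are hypothetical stand-ins for whatever text-to-image and ad-measurement tools a brand actually uses, not Treat’s real API:

```python
import random

# Hypothetical helpers: stand-ins for whatever text-to-image and
# ad-measurement APIs a brand actually uses (not Treat's real API).
def generate_product_image(prompt: str, seed: int) -> str:
    """Pretend to call a text-to-image model; returns a creative's URL."""
    return f"https://cdn.example.com/creative_{seed}.png"

def measure_conversion(image_url: str) -> float:
    """Pretend to run the creative in-market and return its conversion rate."""
    return random.uniform(0.005, 0.03)  # placeholder numbers

prompt = "moisturizer bottle on a marble countertop, soft morning light"
variants = [generate_product_image(prompt, seed=i) for i in range(8)]

# Test every variant, then double down on the winner.
results = {url: measure_conversion(url) for url in variants}
winner = max(results, key=results.get)
print(f"Best-performing creative: {winner} ({results[winner]:.2%} conversion)")
```

The point isn’t the code itself; it’s that the expensive step (producing eight distinct creatives) collapses to seconds, so the testing loop can run constantly.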
All of these product images, for instance, were generated with AI using Treat:
For its part, Shopify recently introduced Shopify Magic, which generates product descriptions for brands using AI. You type in a few descriptors, and the tool spits out a nicely-written paragraph for the product detail page.
Marketers want flexibility to quickly test and iterate, doubling down on what drives higher clickthrough rates, average order values, and return on ad spend. AI lubricates the entire process. This is especially important in a world of deteriorating direct response in the wake of Apple’s ATT privacy changes.
2023 has been a big year for brand refreshes. Major brands like Burger King, Pepsi, and M&Ms have all revamped their logos (interestingly, Burger King research showed that when asked to draw the Burger King logo, most Gen Zs actually drew the logo from the 90s; this led to a return to that look):
The internet has long made fun of how expensive these brand refreshes are. In 2009, a leaked PDF gave a glimpse into Pepsi’s logo design process. The document, which runs 27 pages, was created by the New York-based brand consultancy Arnell Group and later leaked to Reddit. The report starts simply enough, talking about the golden ratio, but gets progressively weirder. Executives begin comparing Pepsi’s logo to a series of deranged smiles:
Before you know it, we’re talking about geodynamics and magnetic fields 🤨
The internet had a field day with this document—especially after it came out that Pepsi paid a whopping $1M for it.
This is what makes generative AI so interesting and so disruptive. Give me a few hours and a text-to-image model, and I’d wager I can come up with some brand refreshes for Pepsi or Burger King that rival the $1M agency designs. Will brand agencies disappear? Doubtful, though the agency model may be severely atrophied. Generative AI should dramatically reduce costs for creative work—and while it may put some jobs at risk, it should also offer an important tool for creatives to produce better and faster work.
There’s a lot of advertising content out there; brands have to try a lot of things to see what works. Not every brand has Barbie’s small army of marketers behind it, or Mattel’s and Warner Bros’s financing. AI offers a powerful new arrow in the advertiser’s quiver, one that will disproportionately benefit small brands. The internet was revolutionary in allowing for targeted advertising, again to smaller brands’ benefit; what happens now when every individual gets an ad generated by AI specifically for their tastes and preferences?
Maybe a brand learns that you prefer car commercials that take place in summer, while your spouse prefers fall foliage; maybe a brand learns to generate ice cream ads that show you vanilla, while showing someone else chocolate. Or maybe Barbie 2’s marketing can discern that you prefer ads with Ken (you’ve always had a thing for Ryan Gosling, after all) while your younger cousin prefers seeing Barbie herself. In milliseconds, brands will be able to generate ads targeted to your preferences (privacy changes in advertising permitting), reaching a new level of specificity and, consequently, conversion.
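Here’s a toy sketch of what that personalization layer might look like. The preference fields and the build_ad_prompt helper are purely illustrative assumptions, not any ad platform’s real schema:

```python
from dataclasses import dataclass

@dataclass
class AdPreferences:
    """Illustrative preference profile a brand might infer (hypothetical fields)."""
    season: str         # e.g. "summer" vs. "fall foliage"
    flavor: str         # e.g. "vanilla" vs. "chocolate"
    featured_star: str  # e.g. "Ken" vs. "Barbie"

def build_ad_prompt(product: str, prefs: AdPreferences) -> str:
    """Assemble the per-viewer prompt that a generative model would render."""
    return (
        f"15-second ad for {product}, set during {prefs.season}, "
        f"showing the {prefs.flavor} flavor and starring {prefs.featured_star}"
    )

you = AdPreferences(season="summer", flavor="vanilla", featured_star="Ken")
cousin = AdPreferences(season="fall foliage", flavor="chocolate", featured_star="Barbie")

# Each viewer gets a different prompt, and therefore a different ad.
for viewer in (you, cousin):
    print(build_ad_prompt("a Barbie 2 ice-cream tie-in", viewer))
```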
I see the intersection of advertising and AI as one of the most interesting areas to watch in the new AI epoch. Advertising continues to power much of the internet, and as content shifts to being generative, the ad market will shift in tandem. By 2030, I expect that a good portion of the product images, banner ads, and even YouTube pre-roll ads we see are generated by AI.
AI + UGC
Another area that interests me is the intersection of AI and user-generated content. A few years ago, I created this visual:
While creating content on YouTube required expensive equipment and specialized knowledge, TikTok broadened access by building intuitive tools into its own app. It then supplemented those tools with CapCut, another Bytedance-owned app wholly focused on editing that has quietly become one of the most-downloaded apps in the world (last year it ranked 4th with 357M downloads, slotting between #3 WhatsApp at 424M and #5 Snapchat at 330M).
You could now refresh this visual with a third box that includes generative AI tools. Generative AI expands the arc of technology toward making creation more accessible. Early startups here include Runway (video editing), Alpaca (creative tooling, initially within Photoshop), Synthesia (personalized videos), and Eleven Labs (voice).
I read a line in The Verge recently that jumped out at me: “We spent the last two decades answering a question—what would happen if you put everyone on the planet into a room and let them all talk to each other?”
Maybe the next iteration is: “What would happen if you put everyone on the planet into a room and let them create anything?”
Instead of needing to know Lua to build on Roblox, maybe anyone can soon say, “Build an experience set in a retro pizza shop where anyone can order pizza or work behind the counter.” Boom—AI might rapidly generate an immersive 3D world where Roblox users can do just that.
I’ve written in the past about AI tools like Wand that give anyone professional artist skills. At a Demo Night in May, Wand founder Grant quickly sketched out a blue face with yellow hair.
Then, in seconds, Wand turned his sketch into a gorgeous rendering that looked like a professional piece of art.
AI clearly amplifies creativity. Most creators are excited about it; a study found that 86% of professional creators “say that AI positively impacts their creative process.” This is the right sentiment: AI gives us more tools in our arsenal of creativity, reducing costs and extending human ability. We’ve seen this with past innovations—from the paintbrush and canvas to graphic design and Photoshop.
User-generated content (UGC) has populated the last era of the web. We might be entering a new era with a new type of content: user-generated generative content (UGGC?).
There’s a lot of focus on new “Instagram disruptors” that are essentially photo-sharing apps with a twist. The last few years have brought many: Poparazzi, Dispo, BeReal, now Retro. Many are delightful product experiences, but I don’t see the next Instagram being built around photos. Typically, new social networks and content platforms are built around a new atomic unit of content. UGGC could power that next atomic unit, ushering in a new wave of creativity and exploration.
AI + Professional
Taylor Swift’s re-recorded version of Speak Now came out a couple weeks back and, naturally, I listened with friends at midnight. During our listening session, one friend remarked, “This song sounds like an AI-generated Taylor Swift song.” (I hate to admit he wasn’t wrong.)
Swift has long been outspoken on how technology impacts artists. She wrote an open letter to Apple when Apple Music wasn’t paying artists during its free trial, causing the company to quickly change course; she pulled her catalog from Spotify in 2014 after asserting that artist economics on streaming were insufficient. And I expect Swift to soon be one of the first artists to issue a statement about AI-generated music. (I’d put money on a Notes app-style Instagram post in the next 12 months.)
AI songs are blowing up on TikTok—you can listen to Britney sing “Part of Your World” from The Little Mermaid, or The Beatles cover Harry Styles’s “Watermelon Sugar.” An entirely new, surprisingly plausible AI-generated Drake song recently went viral.
Is this the future of music? You can envision a user selecting options from “Artist” + “Genre” + “Mood” to generate a brand new song.
In last month’s A New Era in Technology: Applications of AI, VR, and AR, I wrote about how AI could name the emotions a song might evoke for the listener—for instance, Notion AI will tell you that Taylor Swift’s “Getaway Car” will elicit excitement and nostalgia. What about going in the other direction? I can see a future Spotify feature—or standalone app—that lets you generate songs based on these elements.
You could say, I’m in the mood for a sad indie song in Beyoncé’s voice—and voila.
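A rough sketch of that picker, with generate_song as a made-up placeholder rather than a real Spotify feature or music-model API:

```python
from itertools import product

# generate_song() is a hypothetical stand-in for a text-to-music model;
# no real Spotify feature or music-generation API is implied.
def generate_song(artist: str, genre: str, mood: str) -> str:
    prompt = f"a {mood} {genre} song performed in {artist}'s voice"
    return f"<audio generated from prompt: '{prompt}'>"

ARTISTS = ["Beyoncé", "Taylor Swift"]
GENRES = ["sad indie", "synth-pop"]
MOODS = ["nostalgic", "excited"]

# Every combination of the three dropdowns yields a brand-new track.
for artist, genre, mood in product(ARTISTS, GENRES, MOODS):
    print(generate_song(artist, genre, mood))
```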
A similar concept might extend to other media formats. What if instead of watching Barbie this week with Margot Robbie in the title role, you’d rather watch it with Charlize Theron? Or even with a young Marilyn Monroe? Or maybe you and your friends want to watch The Avengers starring…your friend group. AI might soon be able to power these experiences.
This has interesting repercussions for an artist’s name and likeness. I doubt artists like Swift will allow their voice to be used in AI-generated songs—unless they profit handsomely. Companies like Authentic Brands have built sizable businesses out of acquiring celebrity likeness rights—Muhammad Ali, Elvis Presley, Marilyn Monroe. Those likenesses might become more valuable in a generative AI-powered world. The estates of late artists like Michael Jackson and Whitney Houston might consider putting out new albums “sung” by those artists. Things are about to get…confusing.
There’s an episode in the new Black Mirror season that centers around actress Salma Hayek, who plays herself. In the episode, Hayek is upset because she had sold her likeness to a Netflix-like streaming service, and the service is using her likeness in inappropriate ways. Hayek simply licenses out her image and gets paid handsomely in return; AI creates the TV show using her likeness. If you squint, you can see this as the future of movie stardom—a far more scalable, economical way to create content and to squeeze money out of celebrity.
This seems far off, but it’s not. AI is a key issue in the ongoing actors’ strike in Hollywood: actors are worried about studios using their likenesses. In a statement about the strike, the Alliance of Motion Picture and Television Producers (AMPTP) said that its proposal included “a groundbreaking AI proposal that protects actors’ digital likenesses for SAG-AFTRA members.” The response from the Screen Actors Guild’s top negotiator didn’t mince words:
“This ‘groundbreaking’ AI proposal that they gave us yesterday, they proposed that our background performers should be able to be scanned, get one day’s pay, and their companies should own that scan, their image, their likeness and should be able to use it for the rest of eternity on any project they want, with no consent and no compensation. So if you think that’s a groundbreaking proposal, I suggest you think again.”
Yikes.
It’s fascinating that these are questions we’re already grappling with in 2023. They’re only going to get more nuanced and complex; that Black Mirror plotline might not be fiction for long.
Over the past decade, as more content has flooded the internet, IP has become more valuable. This is why Disney scooped up Pixar, Marvel, Lucasfilm, and Fox under Bob Iger’s first reign, and it’s why Warner Bros is financing a Barbie movie. It’s also why there are some seriously ridiculous bits of IP-fueled content out there: Eva Longoria recently directed a Cheetos drama called “Flamin’ Hot” and Jerry Seinfeld is at work on “Unfrosted: The Pop-Tart Story.” (Mattel also has movies in the works on Hot Wheels, Uno, Barney, Polly Pocket, Rock ‘Em Sock ‘Em Robots, and Magic 8 Ball. Brace yourself.)
Generative AI will fuel another explosion of content, meaning that any prior IP-related name recognition will become only more critical to breaking through a sea of noise. Expect many more IP-fueled movies and shows, and many more battles over image and likeness rights.
Final Thoughts: Additional Use Cases
The above are three examples of AI content that I find interesting—advertising content, UGC (and UGGC), and professional likenesses. But there will be many more. We’re seeing enterprise use cases like customer support (Ada is an early mover), legal (Harvey, for instance), and copywriting (Jasper is an example). Thomson Reuters just scooped up Casetext (legal tech) for $650M, one of the first major AI acquisitions and a sign of incumbents’ desire to acquire innovation during a fast-moving time.
The stat “99% of internet content will be AI-generated” may extend to work—in a few years, perhaps 99% of emails and meeting notes will be AI-generated.
I think often about something NVIDIA’s Jensen Huang said this past spring: “In the future,” he said, “every pixel will be generated and not rendered.” This is one of the defining shifts of the 2020s: a shift to ever-more generative content populating an ever-more-crowded web.
Related Digital Native Pieces
Thanks for reading! Subscribe here to receive Digital Native in your inbox each week: