Weekly writing about how technology and people intersect. By day, I’m building Daybreak to partner with early-stage founders. By night, I’m writing Digital Native about market trends and startup opportunities.
If you haven’t subscribed, join 60,000 weekly readers by subscribing here:
The "Egg Theory" of AI Agents
When I think of AI agents, I think of the “egg theory” in consumer psychology.
When instant cake mixes came out, they sold poorly. Making a cake was too quick and simple. People felt guilty about not contributing to the baking.
So companies started requiring you to add an egg, which made people feel like they contributed. Sales soared. It turns out: there’s such a thing as too easy—and not just with eggs.
A cousin of the egg theory is the IKEA effect, a cognitive bias in which people place higher value on things they helped build or create.
The egg theory and the IKEA effect can teach us a lot about AI agents and which products will prevail. In most use cases, AI products shouldn’t totally remove the human from the loop; people like control, or at least the illusion of control.
Let’s take two examples, one consumer and one enterprise.
Say I’m booking a trip. Today, I’d do that through an online interface—Booking.com, Expedia, Airbnb, Google Flights. I still do the legwork. Soon, though, I might book a trip via chatbot. I might say to an AI agent, “Book me a flight to Paris on July 3rd.”
It would be uncomfortable to remove me entirely from the workflow. I might feel lazy, guilty, even nervous about it. Did the bot book the right flight? Does it know I prefer morning flights, hate red eyes, and am a loyal Delta SkyMiles member? A better path would be the chatbot saying, “Sure Rex, here are three options for you to choose from.” I feel in-the-loop, in control. The agent does the grunt work—the thankless stuff—while tactfully including me in select moments.
Or take an example in enterprise. Say I run a small business and I’ve asked an agent to pay all the May invoices. The agent can probably do a better job than I can; after all, it has more data and there’s no room for human error. But the best interface probably includes approvals by me before payments are released; this gives me comfort, and makes me feel like I didn’t completely neglect an important part of my business.
There’s a difference between an AI agent and an AI copilot. Language matters. The latter implies human augmentation rather than human replacement. I imagine copilots and augmentation will be more palatable to people. (This will be especially true as we adjust to the reality of software doing work for us.)
Of course, human involvement and approvals will be especially important in high stakes professions—medicine, law, and so on. But I suspect that human involvement will be key in nearly all workflows, regardless of the stakes. The question for each product is figuring out what their proverbial egg is 🍳
For some, the prompt might be enough of an egg. Using Midjourney, I feel that my prompt-writing is a sufficient contribution. And Midjourney gives me another moment of human involvement, producing four designs for me to choose from.
As the human, I make both the first decision (the prompt) and the final decision (which image we go with) in the creative process. The messy middle—the time-intensive part—is what’s removed.
In some cases, the human’s input might be technically unnecessary. Going back to the travel example, the bot may ask, “Would you rather sit in a middle seat, or a window seat for $30 more?” The AI is already trained on my data; it knows I’ll choose the window seat, even though it costs more. But simply making that decision on my behalf, without my involvement, could produce a worse user experience. Injecting a bit of friction keeps the user feeling in control, improving NPS and retention.
Technology is an adjustment; AI will take some getting used to. There’s a reason self-driving cars still have steering wheels: removing the steering wheel would probably make us freak out. Too much change, too fast. The best companies will be savvy in how they embed human decision-making into workflows, rather than removing the need for human input altogether. They’ll also inject moments of familiarity to balance out the discomfort of the new.
Winning products might not be those that show off the full extent of current technology, rendering humans obsolete. Those products will be fancy, but they’ll ultimately fail by missing a key point of user psychology.
Winning products, rather, will be those that offer a bridge from the world of human work to the world of software work, making us feel comfortable and in control along the ride.
Shorter piece this week for the shorter week. Back next week with a deep-dive into an overlooked segment of the startup world.
Until then 👋
Thanks for reading! Subscribe here to receive Digital Native in your inbox each week: