Buyer beware
The future of e-commerce will be dystopian unless we reimagine it
It would have been hard to read mid-century dystopian sci-fi and not worry about the future of interior life. In The Demolished Man, the Espers could read the minds of others. They were employed as therapists and police detectives, which complicated Ben Reich’s plan to stuff a pistol in the mouth of a rival and fire. In The Minority Report, which was later adapted for the big screen, precogs were mutant clairvoyants. Their visions led to the arrest of future lawbreakers. George Orwell’s Nineteen Eighty-Four was set in a world where heretical thinking was, itself, a crime. There, people had to hide their heterodoxy from the Thought Police, who spied on the population through telescreens. Such worries would only have been sharpened by the rise of totalitarianism and Cold War espionage.
That era is behind us, but the stuff of dystopian sci-fi is permeating e-commerce. It’s not that tech companies are trying to peep our criminal intentions (except, perhaps, Palantir). They’re just trying to get inside our heads to see if we have intentions to buy stuff. Algorithms have been predicting what we’re likely to buy ever since credit cards made it easier for researchers to find patterns in purchase histories. The Internet added searches, scrolls, and clicks to the database. I remember reading, more than a decade ago, about Target’s “pregnancy-prediction model,” which helped the retailer determine when to start marketing baby stuff to expectant families.
Agentic commerce — the term for AI agents shopping on our behalf — could be the final stage of this arc. As for what exactly that means, the corporate literature is barely instructive. Bain & Company said agentic commerce will “transform how consumers discover, research, compare, and purchase products.” McKinsey vaunted the benefits of “the new paradigm.” Conversion rates will rise, and customer acquisition will be more efficient. The more I read the bloodless prose of corporate America, the more I wondered: so Google searches are just going to be displaced by ChatGPT prompts?
Stripe says it’s supposed to be “utopian.” In its recent annual letter, the payments giant described the five levels of agentic commerce and located our current spot in the arc. We’re somewhere between the first and second levels, where AI chatbots are used to fill out web forms and discover products. The third level is the same, except the prompt can be vague because the AI shopper knows our preferences. The fourth level is where we delegate the buy-or-no-buy decision to the AI shopper, so that we no longer have to tend to the screen as the machine is buzzing. And the fifth is utopia, “where the things you need show up right before you need them, without you having to ask.”
Utopia is still far, far away. Last year, an American journalist asked ChatGPT to buy him cheap eggs. He reported that the AI chatbot went “rogue” and bought a dozen eggs for more than $30 without his approval. This year, Canadian journalists reported on their own shopping experiences. They weren’t as comically grim, since the AI chatbots couldn’t go rogue, but they were beset with misconstrued prompts and information of dubious quality. “When it comes to apparel shopping,” wrote one of them, “ChatGPT is not ready for prime time yet.”
Agentic commerce is stuck between the lowest levels because of an ancient problem: misunderstanding. AI chatbots are not yet the stuff of dystopian sci-fi, such as Espers who can read minds, or precogs who can see the future, or the Thought Police who can bug rooms. AI shoppers are limited to making inferences from words, which can say only so much.
Making sense of words is only a small part of understanding. I once had a boss who always spiked my heart rate and scorched my armpits. Even her cheery compliments were met with anxiety, which I came to discover was appropriate because it turned out she was thinking of how to get rid of me. Words didn’t alert me to what was going on, and I have no words to explain what did. It was just something about her, perhaps an unapparent twitch in her demeanour or a pheromonal scent. So much of the communication between people is neither written nor spoken. It’s tacit.
For agentic commerce to level up, AI shoppers will need to master something like tacit understanding. They may already be on the way there. Claude can seem like it calibrates responses to the subtext of a user’s prompt, as if it were an emotionally intelligent friend who lets you vent after sensing you want catharsis more than a solution.
The problem is that AI shoppers have to start with our prompts, and there is only so much to infer from our prompts in particular. We’re notorious bullshitters. We tell tales about the meaning of our drudgery to get out of bed in the morning; we rationalize our base desires for social status with cover stories about wanting to make the world a better place; we make plans for our lives as if we know what will make us happy. Our self-deception is common enough to fill a book. How can we be honest with an AI shopper about what we want if we can’t, first, be honest with ourselves?
A lot of shopping is as bullshit-riddled as existential coping. We buy things not only for their functional utility, but also for what they signal to others about ourselves. I may not only want running shoes that fit. I may also want a pair that will signal to other runners how impressive I am. With the right shoes on my feet, they won’t suspect I haven’t entered a road race in years. The right shoes will also distract from my cadence, heel lift, and the swing of my arms as I run by, all possible signs of my hobbyism. My preference, then, may be for obnoxiously colourful, expensive shoes, such as the Asics Superblast. Or if countersignalling is more my style — because nothing screams “I’m impressive” louder than silent apathy — my preference may be for a cheap and plain pair, such as the Brooks Ghost. To make the right purchase, an AI shopper would need to accurately infer a lot about my motivations and milieu.
What we buy is an act of communication to ourselves and others, and so shopping, too, draws on multiple senses. I have a friend who once told me he couldn’t find his Platonic ideal of a French fry in Ottawa: a thick-cut baton, the crisp exterior enwrapping a fluffy centre, made to order at a shop where malt vinegar and frying oil loiter in the air. What he wanted was a taste of home, which for him was Scotland, where his Platonic ideal is widely available. Such multisensory experiences are hard to get from product descriptions and pictures alone.
AI shoppers can help because, as Michael Polanyi wrote, we know more than we can tell. Wanting may be an impulse that lights up the brain before we’re consciously aware of its existence. When we can’t find the words to clarify our wants, the words can be pulled out of us by a curious friend or dictated to us by a tastemaker. A curious friend is helpful if we know what we want and can recognize the revelatory words when they come. A tastemaker is helpful if we don’t actually know what we want and need to be told. An AI shopper can pretend to be Socrates or a lifestyle influencer, but this world of make-believe is still stuck between the bottom levels of agentic commerce.
Except for the most repetitive and mundane purchases, like garbage bags and toilet paper, this utopia where AI shoppers buy what we want before we prompt them is far, far away. Shopping is the application of social and embodied knowledge, and AI shoppers have neither our social communities nor our bodies to decipher our worlds with. It’s attractive to think that AI shoppers don’t need any of that, that they can instead reduce humanity to electrical currents and chemical reactions and then detect the patterns of cause and effect with neural links and mechanical bodies equipped with e-noses, cameras, and microphones.
I don’t know why we should want to rush the arrival of this ostensible utopia. For agentic commerce to fulfil its utopian promise, AI shoppers need to possess the capabilities of dystopian sci-fi, like those of Espers, or precogs, or the Thought Police. They need to see which food videos on Instagram caught my eye and for how long, as well as which ingredients are in my fridge. They need to have heard that my wife has no opinion on what we should eat for dinner tonight. They need to infer the meaning of the impulse that lit up my brain before I find the bullshit words to describe it. What difference does it make that the AI shopper is neither clairvoyant nor a voyeur if the effect is the same?
To get to the promised land, I don’t think we need better AI shoppers as much as we need better utopias.