In a world of companies burning money and resources at a breathtaking rate, Nilay Patel’s video essay on the state of AI offers a refreshing level of clarity.
The next time someone asks me what I think about AI, I will send this video with a note that I agree with all of it.
AI is the most complex thing to happen to the technology industry, and Patel nails many of the reasons why.
Here is a bit of his argument, after he outlines just how unpopular AI has become in the real world:
I also think it’s incredibly important for our politicians and tech executives to make sure our political process makes people feel empowered, not helpless, which is a specific kind of nihilism they have all greatly contributed to. The violence is a result of that helplessness and nihilism, and the most powerful people in our society ought to reckon with that, especially as they run around saying AI will wipe out all the jobs. I’m not even exaggerating about that — here’s Anthropic CEO Dario Amodei saying he thinks AI will wipe out all the jobs:
Dario Amodei: Entry-level jobs in areas like finance, consulting, tech and many other areas like that — entry-level white-collar work — I worry that those things are going to be first augmented, but before long replaced by AI systems. We may indeed — it’s hard to predict the future — but we may indeed have a serious employment crisis on our hands as the pipeline for this early-stage, white-collar work starts to contract and dry up.
What I see when I encounter clips like this is the true gap between the tech industry and regular people when it comes to AI — the limit of software brain. Like I said, everyone in tech understands how much regular people dislike AI. What I think they’re missing is why. They think this is a marketing problem. OpenAI just spent $200 million on the TBPN podcast because the company thinks it will help make people like AI more. Sam Altman has said so explicitly:
Sam Altman: Oh, they are genius marketers and I would love to have better marketing. Somebody said to me recently that if AI were a political candidate, it would be the least popular political candidate in history. And given the amazing things AI can do, I think there’s got to be better marketing for AI.
It feels like someone just needs to say this clearly, so I’m just going to do it. AI doesn’t have a marketing problem. People experience these tools every single day! ChatGPT has 900 million weekly users, trending to a billion, and everyone has seen AI Overviews in Google Search and massive amounts of slop on their feeds.
You can’t advertise people out of reacting to their own experiences. This is a fundamental disconnect between how tech people with software brains see the world and how regular people are living their lives.
As long as Dario Amodei, Sam Altman, and their peers are dressed up as pilots, I’m not sure I want to be on the plane. Nihilism without a parachute doesn’t sit well with me.
John Gruber, in his link to the video:
Something is profoundly off in the computer industry when it comes to software broadly and AI specifically. It’s up for debate what exactly is off and what should be done about it, but the undeniable proof that something is profoundly off is the deep unpopularity surrounding everything related to AI. You can’t argue that the public always turns against groundbreaking technology. The last two epoch-defining shifts in technology were the smartphone in the 2000s, and the Internet/web in the 1990s. Neither of those moments generated this sort of mainstream popular backlash. I’d say in both of those cases, regular people were optimistically curious. The single most distinctive thing about “AI” today is the vociferous public opposition to it and deeply pessimistic expectations about what it’s going to do.
The comparison to the 90s is a good one. We still had websites after the dot-com bubble, and we will have AI tools after this bubble bursts. John is right, though: I don’t think many people were opposed to online shopping in the way some are opposed to the rise of LLMs.
From a financial standpoint, thinking that the 2020s are just the 1990s on repeat is short-sighted; the horrifying deals between AI companies and the likes of Nvidia and CoreWeave make the late 1990s look like child’s play.
The truth is simple: our economic and social moment is in the hands of people who do not understand the power they wield. They write hand-wringing essays about the dangers of new models with one hand while cashing checks with the other.
Many people believe that AI is inevitable. “Get on board or get left behind” is the tone more people and companies adopt every day. In their worldview, to be concerned about AI is to miss what is possibly the most important change we’ve ever seen in technology. Expressing worry is dismissed as naive and anti-progress. The desire to slow down is simply lost on some of these folks.
Look, I’m not dumb enough to believe the genie can be put back in the bottle, but I’m also smart enough to know that we have no idea what we’re doing.
Waiting and hoping for government regulation to save jobs, limit environmental damage, and rein in the mass data collection required to feed LLMs is not a plan. Elected officials are not equipped to move quickly enough to keep up; industry leaders are incentivized to push harder into the unknown.
The two may never meet in time.
The dangers of AI are both overwhelmingly large and heartbreakingly personal.
Mass layoffs and environmental concerns feel too big to wrap our arms around. Reading stories about people who have harmed themselves (and others) after spending time with LLM-powered chatbots feels too brutal to fully understand.
Turning the world into software inevitably includes these tradeoffs, as Nilay Patel continues:
I’ve reviewed a lot of tech products over the past decade and a half, and all I can tell you is that it is a failure when you ask people to adapt to computers. Computers should adapt to people. Asking people to make themselves more legible to software — to turn themselves into a database — is a doomed idea.
It’s an ask so big that I can’t imagine a reward that would make it worth it for anyone, even if the tech industry wasn’t constantly talking about how AI will eliminate all the jobs, require a wholesale rethinking of the social contract and — oops — also the latest models might cause catastrophic cybersecurity problems that might lead to the end of the world.
Does this sound like a good deal to you? Can you market your way out of this? This only makes sense if you have software brain — if your operative framework is to flatten everything into databases that you can control with structured language. The people paying thousands of dollars a month to set up swarms of OpenClaw agents and write thousands of lines of code are people who look at the world and see opportunities for automation, to repeat tasks, to collect data. To build software. AI is great for them. It’s even exciting in ways that I think are important and will probably change our relationship to computers forever.
For everyone else, AI is just a demanding slop monster. It’s a threat. I’m not saying regular people don’t use Excel or Airtable to plan their weddings or have fun throwing PowerPoint parties, or even that AI won’t be useful to regular people over time. I think a lot of people enjoy data and tracking different parts of their lives. I’m wearing a Whoop band as I write this. I’m just saying these things aren’t everything. Not everything about our lives can be measured and automated and optimized, and it shouldn’t be.
Logic has been washed away by the tidal wave of cash and influence currently swelling through the industry. If my company were burning billions of dollars a year on increasingly unpopular products, I would have lost my job many times over.
Instead, the Silicon Valley rich and powerful keep getting richer and more powerful, at the expense of their users and the planet. AI is capable of incredible things, but it is ushering in terrible things at the same time. To ignore that is both naive and foolish.