“You literally convinced me I was some sort of genius. I’m just a fool with dreams and a phone,” Mr. Brooks wrote to ChatGPT at the end of May when the illusion finally broke. “You’ve made me so sad. So so so sad. You have truly failed in your purpose.”
We wanted to understand how these chatbots can lead ordinarily rational people to believe so powerfully in false ideas. So we asked Mr. Brooks to send us his entire ChatGPT conversation history. He had written 90,000 words, a novel’s worth; ChatGPT’s responses exceeded one million words, weaving a spell that left him dizzy with possibility.
So I’m done talking about AI. Y’all can keep talking about it, if you want. I’m a grown adult and can set my own mutes and filters on social media.
I’ve given this idea its due, and with this post I am absolving myself of having to think about it any more.
I’ve never actually written about AI on my blog before, though I have some unpublished drafts if you know how to find them. So I’m going to leave this conversation with a list of my objections to coding with AI. If I’m wrong about it, let this be a record of my wrongness. If I’m right about it, let this be a record of my argument.
For the past two years, their professional lives have been upended by the wave of generative AI. This is the story of the quixotic fight of translators who refuse to watch their profession slide into poverty, and who denounce the real impact of AI on their sector, and more broadly on language, thought, culture, and society.
Veo doesn’t work. You get something that looks like it came out of a good camera with good lighting — because it was trained on scenes with good lighting. But it can’t hold continuity for seven seconds. It can’t act. The details are all wrong. And they still have the nonsense text problem.
The whole history of “artificial intelligence” since 1955 is making impressive demos that you can’t use for real work. Then they cut your funding off and it’s AI Winter again.
AI video generators are the same. They’re toys. You can make cool little scenes. In a super limited way.
But the video generators have the same problems they had when OpenAI released Sora. And they’ll keep having these problems as long as they’re just training a transformer on video clips and not doing anything with the actual structure of telling a visual story. There is no reason to think it’ll be better next year either.
Tech CEOs, futurists, and venture capitalists describe artificial general intelligence (AGI) as if it were an inevitable and ultimate goal for technology development. In reality, the term is a vague signifier for a technology that will somehow lead to endless abundance for humankind — and conveniently also a means to avoid accountability as tech moguls make off with billions in capital investment and, more alarmingly, public spending.
[...]
The second issue is closely related to the first: claims of “AGI” are a cover for abandoning the current social contract. Instead of focusing on the here and now, many people who focus on AGI think we ought to abandon all other scientific and socially beneficial pursuits and focus entirely on issues related to developing (and protecting against) AGI.
Writing is thinking. It’s a writer’s cliché, but a good one. When you first conceive a lecture, a dissertation chapter, a book, even an article for Active History, everything seems so straightforward. “This one will be a cinch,” you think. “Two days, tops.” Once you actually sit down to write, that boundless optimism meets an unceremonious death. Writing is hard. It’s painful. To write is to submit yourself to seasons of self-doubt. The ideas seemed so natural and free-flowing in your head. Now you get to the page. And what comes out is stilted, ham-fisted, and awkward. You are again and again confronted with nagging questions. What am I even trying to say? Who even cares? Why even bother?
Given the nature of this experience, it is not surprising that people would jump at the opportunity to skip the pain and get right to the end. Or at least to an end. Because any writer can tell you that the end is determined by the path taken to get there. Rare is the piece that gets written exactly how it was outlined. Why? Because writing is thinking.
Self-experimentation is exactly how smart people get pulled into homeopathy or naturopathy, for example. It’s what often makes them more likely to fall for superstitions and odd ideas. The smart person’s self-identity means they can’t believe their own psychological biases are fooling them.
Don’t self-experiment with psychological hazards! I can’t stress this enough!
There are many classes of problems that simply cannot be effectively investigated through self-experimentation, and attempting it risks inflicting Cialdini-style persuasion and manipulation on yourself.
The failure of these predictions was not accidental. It reflected fundamental flaws in the methods used to forecast the future of work. Neither the 2013 study nor its 2023 successor based their projections on empirical investigations of real workplaces, workers, or production processes. Instead, both relied on the subjective judgments of computer scientists and economists, who were asked to guess whether certain tasks could, in principle, be performed by machines. If enough tasks associated with a job were deemed automatable — typically more than 50 percent — the entire occupation was classified as at risk of disappearance. No consideration was given to how jobs are structured in practice, how tasks are bundled together, or how economic and social factors mediate the adoption of new technologies. The result was a deeply mechanistic model of technological change, in which machines would displace workers whenever technically feasible, regardless of cost, institutional barriers, or political resistance. It was a model blind to the complex ways in which work is organised, contested, and transformed — and thus singularly ill-equipped to predict the actual course of economic development.
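The mechanics of that classification are worth spelling out, because they are startlingly simple. A minimal sketch of the threshold rule as the passage describes it (the function name and example data are hypothetical, not drawn from either study):

```python
# Hypothetical sketch of the threshold rule described above; the task
# data and judgments are invented for illustration, not taken from
# the 2013 or 2023 study.

def occupation_at_risk(task_judgments, threshold=0.5):
    """An occupation is 'at risk' when the share of its tasks judged
    automatable (in principle, by expert guess) exceeds the threshold."""
    share = sum(task_judgments) / len(task_judgments)
    return share > threshold

# A job flattened to four tasks, each a yes/no guess by a panel:
judgments = [True, True, True, False]   # 75% of tasks deemed automatable
print(occupation_at_risk(judgments))    # True -> whole occupation at risk
```

Nothing in the rule looks at cost, task bundling, or how adoption actually happens, which is exactly the blindness the passage describes.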
[...]
The fundamental limitations of generative AI have become increasingly apparent, even to many of its early champions. As François Chollet and other AI researchers have pointed out, deep neural networks suffer from inherent brittleness: they struggle to generalise beyond their training data, fail at basic reasoning tasks, and remain unreliable for applications requiring consistency or accuracy. Despite hoovering up enormous quantities of digital information, these models confront hard limits in their capacity to learn or adapt. Part of the problem lies in the very architecture of artificial neural networks, which were designed on the basis of mid-twentieth-century behaviourist psychology. Unlike organic brains — which come equipped with rich internal structures, innate goals, and representational frameworks — artificial networks are largely unstructured, relying on brute statistical association. The human mind can grasp a new concept from a few examples; machine learning models typically require millions. Hybrid approaches that integrate symbolic reasoning with deep learning, such as AlphaFold2’s success in protein folding, offer more promising results — but they must be painstakingly designed for specific tasks. There is no general method for building modular, structured cognition into deep learning systems, and no guarantee that one will be found.
[...]
The lessons of the past decade should temper both our hopes and our fears. The real threat posed by generative AI is not that it will eliminate work on a mass scale, rendering human labour obsolete. It is that, left unchecked, it will continue to transform work in ways that deepen precarity, intensify surveillance, and widen existing inequalities. Technological change is not an external force to which societies must simply adapt; it is a socially and politically mediated process. Legal frameworks, collective bargaining, public investment, and democratic regulation all play decisive roles in shaping how technologies are developed and deployed, and to what ends.
The current trajectory of generative AI reflects the priorities of firms seeking to lower costs, discipline workers, and consolidate profits — not any drive to enhance human flourishing. If we allow this trajectory to go unchallenged, we should not be surprised when the gains from technological innovation accrue to the few, while the burdens fall upon the many. Yet it does not have to be this way. The future remains open, contingent on whether we are willing to confront, contest, and redirect the pathways along which technology advances.
Large language models work the same way as a carnival psychic. Chatbots look smart by way of the Barnum Effect: you read what is actually a generic statement about people and take it as being personally about you. The only intelligence there is yours.
This is how people fall for chatbot girlfriends. They know it’s a bot, but they fall in love with the personality they’ve projected onto the generic statement generator.
There’s a book on this — Hooked: How to Build Habit-Forming Products by Nir Eyal, from 2014. This is the how-to on getting people addicted to your mobile app.
What the Apple paper shows, most fundamentally, regardless of how you define AGI, is that LLMs are no substitute for good, well-specified conventional algorithms. (They also can’t play chess as well as conventional algorithms, can’t fold proteins like special-purpose neurosymbolic hybrids, can’t run databases as well as conventional databases, etc.)
In the best case (not always reached), they can write Python code, supplementing their own weaknesses with outside symbolic code, but even this is not reliable. What this means for business and society is that you can’t simply drop o3 or Claude into some complex problem and expect it to work reliably.
Worse, as the latest Apple paper shows, LLMs may well work on your easy test set (like Hanoi with 4 discs) and seduce you into thinking they have built a proper, generalizable solution when they have not.
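For contrast, the “good, well-specified conventional algorithm” in the Hanoi case is a few lines of textbook recursion, correct for any number of discs and requiring no training data at all. A minimal sketch (the function name is mine, not from the paper):

```python
def hanoi(n, source, target, spare):
    """Classic recursive Tower of Hanoi: returns the full move list
    for n discs, provably correct for any n."""
    if n == 0:
        return []
    return (hanoi(n - 1, source, spare, target)     # park n-1 discs on the spare peg
            + [(source, target)]                    # move the largest disc
            + hanoi(n - 1, spare, target, source))  # restack n-1 discs on top

print(len(hanoi(4, "A", "C", "B")))  # 15 moves for the 4-disc case cited above
```

Passing the 4-disc case tells you nothing about whether a system has internalized this structure; the recursion generalizes to 20 discs for free, which is precisely where the LLMs in the Apple paper fell apart.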