The failure of these predictions was not accidental. It reflected fundamental flaws in the methods used to forecast the future of work. Neither the 2013 study nor its 2023 successor based their projections on empirical investigations of real workplaces, workers, or production processes. Instead, both relied on the subjective judgments of computer scientists and economists, who were asked to guess whether certain tasks could, in principle, be performed by machines. If enough tasks associated with a job were deemed automatable — typically more than 50 percent — the entire occupation was classified as at risk of disappearance. No consideration was given to how jobs are structured in practice, how tasks are bundled together, or how economic and social factors mediate the adoption of new technologies. The result was a deeply mechanistic model of technological change, in which machines would displace workers whenever technically feasible, regardless of cost, institutional barriers, or political resistance. It was a model blind to the complex ways in which work is organised, contested, and transformed — and thus singularly ill-equipped to predict the actual course of economic development.
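To make plain how mechanistic that rule is, here is a minimal sketch, in Python, of the kind of calculation it implies. Everything in it is an illustrative assumption rather than the studies' actual data or code: the occupations, task lists, and expert judgments are invented, and the bare 50 percent cut-off stands in for the thresholding described above.

```python
# A minimal sketch of the task-counting rule described above. The occupations,
# task lists, and expert judgments are invented for illustration; they are not
# data or code from the 2013 study or its 2023 successor.

RISK_THRESHOLD = 0.5  # "at risk" if more than half of a job's tasks are deemed automatable

occupations = {
    "hypothetical clerk": {
        "file paperwork": True,           # expert guess: could in principle be done by a machine
        "answer routine queries": True,
        "resolve disputed cases": False,
    },
    "hypothetical technician": {
        "inspect equipment on site": False,
        "log readings": True,
        "schedule maintenance": False,
    },
}

def classify(tasks: dict) -> str:
    """Count the tasks judged automatable and apply the threshold; nothing about
    cost, institutions, or how the tasks are actually bundled enters the calculation."""
    share_automatable = sum(tasks.values()) / len(tasks)
    return "at risk" if share_automatable > RISK_THRESHOLD else "not at risk"

for occupation, tasks in occupations.items():
    print(f"{occupation}: {classify(tasks)}")
```

The sketch is the critique in miniature: the only input is a list of yes/no guesses, and the only output is a binary verdict on an entire occupation.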
[...]
The fundamental limitations of generative AI have become increasingly apparent, even to many of its early champions. As François Chollet and other AI researchers have pointed out, deep neural networks suffer from inherent brittleness: they struggle to generalise beyond their training data, fail at basic reasoning tasks, and remain unreliable for applications requiring consistency or accuracy. Despite hoovering up enormous quantities of digital information, these models confront hard limits in their capacity to learn or adapt. Part of the problem lies in the very architecture of artificial neural networks, which were designed on the basis of mid-twentieth-century behaviourist psychology. Unlike organic brains — which come equipped with rich internal structures, innate goals, and representational frameworks — artificial networks are largely unstructured, relying on brute statistical association. The human mind can grasp a new concept from a few examples; machine learning models typically require millions. Hybrid approaches that integrate symbolic reasoning with deep learning, exemplified by AlphaFold2’s success in protein folding, offer more promising results — but they must be painstakingly designed for specific tasks. There is no general method for building modular, structured cognition into deep learning systems, and no guarantee that one will be found.
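The kind of brittleness at issue can be illustrated with a deliberately simple toy sketch in Python. It uses a high-degree polynomial as a stand-in for an over-parameterised statistical model; the data, the degree, and the test point are all assumptions made for illustration and carry no claim about any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": noisy samples of a simple linear relationship, confined to [0, 1].
x_train = np.linspace(0.0, 1.0, 30)
y_train = 2.0 * x_train + rng.normal(scale=0.05, size=x_train.size)

# A flexible, purely statistical fit (a degree-9 polynomial standing in for an
# over-parameterised model) tracks the training data very closely...
coeffs = np.polyfit(x_train, y_train, deg=9)
train_error = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))
print(f"worst error on the training range: {train_error:.3f}")

# ...but its predictions can go badly wrong as soon as it must extrapolate
# beyond the range it was fitted on.
x_test = 3.0
print(f"prediction at x={x_test}: {np.polyval(coeffs, x_test):.1f} "
      f"(true value under the underlying rule: {2.0 * x_test})")
```

The point is not the specific numbers but the pattern: a model rewarded only for matching past data has no internal reason to respect the structure of the problem once the data runs out.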
[...]
The lessons of the past decade should temper both our hopes and our fears. The real threat posed by generative AI is not that it will eliminate work on a mass scale, rendering human labour obsolete. It is that, left unchecked, it will continue to transform work in ways that deepen precarity, intensify surveillance, and widen existing inequalities. Technological change is not an external force to which societies must simply adapt; it is a socially and politically mediated process. Legal frameworks, collective bargaining, public investment, and democratic regulation all play decisive roles in shaping how technologies are developed and deployed, and to what ends.
The current trajectory of generative AI reflects the priorities of firms seeking to lower costs, discipline workers, and consolidate profits — not any drive to enhance human flourishing. If we allow this trajectory to go unchallenged, we should not be surprised when the gains from technological innovation accrue to the few, while the burdens fall upon the many. Yet it does not have to be this way. The future remains open, contingent on whether we are willing to confront, contest, and redirect the pathways along which technology advances.