AI can turn some impressive party tricks, but it's unsuited for solving serious problems in the real world. This is true of predictive AI, whose correlations are data-driven conspiracy theories, and of large language models like ChatGPT, whose plausible waffle is always trying to pull free of the facts. The real issue is not only that AI doesn't work as advertised, but the impact it will have before this becomes painfully obvious to everyone. AI is being used as a form of 'shock doctrine', where the sense of urgency generated by an allegedly world-transforming technology is used as an opportunity to transform social systems without democratic debate.
[...]
Real AI isn't sci-fi but the precaritisation of jobs, the continued privatisation of everything and the erasure of actual social relations. AI is Thatcherism in computational form. Like Thatcher herself, real-world AI boosts bureaucratic cruelty towards the most vulnerable. Case after case, from Australia to the Netherlands, has proven that unleashing machine learning in welfare systems amplifies injustice and the punishment of the poor. AI doesn't provide insights, as it's just a giant statistical guessing game. What it does do is amplify thoughtlessness, a lack of care, and a distancing from actual consequences. The logics of ranking and superiority are buried deep in the make-up of artificial intelligence; married to populist politics, it becomes another vector for deciding who is disposable.
I have — since March — expressed great dismay about the credulousness of the media in their acceptance of the "inevitable" ways in which generative AI will change society, despite the lack of any truly meaningful product that might justify an environmentally destructive industry led by a company that burns more than $5 billion a year, with big tech firms spending $200 billion on data centers for products that people don't want.
The reason I'm repeating myself is that it's important to note how obvious the problems with generative AI have been, and for how long.
In workplaces, AI often appears in innocuous ways, gradually integrated into existing work applications. In practice, automation rarely replaces workers; it only very partially automates certain specific tasks and, above all, reconfigures how humans work alongside machines. The outputs of generative AI often require a great deal of rework before they can be used. Writers are now being hired to rehumanise synthetic texts, but they are paid less than if they had written the texts themselves, on the pretext that they add less value. Chatbots increasingly resemble autonomous vehicles, with their remote command centres where humans can take over when necessary, and they render invisible the vast workforces that teach them to speak and correct what they say. The devaluation of the humans behind AI often obscures the extent of the collaboration required for it to function properly.
In [his] book, the sociologist [Antonio Casilli] shows how this invisible labour, tied to platforms and data, reproduces a North-South divide. Click work is a precarious job done by millions of people, found all over the planet. Sometimes they are freelancers in the United States, Filipinos in a cybercafé, or thousands of African workers in click farms. There are nevertheless very significant differences in working conditions. At the very bottom of the ladder are the African workers. Ultimately, this is a new neocolonial dimension, far removed from the polished image of Silicon Valley. The fact that Big Tech renders this work invisible implies that it isn't very important – yet it is essential. One of the workers I met in Kenya told me: 'There is no artificial intelligence, only African intelligence.' That sums things up rather well.
Yes, there is an economic incentive to the election of Trump for those at the commanding heights of Silicon Valley power. But to reduce it purely to materialism is to miss a strong part of the story, namely AI's own operation as an ideological apparatus that encourages power consolidation.
Artificial intelligence is a threat to educational institutions – as deeply flawed as these may be – not because it's some incredible technological achievement that's more powerful than the human mind (hahahahahahahahahahahahahahaha); but because it is the manifestation of a series of reactionary political beliefs. AI is inextricably bound up in ideologies and practices that seek to undermine unions, exploit labor, re-inscribe racial and economic hierarchies, and centralize control – of knowledge and knowing ("intelligence" in all its various military and eugenicist histories) – in the hands of a few giant technology corporations. (Folks, that's fascism.)
In a recent earnings call, Sundar Pichai claimed that 25% of code at Google is now AI-generated ("and then reviewed and accepted by engineers"). In the AI-boosterism parts of the web (so basically X and LinkedIn) this number was celebrated: even Google does AI code generation. So if your whole startup is just ChatGPT in a trenchcoat, you're basically at the industry standard, right?
Let's not be cynical here and point at Google's not exactly stellar recent track record when it comes to great products and software, but let's ask ourselves where that number comes from and what it means.
I went to the UX Brighton conference yesterday.
The quality of the presentations was really good this year, probably the best yet. Usually there are one or two stand-out speakers (like Tom Kerwin last year), but this year, the standard felt very high to me.
But…
The theme of the conference was UX and “AI”, and I’ve never been more disappointed by what wasn’t said at a conference.
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access. [...]
If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance. Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.
But even if the chatbots on offer were built around something other than LLMs, something that could reliably get the right answer, they'd still be a terrible technology for information access.
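A toy back-of-the-envelope model (my own sketch, not part of the quoted argument) makes the 95%-versus-50% point concrete. The fact-checking rates below are invented for illustration; the only premise taken from the quote is that people verify a seemingly reliable system far less often.

```python
# Toy model of the 95%-vs-50% point. The check rates are assumptions
# chosen for illustration: users fact-check a system less often the
# more accurate it appears to be.

def unchecked_errors(accuracy: float, check_rate: float, queries: int = 1000) -> float:
    """Expected wrong answers accepted without verification."""
    wrong = queries * (1 - accuracy)   # answers that are incorrect
    return wrong * (1 - check_rate)    # ...and never fact-checked

# A 50%-accurate system that users distrust (checking 95% of answers):
print(unchecked_errors(0.50, check_rate=0.95))  # -> 25.0 unverified errors

# A 95%-accurate system that users trust (checking only 30% of answers):
print(unchecked_errors(0.95, check_rate=0.30))  # -> 35.0 unverified errors
```

Under these assumed rates, the "better" system slips more wrong answers past its users than the coin-flip one, because trust outpaces checking.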
“we found no evidence of formal reasoning in language models …. Their behavior is better explained by sophisticated pattern matching—so fragile, in fact, that changing names can alter results by ~10%!”