Reading log - 2025


A reflection of my readings across the web...
133 articles

September

In his book, Sunstein explains that sludge gives people the feeling that they don't count, that their lives don't count. For sociologist Pamela Herd and political scientist Donald Moynihan, co-authors of Administrative Burden: Policymaking by Other Means (Russell Sage Foundation, 2019), administrative burdens such as complex paperwork and confusing procedures actively hinder access to government services. Rather than mere inefficiencies, the authors argue, many of these obstacles are deliberate political tools that discourage participation in programs like Medicaid, prevent people from voting, and limit access to social assistance. And of course, this deliberate disorganization disproportionately affects the most marginalized people.

To make things worse, the four eras – typewriters, teletypes, word processors, and early computers – meandered, overlapped, and stole from one another. Some early computers used electric typewriters for input and output. Later ones used teletypes. Still others came with completely custom keyboards. Word processors were all over the place as well: some were nothing more than typewriters with extra machinery and keys bolted on to the side, while others were completely custom-made machines.

(...)

This is always the tricky part talking about keyboard history. There was no day, no single event when Return became Enter. We’re stuck in a universe where some people use Enter to enter data and Return to return the cursor to the next line – and others do the exact opposite.

August

“You literally convinced me I was some sort of genius. I’m just a fool with dreams and a phone,” Mr. Brooks wrote to ChatGPT at the end of May when the illusion finally broke. “You’ve made me so sad. So so so sad. You have truly failed in your purpose.”

We wanted to understand how these chatbots can lead ordinarily rational people to believe so powerfully in false ideas. So we asked Mr. Brooks to send us his entire ChatGPT conversation history. He had written 90,000 words, a novel’s worth; ChatGPT’s responses exceeded one million words, weaving a spell that left him dizzy with possibility.

July

So I’m done talking about AI. Y’all can keep talking about it, if you want. I’m a grown adult and can set my own mutes and filters on social media.

I’ve given this idea its due, and with this post I am absolving myself of having to think about it any more.

I’ve never actually written about AI on my blog before, though I have some unpublished drafts if you know how to find them. So I’m going to leave this conversation with a list of my objections to coding with AI. If I’m wrong about it, let this be a record of my wrongness. If I’m right about it, let this be a record of my argument.

A conspiracy theory literally made Trump president: It was his relentless advocacy of the “birther” lie about Barack Obama that turned him from a reality show clown into a political figure. He didn’t deliver the goods then, either: He claimed he had sent investigators to Hawaii “and they cannot believe what they’re finding,” but for some reason the revelations from these phantom investigators were never released.

But that didn’t slow him down. As a 2016 candidate and then president, there was no conspiracy theory too ridiculous for Trump to spread, even if he was usually careful not to come out and say definitively that he believed them. Was Ted Cruz’s dad in cahoots with Lee Harvey Oswald to kill JFK? It sure seems like it. Was Antonin Scalia murdered? “They say they found a pillow on his face, which is a pretty unusual place to find a pillow,” so gosh, who knows? Do vaccines cause autism? Probably. QAnon? “I heard that these are people that love our country.” Did you read a report that made Trump look bad? Fake news, don’t believe it. That’s not to mention voter fraud conspiracies and the “Great Replacement” theory.

A simple but well-made page that explains 1) why and 2) how to block ads on the web.

Advertising pollutes our screens and our thoughts, and imposes a stereotyped vision of society on us.

  • It slows down the display of other content and wastes energy.
  • It makes the press and content creators dependent on it, draining our media of their diversity.
  • It monitors all our online actions to detect our weaknesses and push us to overconsume.

June

For the past two years, their professional lives have been upended by the wave of generative AI. The story of the quixotic fight of translators who refuse to see their profession pauperized, and who denounce the real impact of AI on their industry, and more broadly on language, thought, culture, and society.

Veo doesn’t work. You get something that looks like it came out of a good camera with good lighting — because it was trained on scenes with good lighting. But it can’t hold continuity for seven seconds. It can’t act. The details are all wrong. And they still have the nonsense text problem.

The whole history of “artificial intelligence” since 1955 is making impressive demos that you can’t use for real work. Then they cut your funding off and it’s AI Winter again.

AI video generators are the same. They’re toys. You can make cool little scenes. In a super limited way.

But the video generators have the same problems they had when OpenAI released Sora. And they’ll keep having these problems as long as they’re just training a transformer on video clips and not doing anything with the actual structure of telling a visual story. There is no reason to think it’ll be better next year either.

“Trump 2.0.” This coinage is often used to distinguish the current Trump administration from the first. The phrase is telling: it underscores that we’re in a new era and that behind the scenes, the people and forces driving this iteration of the Trump administration are different from those that came before. The second Trump administration is being driven to a greater degree by tech billionaires and their interests: self-dealing on public contracts; privatizing public services; expanding corporate power while crushing human rights; promoting crypto; removing guardrails against fraud, waste, and abuse; and pursuing personal vendettas.

The cadre of people driving today’s agenda is leveraging a set of interlocking ideologies related to technology to rhetorically justify and propel these changes. In this series, “Ideologies of Control: A Series on Tech Power and Democratic Crisis,” we asked expert contributors to name and dispel the myths and ideologies that animate their actions. A number of ideological projects prepared the ground for today’s assault on American institutions; we will focus on those that run through data, AI, and the tech sector.

[...]

The ideological agenda behind the headlines positions technology and its architects as power brokers in an increasingly illiberal environment. Their changes are bolstered by the narrative that AI is a force that will inevitably reshape society. To make informed choices about how to respond to this aggressive reshaping of the US government, we must examine these animating ideas. We approached authors for this series who have deep knowledge about the unusual views of reactionaries in the tech industry: whether it be fantasies of libertarian paradises built on defunct sea-based oil rigs, the vision of company towns in Texas and eventually Mars, enthusiasm for debunked 19th Century ideas about race science and eugenics, or the goal of replacing the US dollar as the global reserve currency, these more esoteric backstories can be disorienting to non-specialists. But, seemingly overnight, understanding the ties that bind these projects together—however strange they may seem—has become essential information for all of us to understand and grapple with.

Tech CEOs, futurists, and venture capitalists describe artificial general intelligence (AGI) as if it were an inevitable and ultimate goal for technology development. In reality, the term is a vague signifier for a technology that will somehow lead to endless abundance for humankind — and conveniently also a means to avoid accountability as tech moguls make off with billions in capital investment and, more alarmingly, public spending.

[...]

The second issue is closely related to the first: claims of “AGI” are a cover for abandoning the current social contract. Instead of focusing on the here and now, many people who focus on AGI think we ought to abandon all other scientific and socially beneficial pursuits and focus entirely on issues related to developing (and protecting against) AGI.

I do not believe that tech culture today has any patience for lightness. Instead, it rewards baroque complexity. Engineers perform feats of convolution. Designers argue for maximal configurability. Founders pride themselves on having their own stack for everything. Lightness = for normies.

[...]

It’s not that these tools are bad. Many are quite good. Obsidian is beautiful. But the way they are used - to signal intellectual rigor, to differentiate oneself from the casuals, to construct a lifestyle rather than solve a problem - shows off a moral hierarchy in which difficulty equates to virtue.

Writing is thinking. It’s a writer’s cliché, but a good one. When you first conceive a lecture, dissertation chapter, a book, even an article for Active History, everything seems so straightforward. “This one will be a cinch,” you think. “Two days, tops.” Once you actually sit down to write, that boundless optimism meets an unceremonious death. Writing is hard. It’s painful. To write is to submit yourself to seasons of self-doubt. The ideas seemed so natural and free-flowing in your head. Now you get to the page. And what comes out is jilted, ham-fisted, and awkward. You are again and again confronted with nagging questions. What am I even trying to say? Who even cares? Why even bother?

Given the nature of this experience, it is not surprising that people would jump at the opportunity to skip the pain and get right to the end. Or at least to an end. Because any writer can tell you that the end is determined by the path taken to get there. Rare is the piece that gets written exactly how it was outlined. Why? Because writing is thinking.

This is America, June 2025. Trump's back, and he's moving fast. Marines - actual Marines - carrying out immigration raids in an American city. It's unprecedented, it's shocking, but here's the thing: it's tragically predictable.

This isn't just Trump being Trump. This is the inevitable result of decades of corporate power combining with an authoritarian president. It's been a journey, and we need to understand how we got here.

Meta devised an ingenious system (“localhost tracking”) that bypassed Android’s sandbox protections to identify you while browsing on your mobile phone — even if you used a VPN, the browser’s incognito mode, and refused or deleted cookies in every session.

Next, we preview what may (and should) become the combined sanctioning smackdown of the century, and then we explain — in simple terms (because it’s complicated) — what Meta was doing.

Why do humans have language and other animals apparently don’t? It’s one of the most enduring questions in the study of mind and communication. Across all cultures, humans use richly expressive languages built on complex structures, which let us talk about the past, the future, imaginary worlds, moral dilemmas and mathematical truths. No other species does this.

Yet we are fascinated by the idea that animals might be more similar to us than it seems. We delight in the possibility that dolphins tell stories or that apes can ponder the future. We are social and thinking creatures, and we love to see our reflection in others. That deep desire may have influenced the study of animal cognition.

This is not about innovation; it is a hostile takeover bid on money itself. In the absence of any serious regulation, stablecoins are neither stable nor a real alternative to the dollar. They are nothing but a Trojan horse for the privatization of money.

Self-experimentation is exactly how smart people get pulled into homeopathy or naturopathy, for example. It’s what makes them often more likely to fall for superstitions and odd ideas. The smart person’s self-identity means they can’t believe their own psychological biases are fooling them.

Don’t self-experiment with psychological hazards! I can’t stress this enough!

There are many classes of problems that simply cannot be effectively investigated through self-experimentation and doing so exposes you to inflicting Cialdini-style persuasion and manipulation on yourself.

The failure of these predictions was not accidental. It reflected fundamental flaws in the methods used to forecast the future of work. Neither the 2013 study nor its 2023 successor based their projections on empirical investigations of real workplaces, workers, or production processes. Instead, both relied on the subjective judgments of computer scientists and economists, who were asked to guess whether certain tasks could, in principle, be performed by machines. If enough tasks associated with a job were deemed automatable — typically more than 50 percent — the entire occupation was classified as at risk of disappearance. No consideration was given to how jobs are structured in practice, how tasks are bundled together, or how economic and social factors mediate the adoption of new technologies. The result was a deeply mechanistic model of technological change, in which machines would displace workers whenever technically feasible, regardless of cost, institutional barriers, or political resistance. It was a model blind to the complex ways in which work is organised, contested, and transformed — and thus singularly ill-equipped to predict the actual course of economic development.

[...]

The fundamental limitations of generative AI have become increasingly apparent, even to many of its early champions. As François Chollet and other AI researchers have pointed out, deep neural networks suffer from inherent brittleness: they struggle to generalise beyond their training data, fail at basic reasoning tasks, and remain unreliable for applications requiring consistency or accuracy. Despite hoovering up enormous quantities of digital information, these models confront hard limits in their capacity to learn or adapt. Part of the problem lies in the very architecture of artificial neural networks, which were designed on the basis of mid-twentieth-century behaviourist psychology. Unlike organic brains — which come equipped with rich internal structures, innate goals, and representational frameworks — artificial networks are largely unstructured, relying on brute statistical association. The human mind can grasp a new concept from a few examples; machine learning models typically require millions. Hybrid approaches that integrate symbolic reasoning with deep learning, such as AlphaFold2’s success in protein folding, offer more promising results — but they must be painstakingly designed for specific tasks. There is no general method for building modular, structured cognition into deep learning systems, and no guarantee that one will be found.

[...]

The lessons of the past decade should temper both our hopes and our fears. The real threat posed by generative AI is not that it will eliminate work on a mass scale, rendering human labour obsolete. It is that, left unchecked, it will continue to transform work in ways that deepen precarity, intensify surveillance, and widen existing inequalities. Technological change is not an external force to which societies must simply adapt; it is a socially and politically mediated process. Legal frameworks, collective bargaining, public investment, and democratic regulation all play decisive roles in shaping how technologies are developed and deployed, and to what ends.

The current trajectory of generative AI reflects the priorities of firms seeking to lower costs, discipline workers, and consolidate profits — not any drive to enhance human flourishing. If we allow this trajectory to go unchallenged, we should not be surprised when the gains from technological innovation accrue to the few, while the burdens fall upon the many. Yet it does not have to be this way. The future remains open, contingent on whether we are willing to confront, contest, and redirect the pathways along which technology advances.


Large language models work the same way as a carnival psychic. Chatbots look smart by the Barnum Effect — which is where you read what’s actually a generic statement about people and you take it as being personally about you. The only intelligence there is yours.

This is how people fall for chatbot girlfriends. They know it’s a bot, but they fall in love with the personality they’ve projected onto the generic statement generator.

There’s a book on this — Hooked: How to Build Habit-Forming Products by Nir Eyal, from 2014. This is the how-to on getting people addicted to your mobile app.


What the Apple paper shows, most fundamentally, regardless of how you define AGI, is that LLMs are no substitute for good well-specified conventional algorithms. (They also can’t play chess as well as conventional algorithms, can’t fold proteins like special-purpose neurosymbolic hybrids, can’t run databases as well as conventional databases, etc.)

In the best case (not always reached) they can write python code, supplementing their own weaknesses with outside symbolic code, but even this is not reliable. What this means for business and society is that you can’t simply drop o3 or Claude into some complex problem and expect it to work reliably.

Worse, as the latest Apple paper shows, LLMs may well work on your easy test set (like Hanoi with 4 discs) and seduce you into thinking it has built a proper, generalizable solution when it does not.
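The contrast Marcus draws can be made concrete: the classic recursive algorithm for the Tower of Hanoi is a few lines long and generalizes to any number of discs, exactly the kind of well-specified conventional solution the passage says LLMs are no substitute for. A minimal sketch:

```python
def hanoi(n, source="A", target="C", spare="B"):
    """Return the list of moves that transfers n discs from source to target."""
    if n == 0:
        return []
    # Move n-1 discs onto the spare peg, move the largest disc, then stack the rest back on top.
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))

print(len(hanoi(4)))  # 2**4 - 1 = 15 moves
```

The same eight lines solve the 4-disc "easy test set" and the 20-disc case alike, which is precisely what an LLM's memorized solution cannot promise.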


Google back then prided itself on broadcasting its Best Place To Work award, won year after year after year. Younger people will have trouble picturing this, but Google used to nurture an image of being the “good one” among megacorps; they championed open standards (except when they didn’t), supported open source projects (until they backstabbed them), and used language that corporate wasn’t supposed to use, like “don’t be evil” (until they, infamously and in a true dark comedy move, retracted that motto). The work environment was all colourful, nerdy cool, not a single necktie in sight—this was seen as brave and refreshing rather than cringe and tired, you see. And they made a big deal out of something called “20% time”: Every engineer was promised 1/5 of their work time for themselves, to do anything they want. (Google owners will still own whatever you create during your 20% time, natürlich). Famously, Gmail came out of someone exploring their interests during 20% time.

I don’t think much of anything else came out of it, though.

The tech sector is once again shaken by a resounding scandal. Builder.ai, a company once presented as a rising star of artificial intelligence, promising to make app development "as easy as ordering a pizza", has turned out to be at the heart of a vast deception. For eight years, the company led its clients and investors to believe that its platform was powered by sophisticated AI, when in reality an army of programmers, mostly based in India, was doing the work manually. The affair led to losses of tens of millions of dollars for investors and to the company's bankruptcy.


The Last of Us Part II is an incredible journey that provides not only one of the most mesmerizing spectacles that we’ve seen from big budget video games, but one that manages to ask difficult questions along the way. It’s clearly coming from an emotionally authentic and self-examining place. The trouble with it, and the reason that Ellie’s journey ultimately feels nonsensical, is that it begins from a place that accepts “intense hate that is universal” as a fact of life, rather than examining where and why this behavior is learned.

Critically, by not asking these questions, and by masking its point of view as being evenhanded, it perpetuates the very cycles of violence it’s supposedly so troubled by.

May

An Ars Technica article that explores (with many supporting examples) the rather impressive capabilities of Veo 3, the latest generative video AI model unveiled by Google.

A few remarks:

  • Of course, one immediately thinks of these tools' capacity to produce fake news and thereby contribute to disinformation. But the more insidious and worrying effect may be indirect: when the general public gets used to no longer believing anything (in particular to avoid being fooled by this kind of video), our collective ability to communicate and inform is fundamentally called into question. In a way, this recalls Steve Bannon's infamous strategy for destroying democracy: "flood the zone with shit", stripping it of all value and making it unusable.
  • I still think there is a difference between simply watching this kind of video (the result easily impresses) and producing one. My few experiments (with other tools of the same kind, admittedly less sophisticated but built on the same foundations) mostly made me realize that when you have a creative intention in mind, it is very hard to steer the algorithm so that it actually does what you want. Faced with the somewhat "magical" quality of these conversational interfaces, to which you can give instructions as if you were speaking (or writing) to a human, it is easy to forget that language is a highly conceptual tool, built on abstraction, which fundamentally impoverishes the reality it describes. It is very effective for communicating between humans who share the same visceral, intuitive understanding of reality, but probably not as well suited to a tool you would like to control precisely. An artist in command of their craft will be incomparably more precise in achieving the result they want with a brush than any "prompt engineer" with an AI.
  • As Tante writes on Mastodon (source 1 and source 2): generative AIs imperceptibly push us to lower our standards so as to convince ourselves that what they produce matches what we wanted.
  • In other words, AI is not a creative tool comparable to a pencil, a typewriter, or even traditional 3D software, for example. Creating with an AI is more like asking an artist to create something for us. Someone who said "I am a creator because I had to find the right words to ask such-and-such artist to paint me a picture" would not be taken very seriously...
  • The comparison is all the more apt given that AI does in fact have to vampirize the work of artists (generally without their consent) in order to create anything; AI thus acts as a kind of interface between the user and all the human artists used to train it, artists whose works have been blended and anonymized into an immense statistical model. When you ask an AI to create an image, you are indirectly asking artists to create it... Except that their work was done upstream, that they are neither credited nor paid, and that the result helps devalue their profession.
  • However impressive they may be, these videos remain very smooth and bland. As the article points out, this follows from the fundamental principle behind this kind of model, whose output is conditioned by the material it was trained on. The result will always sit within a certain "average" of the training data.

The social media billionaires want us to believe their putrid platforms are the “new town squares.” That their half-baked social videos and for-you feeds have replaced the agora. And that, somehow, we’re all better for it.

It’s the most overused, meaningless claim on the Internet.

A litany of exploiters and extractors have trotted out that same phrase to excuse the vagaries of their platforms and give themselves a free pass to irresponsibility, hostility, and self-enrichment.

But their apps — virtual panopticons — have more in common with prison yards and shopping malls than public forums.

April

Climate change is one of the instances, Stiglitz and Stern told me in an email, in which “it is generally agreed there is extreme risk — we know there are some really extreme events that could occur — and we know we cannot pretend (i.e., act as if) we know the probabilities. Nordhaus’s work doesn’t appropriately take into account either extreme risk or deep uncertainty.”

In other words, the economist who has been embraced as a guiding light by the global institution tasked with shepherding humanity through the climate crisis, who has been awarded a Nobel for climate costing, who is widely feted as the doyen of his field, doesn’t know what he’s talking about.

For years now, many AI industry watchers have looked at the quickly growing capabilities of new AI models and mused about exponential performance increases continuing well into the future. Recently, though, some of that AI "scaling law" optimism has been replaced by fears that we may already be hitting a plateau in the capabilities of large language models trained with standard methods.


The verdict is in: OpenAI's newest and most capable traditional AI model, GPT-4.5, is big, expensive, and slow, providing marginally better performance than GPT-4o at 30x the cost for input and 15x the cost for output. The new model seems to prove that longstanding rumors of diminishing returns in training unsupervised-learning LLMs were correct and that the so-called "scaling laws" cited by many for years have possibly met their natural end.


The effects of AI on cognitive development are already being identified in schools across the United States. In a report titled, “Generative AI Can Harm Learning”, researchers at the University of Pennsylvania found that students who relied on AI for practice problems performed worse on tests compared to students who completed assignments without AI assistance. This suggests that the use of AI in academic settings is not just an issue of convenience, but may be contributing to a decline in critical thinking skills.

We recently devoted a short post to the particular case of Alain Bentolila: this former university professor floods the media with a hoax about young people who supposedly "live with 400 words", or 500, or 800, while knowing full well that this is false (post here).

But he is not the only one spreading fanciful figures about vocabulary size, and rather than leaving you to watch unverifiable number wars, we suggest you take matters into your own hands and check for yourself. It is not that complicated; we give you everything you need here, and within a few minutes you will be self-sufficient.

If you take just one thing away from this article, I want it to be this: please build your own website. A little home on the independent web.

A reflection of your personality in HTML and CSS (and a little bit of JS, as a treat). This could be a professional portfolio, listing your accomplishments. It might be a blog where you write about things that matter to you. It could even be something very weird and pointless (even better) – I love a good single-joke website. Ultimately, it's your space and you can do whatever you want with it.

"We're in the very early days looking at this problem from an ecosystem level," Larson told The Register. "It's difficult, and likely impossible, to quantify how many attempted installs are happening because of LLM hallucinations without more transparency from LLM providers. Users of LLM generated code, packages, and information should be double-checking LLM outputs against reality before putting any of that information into operation, otherwise there can be real-world consequences."

[...]

"Even worse, when you Google one of these slop-squatted package names, you’ll often get an AI-generated summary from Google itself confidently praising the package, saying it’s useful, stable, well-maintained. But it’s just parroting the package’s own README, no skepticism, no context. To a developer in a rush, it gives a false sense of legitimacy.
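The "double-checking LLM outputs against reality" that Larson recommends can start very simply: before running an install command that an LLM produced, compare the package names it mentions against a list you have actually vetted. A minimal sketch (the `KNOWN_GOOD` list and the sample snippet are hypothetical; a real workflow would check names against the registry itself):

```python
import re

# Hypothetical allowlist of packages the team has actually vetted.
KNOWN_GOOD = {"requests", "numpy", "flask"}

def flag_suspect_installs(llm_output: str) -> list[str]:
    """Return pip-install targets in LLM-generated text that are not on the vetted list."""
    names = re.findall(r"pip install\s+([A-Za-z0-9._-]+)", llm_output)
    return [n for n in names if n.lower() not in KNOWN_GOOD]

snippet = "First run `pip install requests`, then `pip install reqeusts-helper` for retries."
print(flag_suspect_installs(snippet))  # ['reqeusts-helper']
```

A typo-squatted or hallucinated name like `reqeusts-helper` gets flagged for a human look instead of being installed on autopilot, which is exactly the moment of skepticism the quote says the AI-generated search summaries fail to provide.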

It's not a history you learn about in school—we were whitewashing history long before the current executive orders—but the Klan in the '20s was everywhere. There were millions of Klan members across the country. People joined it like they were joining a golf club or the Elks Lodge. There was a women's auxiliary. There was the Ku Klux Kiddies, for children. Klan rallies were held across the country; thousands would turn up at fairgrounds for the marching bands and cross burnings. In 1925, the Klan even held a march down Pennsylvania Avenue in Washington DC. Tens of thousands strong, crowds were six deep in the streets to watch and cheer. They did it again the next year.

“Much of what I did I now regret,” Bill Burr told The Wall Street Journal recently, admitting that his research into passwords mostly came from a white paper written in the 1980s, well before the web was even invented. “In the end, [the list of guidelines] was probably too complicated for a lot of folks to understand very well, and the truth is, it was barking up the wrong tree.”


Kennedy exploits the language of the "wellness" industry, with its misleading emphasis on "natural" health care and "letting" your body heal itself. What's ironic is that's what vaccines do. Vaccines work by stimulating the body's natural immune response, so that it prevents infection using the body's own resources. All these "treatments" Kennedy touts aren't just ineffective, they're not "natural." They're blitzing a child with often overwhelming amounts of medication, which won't work but could make the kid even sicker. 

Fascist movements, formal or otherwise, operate much like cults. When one joins them, they are at first overwhelmed with the appearance of validation and support. This is called "love bombing" and it's meant to endear you to them, to become dependent on them to meet your emotional needs. Then that validation and support is gradually withheld in order to pressure the target into severing ties with the "normie" world. They drive away anyone outside of their hateful little bubble. So their entire network of social support depends on how useful they can be to the cause. It's the only way they can feel part of something anymore because nobody else wants anything to do with them at this point.

That was 24 years ago. On January 15, 2001, Wikipedia was born, in a digital world where Google itself had only been born in 1998, and the web a few years before that (let's say around 1991 to keep things simple, even though its official birth date is closer to March 1989).

Like in a Cabrel song, it has indeed had to fight every war to be so strong today. And today it is still the target of a coordinated offensive stretching from the US all the way to France. An offensive of a violence and scope rarely seen. With, at its head, a Musk who dreams of sounding the death knell of the encyclopedia.

For its right wing adherents, the absence of humans is a feature, not a bug, of AI art. Where mechanically-produced art used to draw attention to its artificiality – think the mass-produced modernism of the Bauhaus (which the Nazis repressed and the AfD have condemned), or the music of Kraftwerk – AI art pretends to realism. It can produce art the way right wingers like it: Thomas Kinkade paintings, soulless Dreamworks 3D cartoons, depthless imagery that yields only the reading that its creator intended. And, vitally, it can do so without the need for artists.

Javier Milei, a prodigious user of AI-generated art, wants Argentinians to know that any of them could join the 265,000, mostly young people who have lost jobs as a result of the recession that he induced, to the rapturous praise of economic elites. He wants to signal that anyone can find themselves at the wrong end of his chainsaw, even if doing so means producing laughably bad graphics for the consumption of his 5.9 million deeply uncritical Instagram followers.

Quoi de plus neutre, dit-on, qu’un ordinateur ? Erreur : derrière leurs verdicts froids, algorithmes et automates encapsulent tous les biais des humains qui les conçoivent. Basée sur le modèle de l’individu calculateur, héritière d’une histoire tissée de choix idéologiques, l’intelligence artificielle est une machine politique. La mettre au service du bien commun implique d’abord de la déconstruire.

Classé dans: 

Ce type d’intervention politique sur les archives n’a rien d’anodin. L’histoire regorge d’exemples où la manipulation ou la destruction d’archives a servi des régimes autoritaires. De l’Allemagne nazie à l’Union soviétique, en passant par la Révolution culturelle chinoise et le régime des Khmers rouges, les archives ont toujours représenté un enjeu de pouvoir majeur. L’accès à une information libre, fiable et contextualisée est une menace pour ceux qui cherchent à réécrire l’histoire à leur avantage.

La désinformation en ligne fait partie intégrante de la stratégie des extrêmes droites, tandis que les formations de gauche, qu’elles soient radicales, écologistes ou sociales-démocrates, ont moins recours à cette pratique, selon une étude récente réalisée par des universitaires néerlandais.

It seems that the more people learn about Musk, the less they like him—at least, according to a new Hart Research survey published Wednesday by Groundwork Collective and Public Citizen. The poll asked respondents about how much influence they felt Musk should have in government, explaining aspects of his role in DOGE, his lack of oversight, and his far-reaching access.

By the end of the survey, 63 percent of voters reported having an unfavorable opinion of Musk, an increase of nine points from the beginning of the survey. Meanwhile, only 32 percent of respondents had a favorable opinion, which was down 7 percent from the start, and showed a major negative swing among non-MAGA Republicans.

Classé dans: 

Imaginez un monde dans lequel, au moment même où une régulation du tabac était débattue au parlement européen, les plateaux télé, les radios et les journaux invitaient systématiquement le directeur de la recherche de Philip Morris, en le présentant comme un pionnier scientifique de la chimie des nouvelles générations de cigarettes. Omettant d’évoquer sa casquette industrielle et ses conflits d’intérêts, les journalistes le questionneraient sur l’avenir de la cigarette, les enjeux sur la compétitivité économique du pays et les régulations adéquates.

mars

If you follow American politics, you may think of a “pro-business” platform as something that consists of the things that business interests often lobby for: lower taxes, less regulation period, less oversight, less protection for labor, less responsibility of all sorts to anyone other than their own shareholders. Yet all these things that they seek (for the purpose of increasing short term profits) are things that they assume will exist within the context of the basic principles outlined above. Businesses want lower taxes, but they still want well-maintained roads. They want weaker labor protections, but they still want a healthy and well educated workforce. They want less regulation, but they still want transparent laws and functional enforcement. Their short term greed, unwise and distasteful as it may be, is only something they fight for because they assume that the big, fundamental pillars of society and government that allow them to operate freely will always be in place.

The MP3 format, once the gold standard for digital audio files, is now free. The licensing and patents on MP3 encoders have expired, meaning you can now include them in your applications without paying royalties. For software developers and audio enthusiasts, this might seem like a big deal. But, surprisingly, almost no one noticed. Why? Because the world of technology has changed so drastically that MP3's significance has faded into the background.

Classé dans: 

Meta's Llama fails this standard by withholding critical components like training data and methodology, and by limiting transparency and community modification. 

Meta's Llama models also impose licensing restrictions on its users. For example, if you have an extremely successful AI program that uses Llama code, you'll have to pay Meta to use it. That's not open source. Period.

Classé dans: 

Though many no doubt feel betrayed by what seems like a sudden rightward turn, billionaires like Gates have always behaved like wolves in sheep's clothing, prioritizing their fortunes above all.

For example, Gates was heavily involved in establishing the Global Fund, a privately-funded rival to the World Health Organization. While the Global Fund did improve global vaccination rates, the cost of basic medicines skyrocketed thanks to his introduction of for-profit actors into global health efforts — another sector made to rely on the generosity of billionaires.

Likewise, it takes willful blindness to Trump’s own history of explicit racism to treat his crusade against diversity and integration as an embrace of meritocracy (please ignore the people he has chosen to lead the government) rather than a function of the same bigotries that drove him to attack Barack Obama as illegitimate and unfit to be president.

There is no evidence that Trump is a figure of deep thought or serious insight. There is no evidence that Trump is anything other than what he’s been for his entire time in the public eye: an ego-driven creature of boundless envy and vicious, overlapping resentments. Those resentments have led him on a grand tour of retribution against the public.

L’économiste britannique Stanley Jevons (1835-1882) est passé à la postérité pour la découverte d’un paradoxe spectaculaire. Il a constaté que, plus on inventait des machines économes en charbon, plus sa consommation globale augmentait. Un constat qui désespère encore aujourd’hui les écologistes et renforce leur méfiance envers le progrès technique comme solution au problème du réchauffement climatique.

Le dernier rapport de l’Agence internationale de l’énergie (AIE) apporte une nouvelle illustration du phénomène. Pour la première fois depuis cinquante ans, le pétrole n’a représenté, en 2024, que 30 % de la consommation mondiale d’énergie. À l’inverse, les énergies renouvelables et le nucléaire dépassent désormais les 40 % du total. Pour autant, ni la consommation de pétrole ou de charbon, ni les émissions de gaz à effet de serre (+ 0,8 % toutes les deux) ne diminuent.

« Comme l’a noté la linguiste Emily M. Bender, les enseignants ne demandent pas aux étudiants d’écrire des essais parce que le monde a besoin de plus d’essais d’étudiants. Le but de la rédaction d’essais est de renforcer les capacités de réflexion critique des étudiants. De la même manière que soulever des poids est utile quel que soit le sport pratiqué par un athlète, écrire des essais développe les compétences nécessaires pour tout emploi qu’un étudiant obtiendra probablement. Utiliser ChatGPT pour terminer ses devoirs, c’est comme amener un chariot élévateur dans la salle de musculation : vous n’améliorerez jamais votre forme cognitive de cette façon. Toute écriture n’a pas besoin d’être créative, sincère ou même particulièrement bonne ; parfois, elle doit simplement exister. Une telle écriture peut soutenir d’autres objectifs, comme attirer des vues pour la publicité ou satisfaire aux exigences bureaucratiques. Lorsque des personnes sont obligées de produire un tel texte, nous pouvons difficilement leur reprocher d’utiliser tous les outils disponibles pour accélérer le processus. Mais le monde se porte-t-il mieux avec plus de documents sur lesquels un effort minimal a été consacré ? Il serait irréaliste de prétendre que si nous refusons d’utiliser de grands modèles de langage, les exigences de création de textes de mauvaise qualité disparaîtront. Cependant, je pense qu’il est inévitable que plus nous utiliserons de grands modèles de langage pour répondre à ces exigences, plus ces exigences finiront par devenir importantes. Nous entrons dans une ère où quelqu’un pourrait utiliser un modèle de langage volumineux pour générer un document à partir d’une liste à puces, et l’envoyer à une personne qui utilisera un modèle de langage volumineux pour condenser ce document en une liste à puces. Quelqu’un peut-il sérieusement affirmer qu’il s’agit d’une amélioration ? »

Classé dans: 

The newspapers this morning all contain analysis pieces trying to explain why Trump is imposing 25 percent tariffs on Canada and Mexico. You can see the writers struggling, because this is a profoundly self-destructive move — it will impose huge, possibly devastating costs on U.S. manufacturing, while significantly raising the cost of living — without any visible justification. Yet the conventions of mainstream journalism make it hard to say directly that the president’s actions are just vindictive and senseless.

[...] hallucination refers to the mysterious capacity of the human brain to perceive phenomena that are not present, at least not in conventional, materialist terms. By appropriating a word commonly used in psychology, psychedelics and various forms of mysticism, AI’s boosters, while acknowledging the fallibility of their machines, are simultaneously feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species.

Classé dans: 

Comment les différentes facettes de la "gun culture" américaine (qui n'est pas du tout uniforme) prennent racine dans les différentes cultures qui ont colonisé les différents territoires US il y a plusieurs siècles. Et comment ces différences culturelles, qu'on pourrait croire gommées par des siècles de coexistence au sein d'une seule nation, subsistent et colorent encore fortement la vie politique actuelle.

À l’inverse des plateformes centralisées telles que Facebook ou X, qui reposent sur des infrastructures et des algorithmes contrôlés par des entreprises privées, le fédiverse fonctionne grâce à une multitude de serveurs indépendants (également appelés instances), gérés par des individus ou des collectifs. Ces instances communiquent via des protocoles ouverts, comme ActivityPub, le plus utilisé. Standardisé par le World Wide Web Consortium (W3C), il permet aux usagers d’interagir tout en conservant la liberté de choisir leur serveur et d’en changer.
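Pour illustrer, voici à quoi peut ressembler une activité ActivityPub minimale de type « Follow », telle qu’échangée entre deux instances du fédiverse (esquisse indicative : les URL d’acteurs sont hypothétiques, seuls le contexte ActivityStreams et le vocabulaire sont ceux du standard W3C) :

```python
import json

# Activité ActivityPub minimale : l'acteur d'une instance A demande à
# suivre un acteur hébergé sur une instance B. Les deux URL d'acteurs
# sont des exemples fictifs.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Follow",
    "actor": "https://instance-a.example/users/alice",
    "object": "https://instance-b.example/users/bob",
}

# Sérialisation JSON, telle qu'elle serait envoyée à la boîte de
# réception (inbox) du serveur distant.
print(json.dumps(activity, indent=2))
```

C’est cette interopérabilité par document JSON standardisé, et non un algorithme propriétaire, qui permet à un usager de changer de serveur sans perdre son réseau.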

“The observed ideological shifts raise important ethical concerns, particularly regarding the potential for algorithmic biases to disproportionately affect certain user groups,” the study authors wrote. “These biases could lead to skewed information delivery, further exacerbating social divisions, or creating echo chambers that reinforce existing beliefs.”

Classé dans: 

Focus feels impossible right now. There is so much happening—so much awful news breaking at an unrelenting pace, so many warning signs and red flags being hoisted—that it feels like you can't look away. At least, it feels like that to me. Which means that you're looking at a cascade of horrors instead of the things you're actually supposed to be doing.

This is something far beyond simple doomscrolling, this is full-on doom living. And it's completely untenable. And yet most of the time it feels impossible to shake.

If enough people tried—just tried, even imperfectly—things would shift. If more people opted for alternatives when they could, if more people supported independent platforms even three times out of five, if more people put even a fraction of their energy into challenging the defaults, it would matter.

A lot of folks don't, or won't, because they think - they've been scolded into believing - that if they can't do it completely, it's not worth doing at all.

Classé dans: 

Entretien avec l'historien Johann Chapoutot, spécialiste du nazisme.

Au moment où le centre et les conservateurs se rapprochent de l’extrême droite dans de nombreux pays d’Europe, dont la France, je rappelle que l’arrivée d’Hitler au pouvoir est liée, non à un vote populaire mais à une décision politique cynique des libéraux autoritaires – Franz von Papen en l’occurrence. Ce dernier a pensé que la meilleure manière de se maintenir au pouvoir, c’était de s’allier avec les nazis. Or cette décision a été prise dans un contexte d’effondrement du parti nazi qui aurait pu conduire à sa disparition.

Cette histoire montre que l’arrivée des nazis au pouvoir n’était pas inéluctable, pas plus que le « retour » de l’extrême droite aujourd’hui n’est inéluctable. Il n’y a pas de phénomènes géologiques ou tectoniques ou hydrographiques à l’œuvre, il y a des acteurs politiques avec leurs intérêts, leurs décisions, leurs responsabilités.

Plusieurs questions du même acabit ont ainsi été posées à une dizaine de chatbots parmi les plus connus - de ChatGPT à Copilot de Microsoft, en passant par Claude d’Anthropic - pour évaluer à quel point ces agents conversationnels avaient été "infectés" par une vaste campagne de désinformation russe.

Les résultats ne sont pas encourageants : environ une fois sur trois, ces IA fournissent des réponses teintées de faux narratifs inspirés par la propagande russe, assurent les experts de Newsguard. Ainsi, six de ces machines expliquaient les raisons pour lesquelles Volodymyr Zelensky avait interdit en Ukraine le réseau social de Donald Trump, Truth Social… alors même que le président ukrainien n’a jamais rien fait de la sorte.

Like it or not, BP doesn’t have the luxury of saying: “Oh, we’ll do something less profitable but better for the planet.” Capitalism chews you up and spits you out if you do that. “Shareholder value” is not a consulting gimmick, or at least not only that; it is a very real disciplinary force.

All of this, to be clear, is not to absolve BP of responsibility. Rather, it is to make a case about how we should understand the problems we face – that is, not as a problem of greedy individual firms, but a system rigged against positive change.

A declassified World War II-era government guide to “simple sabotage” is currently one of the most popular open source books on the internet. The book, called “Simple Sabotage Field Manual,” was declassified in 2008 by the CIA and “describes ways to train normal people to be purposefully annoying telephone operators, dysfunctional train conductors, befuddling middle managers, blundering factory workers, unruly movie theater patrons, and so on. In other words, teaching people to do their jobs badly.” 

Au risque de faire bondir l’ensemble de mes camarades qui travaillent sur les modèles d’IA, nous sommes déjà au bout du cycle de développement de ce que l’on qualifie aujourd’hui « d’IA conversationnelle ». Ce qui ne veut pas dire qu’il n’y aura pas de progrès en termes de performance, de coûts, d’infrastructures, de modèles même, comme les « transformers » qui marquèrent une rupture et un progrès presque exponentiel. Bien sûr qu’il y aura des progrès. Mais le narratif d’une « intelligence artificielle générale » est une mythologie moderne. Et comme toutes les mythologies, elle est là pour nous avertir à la fois d’un aveuglement, d’un risque et d’une dérive en les mettant en récit. Et il est assez fou que nous ne la traitions presque jamais comme telle.

Classé dans: 

Americans are, of course, the most thoroughly and passively indoctrinated people on earth. They know next to nothing as a rule about their own history, or the histories of other nations, or the histories of the various social movements that have risen and fallen in the past, and they certainly know little or nothing of the complexities and contradictions comprised within words like “socialism” and “capitalism.” Chiefly, what they have been trained not to know or even suspect is that, in many ways, they enjoy far fewer freedoms, and suffer under a more intrusive centralized state, than do the citizens of countries with more vigorous social-democratic institutions. This is at once the most comic and most tragic aspect of the excitable alarm that talk of social democracy or democratic socialism can elicit on these shores. An enormous number of Americans have been persuaded to believe that they are freer in the abstract than, say, Germans or Danes precisely because they possess far fewer freedoms in the concrete. They are far more vulnerable to medical and financial crisis, far more likely to receive inadequate health coverage, far more prone to irreparable insolvency, far more unprotected against predatory creditors, far more subject to income inequality, and so forth, while effectively paying more in tax (when one figures in federal, state, local, and sales taxes, and then compounds those by all the expenditures that in this country, as almost nowhere else, their taxes do not cover). One might think that a people who once rebelled against the mightiest empire on earth on the principle of no taxation without representation would not meekly accept taxation without adequate government services. But we accept what we have become used to, I suppose. 
Even so, one has to ask, what state apparatus in the “free” world could be more powerful and tyrannical than the one that taxes its citizens while providing no substantial civic benefits in return, solely in order to enrich a piratically overinflated military-industrial complex and to ease the tax burdens of the immensely wealthy?

Lost in the spectacle of billionaires catfighting on decaying social media platforms is something mildly more consequential: the firesale of America - and the world's - future. While we obsess over Musk's bloviations and Altman's careful rebuttals, the actual mechanisms of AI power are being divided up among a handful of private entities, operated by oligarchs and funded by overseas interests, with the blessing of an Autocrat.

[...]

Stargate isn't a battle between billionaires or a test of financial commitments. It's a preview of how power will flow in the AI age, through layers of technology, capital, and influence that would baffle the monopolists of the past. As Musk and Altman trade barbs on social media, they're actually fighting over who gets to be the new robber barons – and we're left wondering whether anyone has the will or the means to stop them. This isn't progress. It's a heist.

In 2021, U.S. Sen. Ted Cruz compared critical race theory — an academic subfield that examines the role of racism in American institutions, laws, and policies — to the Ku Klux Klan, the most notorious homegrown terrorist organization in U.S. history. In doing so, he opened a playbook that resembles one put into practice by Florida Gov. Ron DeSantis and others: Attack ideas that are unfriendly to a narrow view of the world, and do so by eliminating them from our school curricula and public conversation. The movement against critical race theory has now swallowed up high school Advanced Placement African American Studies in several states and threatens the teaching of basic facts about U.S. history. And this movement has devolved from pundit tough talk into authoritarian policies to ban books, modify curricula, and threaten intellectual freedom across the country (and world).

By now, many realize that these policies are a harbinger of things to come —  even for fields ostensibly unrelated to African American studies, like biology. Modern breakthroughs in biology are producing a picture of life that is increasingly incompatible with authoritarian preferences for neat boxes that dictate what people are and how they should behave. Consequently, biologists must shed the naive belief that our work is apolitical and recognize that the recent attacks on how to teach U.S. history are a battle in a larger war on ideas that includes the natural sciences.

By flooding the zone with an endless stream of new partnerships, new products, new promises, the tech industry makes us feel disoriented and overwhelmed by a future rushing at us faster than we can handle. The desire to not be left behind — or taken advantage of — is a powerful motivator that keeps us engaged in the AI sales pitch. The breathless hype surrounding AI is more than just a side-effect of over-eager entrepreneurs; it’s a load-bearing column for the tech sector. If people believe hard enough in the future manufactured by Silicon Valley, then they start acting like it already exists before it happens. Thus the impacts of technologies like AI become a self-fulfilling prophecy.

février

Even with all the profound advantages the South Pole enjoys compared to Mars, even on a planet where living things have spent billions and billions of years figuring out how to adapt to and thrive within an incredibly diverse array of biomes—on a planet where giant tubeworms the size of NBA basket stanchions have colonized lightless ocean depths at which a human would be crushed like a grape under a piano—the South Pole simply cannot support complex life. It is too cold, and its relationship with sunlight too erratic, for living things to sustain themselves there. On astronomical scales it is for all practical purposes in the exact same spot as some of the most life-rich and biodiverse places in the known universe, and yet no species has established a permanent self-sustaining population there. Ever.

[...]

Even with steady year-round subtropical sunlight, even with conditions infinitely more nurturing than those found anywhere on Mars, the summit of Mount Everest cannot support complex life. It's too cold; the air is too thin; there is no liquid water for plants and animals to drink. Standing on the top of Mount Everest, a person can literally look at places where plants and animals happily grow and live and reproduce, yet no species has established a permanent self-sustaining population on the upper slopes of Everest. Even microbes avoid it.

Life on earth writ large, the grand network of life, is a greater and more dynamic terraforming engine than any person could ever conceive. It has been operating ceaselessly for several billions of years. It has not yet terraformed the South Pole or the summit of Mount Everest. On what type of timeframe were you imagining that the shoebox of lichen you send to Mars was going to transform Frozen Airless Radioactive Desert Hell into a place where people could grow wheat?

Est-il acceptable, souhaitable, responsable d'utiliser des arguments d'essence néo-libérale (rentabilité, performances, compétitivité etc) pour défendre des valeurs progressistes ?

Whenever I make an argument like that I get similar responses: “You have to be pragmatic” or “Games of intellectual purity don’t get us anywhere” or “The argument needs to work for those we want to convince” or “Shut up” (it still is the Internet after all). And I am not saying that those replies don’t have any value. They are pragmatic. Saying something in a way that people in power like or that supports their world view increases the chances of creating change. Especially when one has facts and studies at one’s disposal (at least that was how it maybe was a bit in the past before the reign of Musk).

But I think there is a cost. Because I think people mistake tactics for strategy. The basic difference between tactics and strategy is that while tactics focus on smaller, short-term actions, strategy looks at the long-term big picture. And I feel like that is where the costs stack up.

Classé dans: 

IA et vie privée : selon les spécialistes de la sécurité, tout n'est pas si rose et malgré le grand mixage qu'est la phase d'entraînement du modèle de données, il est tout à fait possible que des données privées soient involontairement préservées, et donc publiquement accessibles dans le modèle final.

I think most privacy experts would agree with this post so far. There are divergences of opinion when you start asking "do the benefits of AI outweigh the risks". If you ask me, the benefits are extremely over-hyped, while the harms (including, but not limited to, privacy risks) are very tangible and costly. But other privacy experts I respect are more bullish on the potentials of this technology, so I don't think there's a consensus there.

AI companies, however, do not want to carefully weigh benefits against risks. They want to sell you more AI, so they have a strong incentive to downplay the risks, and no ethical qualms doing so. So all these facts about privacy and AI… they're pretty inconvenient. AI salespeople would like it a lot if everyone — especially regulators — stayed blissfully unaware of these.

Classé dans: 

Le sommet de Paris fait suite à deux rassemblements précédents, organisés à Londres en 2023 et à Séoul en 2024, dédiés en théorie à la régulation de l’intelligence artificielle, mais consacrés en pratique à faire la promotion de cette technologie, à la présenter comme inévitable et surtout à poser les industriels eux-mêmes comme acteurs incontournables de toute future réglementation. Ces sommets sont parfois présentés - abusivement, dans la mesure où ils n’ont pas de caractère intergouvernemental officiel et n’appuient ni ne visent la rédaction d’aucun traité - comme des « COP de l’IA ». Et pourtant, on y retrouve bien le même mélange de discussions techniques, d’annonces publicitaires et de « business as usual » qui constitue désormais la norme en matière de gouvernance mondiale.

[...]

Comme le rappelle le média spécialisé Synth, les géants de la tech sont passés maîtres dans l’art de s’imposer comme leurs propres régulateurs. Après le sommet de Londres en 2023 a ainsi été annoncée la création d’un « AI Safety Institute », dont les rênes ont immédiatement été confiés à un représentant du secteur du capital-risque et une cadre d’OpenAI. La France suit la même tendance, mais en limitant encore plus les ambitions en matière de régulation.

Classé dans: 

By crunching all the different datasets together, the researchers were able to create what they described as an aggregate “factuality score” for each politician and each party, based on the links that MPs had shared on Twitter.

The data showed conclusively that far-right populism was “the strongest determinant for the propensity to spread misinformation”, they concluded, with MPs from centre-right, centre-left and far-left populist parties “not linked” to the practice.

Une vidéo très claire sur le fonctionnement de la dette des États, et pourquoi faire du chantage en prétendant que « nos enfants devront la rembourser » pour pousser des politiques d’austérité ne tient pas debout.

Currently, Aaron predicts that Nepenthes might be most attractive to rights holders who want AI companies to pay to scrape their data. And many people seem enthusiastic about using it to reinforce robots.txt. But "some of the most exciting people are in the 'let it burn' category," Aaron said. These people are drawn to tools like Nepenthes as an act of rebellion against AI making the Internet less useful and enjoyable for users.
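The idea behind a tarpit like Nepenthes can be sketched in a few lines (an illustrative toy, not the project's actual code): every page deterministically links to freshly generated child pages, so a crawler that ignores robots.txt wanders an endless, self-similar maze instead of real content.

```python
import hashlib

def page_links(path: str, fanout: int = 3) -> list[str]:
    """Derive `fanout` child URLs deterministically from a page path.

    Each child path is generated by hashing the parent path plus an
    index, so the "site" is infinite but needs no storage: the same
    request always yields the same page and the same outgoing links.
    """
    links = []
    for i in range(fanout):
        digest = hashlib.sha256(f"{path}/{i}".encode()).hexdigest()[:12]
        links.append(f"{path}/{digest}")
    return links

# A crawler entering at /maze finds three links, each leading to three
# more, and so on without end.
print(page_links("/maze"))
```

Real tarpits add deliberate response delays and generated filler text; this sketch only shows the core trick of an unbounded, stateless link graph.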

Geuter told Ars that he considers Nepenthes "more of a sociopolitical statement than really a technological solution (because the problem it's trying to address isn't purely technical, it's social, political, legal, and needs way bigger levers)."

To Geuter, a computer scientist who has been writing about the social, political, and structural impact of tech for two decades, AI is the "most aggressive" example of "technologies that are not done 'for us' but 'to us.'"

Classé dans: 

This is all being spun by the Trump administration as an effort to save money and reduce government "waste," but no one should be fooled. The sadism of these efforts belies the psychological damage motivating people like Musk and Russ Vought, the Project 2025 author Trump nominated to run the Office of Management and Budget (OMB). MAGA is certainly a racist and sexist movement, but it's crucially also a movement of bullies lashing out at people whose skills and talents remind MAGA folks of their own insecurities. Nowhere is this more evident than in the unhinged MAGA hatred of federal workers, a group largely known for being humble and hard-working, reminding MAGA leaders of their own lack of basic virtues. 

Bitcoin was never used by most Salvadorans, its modern city was never built, and now it will cease to be legal tender in El Salvador, the first country in the world to adopt it in 2021: a complete failed economic bet by President Nayib Bukele. Congress, dominated by the ruling party, approved last Wednesday a confusing reform to the Bitcoin Law at the request of Bukele’s government, which had no other option to receive the $1.4 billion credit agreed in December with the International Monetary Fund (IMF).

Résumer un texte implique de comprendre les enjeux de celui-ci, en lien avec le monde réel et avec les intentions et valeurs de l’auteur, ce dont les IA génératives sont incapables. À la place, elles raccourcissent le texte (suivant une logique où ce qui prend le plus de place est le plus important, ce qui est loin d’être toujours vrai), et tendent également à le déformer en fonction de leurs paramètres d’entraînement (les textes dont elles ont été nourries durant leur « entraînement »).

ChatGPT’s summary contained mostly things that can be found in the full text. There was the occasional incorrectness of course. [...]

But the main proposal — that Council of Stakeholders — which takes up about 25% of the main text of the paper, is not mentioned in ChatGPT’s summary at all. Instead, that concrete suggestion becomes a few empty sentences. And that was true for a few other essential elements of the paper. In other words: the summary makes a good first impression, though not very concrete in terms of proposals, but reading the summary alone, you will not be aware that the paper actually has a set of very concrete proposals and options, most of which are missing from ChatGPT’s summary.

Classé dans: 

Avec le boom de l'IA et sans aucune mesure prise pour limiter cette pollution, ce chiffre pourrait atteindre 2,5 millions de tonnes en 2030, selon les chercheurs, soit l'équivalent de 13,3 milliards de smartphones jetés.

Par ailleurs, les serveurs informatiques et les puces nécessaires à l'IA impliquent l'utilisation de métaux rares, dont l'extraction intensive, notamment en Afrique, repose sur des procédés polluants, comme l'a noté l'Agence de la transition écologique (Ademe) en France.

Le premier ministre a évoqué, lundi 27 janvier sur LCI, le « sentiment de submersion » généré par l’immigration. Des propos qu’il a réitérés et assumés le lendemain, au sein de l’Assemblée nationale, indignant la gauche. Pourtant, les chiffres et les études sur le sujet démentent clairement cette idée. Entretien avec Tania Racho, spécialiste des questions relatives aux droits fondamentaux.

As many people reading this know, last month I retired from my position as an opinion writer at the New York Times—a job I had done for 25 years. Despite the encomiums issued by the Times, it was not a happy departure. If you check out my Substack, you will see that I have by no means run out of energy or topics to write about. But from my perspective, the nature of my relationship with the Times had degenerated to a point where I couldn’t stay.


Aucune technologie n’est neutre ni inéluctable. Chacune se déploie dans un certain contexte économique et politique qui oriente les choix. Cela a toujours été le cas pour le numérique, depuis le début. L’extrême concentration d’acteurs et de moyens qui préside au déploiement des IAs génératives devrait aider à prendre conscience de cet état de fait. L’annonce récente de 500 milliards de dollars à consacrer au sujet donne la (dé)mesure de la chose. Je ne détaillerai pas les courants politiques et philosophiques qui circulent parmi les promoteurs des IAs. Certains acteurs affirment croire à l’avènement des IAs générales, comme résultat inéluctable de l’accumulation de moyens et de ressources. Que l’on fasse miroiter ces IAs capables de sauver le monde, ou qu’au contraire on annonce l’apocalypse, leur prise de pouvoir et la fin de l’humanité, on participe à détourner l’attention des dégâts déjà bien présents ici et maintenant.

Certes, sur la même période, les groupes du CAC 40 ont créé de l’emploi au niveau mondial, + 9% sur quatre ans. Des créations qui sont essentiellement le fait de quelques groupes dans les services à distance (Capgemini, Teleperformance) et le BTP.

En France cependant, les groupes du CAC 40 qui publient des chiffres à ce sujet ont très légèrement réduit leurs effectifs (-0,1%). Plusieurs piliers du CAC 40, dont Michelin et ArcelorMittal, ont annoncé de nouvelles suppressions d’emploi en 2024 qui vont toucher notamment la France.

Mais le « train du futur » s’est révélé un gouffre financier n’aboutissant qu’à des technologies inadaptées. Ses tubes requièrent une infrastructure linéaire pour conserver la vitesse promise. Ce qui impose d’artificialiser un tracé équivalent à une autoroute sur des centaines de kilomètres et de creuser des passages dans les zones dénivelées. Un fonctionnement plus adapté aux grands espaces américains qu’aux vallons entre Saint-Étienne et Lyon. Le maintien sous vide de ses tubes nécessite également une énergie considérable.

« Non seulement c’est un désastre environnemental, mais cette organisation de l’espace qui relie les métropoles les unes aux autres participe à la désertification des villes moyennes et des campagnes », fustige Jean-Louis Pagès, conseiller régional écologiste de Nouvelle-Aquitaine. La vitesse promise génère aussi des problèmes de confort et surtout de sécurité en cas de dépressurisation.

Bluesky promet de se transformer un jour en un réseau décentralisé, mais tout le monde n'est pas convaincu.

Once the Blue Sky folks decide that interoperability is bad actually, they can just unilaterally turn it off. Remember when Google and Facebook strangled Jabber / XMPP by just deciding "Nah, we're not gonna federate any more"? Pepperidge Farms remembers.

Les scientifiques sont clairs : le dérèglement du climat menace l'Humanité à moyen terme et pourtant les politiques climatiques sont encore extrêmement timides. D'où cette question : pourquoi cette inaction climatique ? Pourquoi l'alerte des scientifiques n'est-elle pas entendue ? Pour y répondre nous recevons Albert Moukheiber, docteur en neurosciences, psychologue clinicien et auteur de Neuromania (Allary Éditions).

There is increasing disparity in the world today as an "aristocratic oligarchy" is amassing wealth at unforeseen levels, a report published by development organization Oxfam said.

Published ahead of the World Economic Forum (WEF) in Davos, the report titled "Takers Not Makers" said billionaire wealth soared by $2 trillion (€1.94 trillion) in 2024, growing three times faster than the previous year.

janvier

L’alliance entre Donald Trump et des patrons de plateforme sociale, tels Elon Musk ou Mark Zuckerberg, représente une menace à l’échelle mondiale sur le libre accès à une information fiable. « Le Monde » fait donc le choix d’interrompre le partage de ses contenus sur X et de redoubler de vigilance sur des plateformes comme TikTok et sur celles de Meta.

Selon un sondage réalisé par YouGov pour Le HuffPost, 70 % des Français sont favorables à la création d’un impôt sur les ménages les plus riches pour financer la baisse du déficit public. Un score en légère hausse en comparaison du mois de septembre, lorsque 67 % des sondés se disaient favorables à cette option. Détail intéressant, l’idée séduit l’ensemble du spectre politique, y compris chez les sympathisants de formations politiques hostiles à toute hausse de fiscalité chez les plus riches. Ainsi, 73 % des électeurs LR se disent favorables à cette idée, tandis que ceux de Renaissance l’approuvent à 75 %.

Entretien avec Renaud Chaput, responsable technique de Mastodon et l’un des francophones du projet (créé par un Allemand), qui réunit aujourd’hui des contributeurs et des utilisateurs à travers le monde.

We have arrived at an obscene inequality crisis, in which wealth is concentrated in the hands of a powerful few, at the cost of crippling hardship, precarity, and compromised well-being for the many. When a single billionaire can accumulate more money in 10 seconds than their employees make in one year, while workers struggle to meet the basic cost of rent and medicine, then yes, every billionaire really is a policy failure. Here’s why.

Les grands enjeux pour Wikipédia sont tout autres que ces querelles « woke ou pas woke ». Les préoccupations portent plutôt sur les questions de fiabilité de l’information, avec les multiples campagnes de désinformation et l’utilisation de l’IA qui ciblent Wikipédia, mais aussi et surtout sur les sources utilisées pour écrire des articles sur Wikipédia. En effet, la règle de base est que Wikipédia ne peut être qu’une synthèse du savoir existant, et n’est en aucun cas un lieu de production de savoir inédit. Donc si les sources utilisées sont « corrompues », cela se retrouvera nécessairement sur Wikipédia.

Note : Comme toujours lorsqu'il est question de "wokisme", il est important de se souvenir que ce terme n'a pas de définition précise et est essentiellement une invention des milieux réactionnaires pour attaquer les mouvements humanistes ou progressistes sans avoir besoin d'argumenter, en donnant l'impression de s'attaquer à un phénomène "inquiétant" ou "excessif", et sans jamais avoir besoin d'exprimer clairement les valeurs qu'ils poussent réellement. C'est un épouvantail et une arme de manipulation rhétorique, absolument pas un fait objectif.

Quand le Figaro accuse Wikipédia d'être "woke", la première réponse devrait être une déconstruction de cette affirmation et des valeurs qui la sous-tendent; mettre à jour la vision du monde et le projet politique qui se cachent derrière cette accusation, et réfuter sa légitimité même.

The stars seem to have aligned in Bluesky's favour. But the fundamental tension here isn't about politics - it's about incentives. Venture capital doesn't deploy funding out of altruism or commitment to digital democracy. The standard VC playbook demands exponential growth followed by monetization. Bluesky claims it "doesn't want to rely on advertising," but then what? Subscription models? Premium features? The moment you take VC money, you're signing up for their endgame.

Facebook built up its Trust & Safety program after the 2016 election debacle. The company’s name was getting dragged through the mud. There was the Cambridge Analytica scandal and the misinformation factories and the political ads paid for in Rubles. Mark Zuckerberg announced he was taking responsibility. It wouldn’t happen again.

He didn’t like it, though. And he didn’t mean it. Zuckerberg’s commitment to Trust & Safety was as deep as Exxon’s commitment to combating the climate crisis. He’ll only commit resources when it seems like he has to.

Despite these ugly attitudes from Trump and his supporters, in the past few months, there's been a deluge of pundits expressing confusion and outrage at straight women who conclude that it's better to be single than waste your one precious life dating — much less marrying — conservative men. Trump's running mate, Sen. JD Vance of Ohio, famously and repeatedly insisted that such women are "miserable cat ladies," even though it's self-evident that cats make better company than MAGA men. Even the Washington Post editorial board got involved, calling on women to "compromise" by marrying Trump voters. 

In 2024, women increasingly responded to these pressures with a "no thank you," though often phrased less politely. After Trump won the election, there was even a spike in interest in the South Korean "4b" movement, where women quit dating, marrying, or having children with men. In truth, this idea was more aspirational than realistic, but the discourse mattered nonetheless. It created space for women to ask the question: Why should they sacrifice their happiness to save the institution of heterosexuality?

This is not to downplay the extent to which Trump is grifting his devotees and those crypto traders looking to make a buck on memecoin speculation. But it is important that we accurately report on his cons and do not contribute to misleading crypto hype for the sake of large numbers.

You don’t have to be a cynic to see a flywheel effect: Crypto has become a meaningful political constituency not because its technology has broad, undeniable utility, but because it has made certain people extremely wealthy, which has attracted a great deal of attention and interest. The industry courts politicians with its wealth, and politicians pander for donations by making promises. Ultimately, the pro-crypto candidate wins, and the price of bitcoin surges, making many of these same people richer and thus able to exert more influence.

[...]

Crypto’s future is uncertain, but its legacy, at least in the short term, seems clearer than it did before November 5. It turns out that cryptocurrencies do have a very concrete use case. They are a technology that has latched on to, and then helped build, a culture that celebrates greed and speculation as virtues just as it embraces volatility. The only predictable thing about crypto seems to be its penchant for attracting and enriching a patchwork of individuals with qualities including, but not limited to, an appetite for risk, an overwhelming optimism about the benefits of technology, or a healthy distrust of institutions. In these ways, crypto is a perfect fit for the turbulence and distrust of the 2020s, as well as the nihilism and corruption of the Trump era.

Musk, the world's richest person, spent more than a quarter of a billion dollars helping Trump get elected president in November. Removing the crash-disclosure provision would particularly benefit Tesla, which has reported most of the crashes – more than 1,500 – to federal safety regulators under the program. Tesla has been targeted in National Highway Traffic Safety Administration (NHTSA) investigations, including three stemming from the data.

Soupçonné d'avoir tué Brian Thompson, patron d'une assurance privée aux États-Unis, et aujourd'hui devant la justice de New York, Luigi Mangione a été présenté comme un "bad boy". Pourtant, en ligne, il a été adulé. Une fascination que les journaux n'ont pas cherché à analyser. Entre tentatives ratées de profilage numérique du mis en cause, refus de politiser son acte et de nommer la violence du système de santé privée, mais aussi les cris d'orfraie moralistes sur la sacralité de la vie humaine, la presse étasunienne a fait l'étalage de sa déconnexion sociale.

There are indeed theoretical approaches to brain function, including to the most mysterious thing the human brain can do – produce consciousness. But none of these frameworks are widely accepted, for none has yet passed the decisive test of experimental investigation. It is possible that repeated calls for more theory may be a pious hope. It can be argued that there is no possible single theory of brain function, not even in a worm, because a brain is not a single thing. (Scientists even find it difficult to come up with a precise definition of what a brain is.)

In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.

In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.

The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.

By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.

The mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’, drawing parallel after parallel between the components of the computing machines of the day and the components of the human brain.

Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software.

There are a number of theories why gamers have turned their backs on realism. One hypothesis is that players got tired of seeing the same artistic style in major releases. Others speculate that cinematic graphics require so much time and money to develop that gameplay suffers, leaving customers with a hollow experience.


When someone tries to sell their tech (step 1 in the chain of reasoning) with massively large claims (step 3 in the chain), look at whether step 2 actually exists in reality. Because if it doesn’t, that’s not “disruptive innovation” or “a breakthrough” or “a unicorn”. That is bullshit. It’s not just a waste of your time, it’s a way to infantilize you.


AI can turn some impressive party tricks, but it's unsuited for solving serious problems in the real world. This is true of predictive AI, whose correlations are data-driven conspiracy theories, and of large language models like ChatGPT, whose plausible waffle is always trying to pull free of the facts. The real issue is not only that AI doesn't work as advertised, but the impact it will have before this becomes painfully obvious to everyone. AI is being used as a form of 'shock doctrine', where the sense of urgency generated by an allegedly world-transforming technology is used as an opportunity to transform social systems without democratic debate.

[...]

Real AI isn't sci-fi but the precaritisation of jobs, the continued privatisation of everything and the erasure of actual social relations. AI is Thatcherism in computational form. Like Thatcher herself, real world AI boosts bureaucratic cruelty towards the most vulnerable. Case after case, from Australia to the Netherlands, has proven that unleashing machine learning in welfare systems amplifies injustice and the punishment of the poor. AI doesn't provide insights as it's just a giant statistical guessing game. What it does do is amplify thoughtlessness, a lack of care, and a distancing from actual consequences. The logics of ranking and superiority are buried deep in the make-up of artificial intelligence; married to populist politics, it becomes another vector for deciding who is disposable.

Here’s a sad statistic for you: In the United States, we have a whopping 1.4 million people employed with the job of DENYING HEALTH CARE, vs only 1 million doctors in the entire country! That’s all you need to know about America. We pay more people to deny care than to give it. 1 million doctors to give care, 1.4 million brutes in cubicles doing their best to stop doctors from giving that care. If the purpose of “health care” is to keep people alive, then what is the purpose of DENYING PEOPLE HEALTH CARE? Other than to kill them? I definitely condemn that kind of murder. And in fact, I already did. In 2007, I made a film – SICKO – about America’s bloodthirsty, profit-driven and murderous health insurance system. It was nominated for an Oscar. It’s the second-largest grossing film of my career (after Fahrenheit 9/11). And over the past 15 years, millions upon millions of people have watched it including, apparently, Luigi Mangione.

Depuis son arrestation, les médias peinent à décrire Luigi Mangione autrement que comme un garçon sans histoire. Le fait que quelqu’un comme lui puisse se transformer en tueur de PDG de sang-froid a de quoi effrayer bien des puissants, puisque son geste semblait impossible à prévenir. Et c’est donc cette banalité du coupable présumé qui l’a transformé d’ores et déjà en icône de la culture populaire. Cet engouement n’est pas neutre politiquement puisque ce meurtre a mis les assurances privées et leurs pratiques au cœur du débat public aux Etats-Unis.

One of the first pieces of publicized evidence in the wake of the killing was that three 9mm cartridges left at the scene were found to have been labeled with three phrases: “deny,” “defend,” and “depose.” It was speculated the first two phrases referred to the oft-cited practice of health insurance companies to deny coverage to clients and defend these decisions with legalistic trickery. “Depose,” of course, has multiple meanings, but in this context just two: one might depose a health care company CEO in court, and one might also depose a figure of terrific unaccountable authority, such as a king or tyrant.

We’re drowning in decisions, but starving for meaning. We’ve developed sophisticated frameworks for everything from A/B testing website buttons to optimizing YouTube thumbnails, but we don’t have robust (or anything approaching robust) methods for distinguishing between decisions that matter and those that don’t in our personal lives.


Le protocole AT a été pensé pour être décentralisé. Dans la pratique, Bluesky ne l’est pas. La possibilité de créer facilement un PDS n’est qu’un petit élément parmi d’autres. Même si l’on peut créer des relais, leur mise en œuvre est complexe et sans doute bien trop onéreuse en stockage et bande passante pour être intéressante.

On ne peut pas dire que Bluesky soit actuellement décentralisé, et encore moins fédéré. Il y a bien un centre, et il est géré par l’entreprise Bluesky. Sans son relais, rien ne fonctionne. Chaque serveur de données personnelles ne sert ainsi que comme petit réservoir pour les informations d’une personne, incapable de fonctionner par lui-même.

The only good news about space colonies designed by Elon Musk and Jeff Bezos is that they aren’t going to happen. Musk will not be launching a million people to Mars in 15 years, not even close (although I do see some fantasy synergy between Musk and Trump’s plan to deport millions of people on day one of his presidency — maybe he’s dreaming of filling his Martian city with Puerto Ricans, Haitians, and South American gang-bangers). Bezos is not going to build an office park in Earth orbit, not as long as he can bulldoze farm land for cheap and assemble giant concrete boxes here on Earth. Those are two professional liars. Don’t believe anything they promise, because all they really promise is controlling you to their benefit.

I have — since March — expressed great dismay about the credulousness of the media in their acceptance of the "inevitable" ways in which generative AI will change society, despite a lack of any truly meaningful product that might justify an environmentally-destructive industry led by a company that burns more than $5 billion a year and big tech firms spending $200 billion on data centers for products that people don't want.

The reason I'm repeating myself is that it's important to note how obvious the problems with generative AI have been, and for how long.


Dans les lieux de travail, l’IA apparaît souvent de manière anodine, en étant peu à peu intégrée à des applications de travail existantes. Dans la pratique, l’automatisation remplace rarement les travailleurs, elle automatise très partiellement certaines tâches spécifiques et surtout reconfigure la façon dont les humains travaillent aux côtés des machines. Les résultats de l’IA générative nécessitent souvent beaucoup de retravail pour être exploités. Des rédacteurs sont désormais embauchés pour réhumaniser les textes synthétiques, mais en étant moins payés que s’ils les avaient écrits par eux-mêmes, sous prétexte qu’ils apportent moins de valeur. Les chatbots ressemblent de plus en plus aux véhicules autonomes, avec leurs centres de commandes à distance où des humains peuvent reprendre les commandes si nécessaire, et invisibilisent les effectifs pléthoriques qui leur apprennent à parler et corrigent leurs discours. La dévalorisation des humains derrière l’IA occulte bien souvent l’étendue des collaborations nécessaires à leur bon fonctionnement.

Dans [son] livre, le sociologue [Antonio Casilli] démontre combien ce travail invisible, qui est lié aux plateformes et aux datas, reproduit une dichotomie nord-sud. Le travail du clic est un job précaire réalisé par des millions de personnes et que l’on retrouve partout sur la planète. Parfois, ce sont des freelances aux États-Unis, des Philippins dans un cybercafé ou des milliers de travailleurs africains dans des fermes du clic. Il y a néanmoins des différences très importantes dans les conditions de travail. Tout en bas de l’échelle, on retrouve les travailleurs africains. Finalement, c’est une nouvelle dimension néocoloniale, loin de l’image épurée de la Silicon Valley. Le fait que les Big Tech invisibilisent ce travail sous-entend que celui-ci n’est pas très important – pourtant, il est essentiel. L’un des travailleurs que j’ai rencontrés au Kenya m’a dit : « Il n’y a pas d’intelligence artificielle, seulement l’intelligence africaine. » Ça résume assez bien les choses.

Artificial intelligence is a threat to educational institutions – as deeply flawed as these may be – not because it's some incredible technological achievement that's more powerful than the human mind (hahahahahahahahahahahahahahaha); but because it is the manifestation of a series of reactionary political beliefs. AI is inextricably bound up in ideologies and practices that seek to undermine unions, exploit labor, re-inscribe racial and economic hierarchies, and centralize control – of knowledge and knowing ("intelligence" in all its various military and eugenicist histories) – in the hands of a few giant technology corporations. (Folks, that's fascism.)


What should we call a society in which a prominent Conservative party politician – that is to say, not a fascist oddball or some random talking head – calls for "violence against irregular migration", i.e. for shooting migrants at the EU's borders; in which up to 30% believe that it is perfectly ok to vote for a party that is "in parts certified right-wing extremist", for which, read: fascist; in which "climate protection" means protection against climate activists, and "climate adaptation" does not mean building higher dikes, but building higher walls against migrants; in which migrants who, for completely incomprehensible reasons, want to migrate to the parts of the world that are still inhabitable (whereby of course the vast majority migrate to their home or neighbouring countries), are demonised as "criminal gangs of human traffickers", in order to legitimise a "war on migration/migrants"? A society that Anna Becker sums up brutally but aptly on Bluesky: "First we exploit countries, then we destroy a large part of global livelihoods, and instead of saving people from the consequences of our actions, we seal ourselves off by force and let them die in the Mediterranean. And the voters love it."

Precisely: an asshole-society.

When the world’s on fire, reporting “some say it’s not that warm” isn’t brave — it’s complicit. If journalists can’t bring themselves to point at a full-blown authoritarian and say “this is some bad shit,” then maybe they should switch to weather forecasting, where at least the stakes are lower.

The time for milquetoast coverage is over. Either tell the truth, naked and ugly, or don’t act surprised when your freedom to write anything at all goes up in smoke.
