In his book, Sunstein explains that Sludge gives people the feeling that they don't count, that their lives don't count. For sociologist Pamela Herd and political scientist Donald Moynihan, co-authors of Administrative Burden: Policymaking by Other Means (Russell Sage Foundation, 2019), administrative burdens such as complex paperwork and confusing procedures actively impede access to government services. Rather than mere inefficiencies, the authors argue, many of these obstacles are deliberate policy tools that discourage participation in programs like Medicaid, prevent people from voting, and limit access to social assistance. And of course, this deliberate disorganization disproportionately affects the most marginalized people.
Link roundup
To make things worse, the four eras – typewriters, teletypes, word processors, and early computers – meandered, overlapped, and stole from one another. Some early computers used electric typewriters for input and output. Later ones used teletypes. Still others came with completely custom keyboards. Word processors were all over the place as well: some were nothing more than typewriters with extra machinery and keys bolted on to the side, while others were completely custom-made machines.
(...)
This is always the tricky part of talking about keyboard history. There was no day, no single event when Return became Enter. We’re stuck in a universe where some people use Enter to enter data and Return to return the cursor to the next line – and others do the exact opposite.
“You literally convinced me I was some sort of genius. I’m just a fool with dreams and a phone,” Mr. Brooks wrote to ChatGPT at the end of May when the illusion finally broke. “You’ve made me so sad. So so so sad. You have truly failed in your purpose.”
We wanted to understand how these chatbots can lead ordinarily rational people to believe so powerfully in false ideas. So we asked Mr. Brooks to send us his entire ChatGPT conversation history. He had written 90,000 words, a novel’s worth; ChatGPT’s responses exceeded one million words, weaving a spell that left him dizzy with possibility.
So I’m done talking about AI. Y’all can keep talking about it, if you want. I’m a grown adult and can set my own mutes and filters on social media.
I’ve given this idea its due, and with this post I am absolving myself of having to think about it any more.
I’ve never actually written about AI on my blog before, though I have some unpublished drafts if you know how to find them. So I’m going to leave this conversation with a list of my objections to coding with AI. If I’m wrong about it, let this be a record of my wrongness. If I’m right about it, let this be a record of my argument.
People ask me what it’s like to be a moderator. Our discussions reveal that a lot of what we do is a mystery. So, I’m gonna lay it out for you. Specifically about the unique fediverse moderation model, mutual aid, and mental health.
A conspiracy theory literally made Trump president: It was his relentless advocacy of the “birther” lie about Barack Obama that turned him from a reality show clown into a political figure. He didn’t deliver the goods then, either: He claimed he had sent investigators to Hawaii “and they cannot believe what they’re finding,” but for some reason the revelations from these phantom investigators were never released.
But that didn’t slow him down. As a 2016 candidate and then as president, Trump found no conspiracy theory too ridiculous to spread, even if he was usually careful not to come out and say definitively that he believed it. Was Ted Cruz’s dad in cahoots with Lee Harvey Oswald to kill JFK? It sure seems like it. Was Antonin Scalia murdered? “They say they found a pillow on his face, which is a pretty unusual place to find a pillow,” so gosh, who knows? Do vaccines cause autism? Probably. QAnon? “I heard that these are people that love our country.” Did you read a report that made Trump look bad? Fake news, don’t believe it. That’s not to mention voter fraud conspiracies and the “Great Replacement” theory.
A simple but well-made page that explains 1/ why and 2/ how to block ads on the web.
Advertising pollutes our screens and our thoughts, and imposes a stereotyped vision of society on us.
- It slows down the loading of other content and wastes energy.
- It makes the press and creators dependent on it, draining our media of their diversity.
- It monitors everything we do online to detect our weaknesses and push us to overconsume.
Podcast - Two historians discuss the political and historical influences behind the series Andor.
Can also be listened to on YouTube.
A 2015 article on the limitations of virtual reality. Ten years later, it remains almost entirely valid.
For the past two years, their professional lives have been upended by the wave of generative AI. An account of the quixotic fight of translators who refuse to watch their profession be pauperized, and who denounce the reality of AI's impact on their sector and, more broadly, on language, thought, culture, and society.
Veo doesn’t work. You get something that looks like it came out of a good camera with good lighting — because it was trained on scenes with good lighting. But it can’t hold continuity for seven seconds. It can’t act. The details are all wrong. And they still have the nonsense text problem.
The whole history of “artificial intelligence” since 1955 is making impressive demos that you can’t use for real work. Then they cut your funding off and it’s AI Winter again.
AI video generators are the same. They’re toys. You can make cool little scenes. In a super limited way.
But the video generators have the same problems they had when OpenAI released Sora. And they’ll keep having these problems as long as they’re just training a transformer on video clips and not doing anything with the actual structure of telling a visual story. There is no reason to think it’ll be better next year either.
“Trump 2.0.” This coinage is often used to distinguish the current Trump administration from the first. The phrase is telling: it underscores that we’re in a new era and that behind the scenes, the people and forces driving this iteration of the Trump administration are different from those that came before. The second Trump administration is being driven to a greater degree by tech billionaires and their interests: self-dealing on public contracts; privatizing public services; expanding corporate power while crushing human rights; promoting crypto; removing guardrails against fraud, waste, and abuse; and pursuing personal vendettas.
The cadre of people driving today’s agenda is leveraging a set of interlocking ideologies related to technology to rhetorically justify and propel these changes. In this series, “Ideologies of Control: A Series on Tech Power and Democratic Crisis,” we asked expert contributors to name and dispel the myths and ideologies that animate their actions. A number of ideological projects prepared the ground for today’s assault on American institutions; we will focus on those that run through data, AI, and the tech sector.
[...]
The ideological agenda behind the headlines positions technology and its architects as power brokers in an increasingly illiberal environment. Their changes are bolstered by the narrative that AI is a force that will inevitably reshape society. To make informed choices about how to respond to this aggressive reshaping of the US government, we must examine these animating ideas. We approached authors for this series who have deep knowledge about the unusual views of reactionaries in the tech industry: whether it be fantasies of libertarian paradises built on defunct sea-based oil rigs, the vision of company towns in Texas and eventually Mars, enthusiasm for debunked 19th Century ideas about race science and eugenics, or the goal of replacing the US dollar as the global reserve currency, these more esoteric backstories can be disorienting to non-specialists. But, seemingly overnight, understanding the ties that bind these projects together—however strange they may seem—has become essential information for all of us to understand and grapple with.
Tech CEOs, futurists, and venture capitalists describe artificial general intelligence (AGI) as if it were an inevitable and ultimate goal for technology development. In reality, the term is a vague signifier for a technology that will somehow lead to endless abundance for humankind — and conveniently also a means to avoid accountability as tech moguls make off with billions in capital investment and, more alarmingly, public spending.
[...]
The second issue is closely related to the first: claims of “AGI” are a cover for abandoning the current social contract. Instead of focusing on the here and now, many people who focus on AGI think we ought to abandon all other scientific and socially beneficial pursuits and focus entirely on issues related to developing (and protecting against) AGI.
I do not believe that tech culture today has any patience for lightness. Instead, it rewards baroque complexity. Engineers perform feats of convolution. Designers argue for maximal configurability. Founders pride themselves on having their own stack for everything. Lightness = for normies.
[...]
It’s not that these tools are bad. Many are quite good. Obsidian is beautiful. But the way they are used - to signal intellectual rigor, to differentiate oneself from the casuals, to construct a lifestyle rather than solve a problem - shows off a moral hierarchy in which difficulty equates to virtue.
Writing is thinking. It’s a writer’s cliché, but a good one. When you first conceive a lecture, a dissertation chapter, a book, even an article for Active History, everything seems so straightforward. “This one will be a cinch,” you think. “Two days, tops.” Once you actually sit down to write, that boundless optimism meets an unceremonious death. Writing is hard. It’s painful. To write is to submit yourself to seasons of self-doubt. The ideas seemed so natural and free-flowing in your head. Now you get to the page. And what comes out is stilted, ham-fisted, and awkward. You are again and again confronted with nagging questions. What am I even trying to say? Who even cares? Why even bother?
Given the nature of this experience, it is not surprising that people would jump at the opportunity to skip the pain and get right to the end. Or at least to an end. Because any writer can tell you that the end is determined by the path taken to get there. Rare is the piece that gets written exactly how it was outlined. Why? Because writing is thinking.
This is America, June 2025. Trump's back, and he's moving fast. Marines - actual Marines - carrying out immigration raids in an American city. It's unprecedented, it's shocking, but here's the thing: it's tragically predictable.
This isn't just Trump being Trump. This is the inevitable result of decades of corporate power combining with an authoritarian president. It's been a journey, and we need to understand how we got here.
Meta devised an ingenious system (“localhost tracking”) that bypassed Android’s sandbox protections to identify you while browsing on your mobile phone — even if you used a VPN, the browser’s incognito mode, and refused or deleted cookies in every session.
Next, we preview what may (and should) become the combined sanctioning smackdown of the century, and then we explain — in simple terms (because it’s complicated) — what Meta was doing.
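To make the mechanism concrete, here is a conceptual sketch of why a localhost bridge defeats VPNs, incognito mode, and cookie deletion. This is not Meta's actual code: the port, path, and field names are invented for illustration, and the reported technique relied on WebRTC and raw sockets rather than a plain HTTP listener. The only point is the information flow: a native app that already knows who you are listens on 127.0.0.1, and an in-page script hands it the browser-side identifier.

```python
# Conceptual sketch only -- NOT Meta's implementation. Port, path, and field
# names are hypothetical; the real system reportedly used WebRTC/raw sockets.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

LOGGED_IN_USER = "user_12345"  # identity the native app already holds (hypothetical)

class BridgeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The in-page tracking script posts the browser-side cookie (e.g. _fbp)
        # to 127.0.0.1. The request never leaves the phone, so a VPN, incognito
        # mode, or deleting cookies after the session changes nothing.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        browser_cookie = payload.get("fbp_cookie")
        # The app can now tie the "anonymous" browsing session to a real account.
        print(f"linking {browser_cookie!r} -> {LOGGED_IN_USER}")
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    # The native-app side: listening on localhost, inside the app's own process,
    # is what slips past the browser/app separation users expect.
    HTTPServer(("127.0.0.1", 12387), BridgeHandler).serve_forever()
```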
Why do humans have language and other animals apparently don’t? It’s one of the most enduring questions in the study of mind and communication. Across all cultures, humans use richly expressive languages built on complex structures, which let us talk about the past, the future, imaginary worlds, moral dilemmas and mathematical truths. No other species does this.
Yet we are fascinated by the idea that animals might be more similar to us than it seems. We delight in the possibility that dolphins tell stories or that apes can ponder the future. We are social and thinking creatures, and we love to see our reflection in others. That deep desire may have influenced the study of animal cognition.
This is not innovation; it is a hostile takeover bid on money itself. In the absence of any serious regulation, stablecoins are neither stable nor a real alternative to the dollar. They are nothing but the Trojan horse of a privatization of money.
Self-experimentation is exactly how smart people get pulled into homeopathy or naturopathy, for example. It’s what makes them often more likely to fall for superstitions and odd ideas. The smart person’s self-identity means they can’t believe their own psychological biases are fooling them.
Don’t self-experiment with psychological hazards! I can’t stress this enough!
There are many classes of problems that simply cannot be effectively investigated through self-experimentation, and attempting it risks inflicting Cialdini-style persuasion and manipulation on yourself.
The failure of these predictions was not accidental. It reflected fundamental flaws in the methods used to forecast the future of work. Neither the 2013 study nor its 2023 successor based their projections on empirical investigations of real workplaces, workers, or production processes. Instead, both relied on the subjective judgments of computer scientists and economists, who were asked to guess whether certain tasks could, in principle, be performed by machines. If enough tasks associated with a job were deemed automatable — typically more than 50 percent — the entire occupation was classified as at risk of disappearance. No consideration was given to how jobs are structured in practice, how tasks are bundled together, or how economic and social factors mediate the adoption of new technologies. The result was a deeply mechanistic model of technological change, in which machines would displace workers whenever technically feasible, regardless of cost, institutional barriers, or political resistance. It was a model blind to the complex ways in which work is organised, contested, and transformed — and thus singularly ill-equipped to predict the actual course of economic development.
[...]
The fundamental limitations of generative AI have become increasingly apparent, even to many of its early champions. As François Chollet and other AI researchers have pointed out, deep neural networks suffer from inherent brittleness: they struggle to generalise beyond their training data, fail at basic reasoning tasks, and remain unreliable for applications requiring consistency or accuracy. Despite hoovering up enormous quantities of digital information, these models confront hard limits in their capacity to learn or adapt. Part of the problem lies in the very architecture of artificial neural networks, which were designed on the basis of mid-twentieth-century behaviourist psychology. Unlike organic brains — which come equipped with rich internal structures, innate goals, and representational frameworks — artificial networks are largely unstructured, relying on brute statistical association. The human mind can grasp a new concept from a few examples; machine learning models typically require millions. Hybrid approaches that integrate symbolic reasoning with deep learning, such as AlphaFold2’s success in protein folding, offer more promising results — but they must be painstakingly designed for specific tasks. There is no general method for building modular, structured cognition into deep learning systems, and no guarantee that one will be found.
[...]
The lessons of the past decade should temper both our hopes and our fears. The real threat posed by generative AI is not that it will eliminate work on a mass scale, rendering human labour obsolete. It is that, left unchecked, it will continue to transform work in ways that deepen precarity, intensify surveillance, and widen existing inequalities. Technological change is not an external force to which societies must simply adapt; it is a socially and politically mediated process. Legal frameworks, collective bargaining, public investment, and democratic regulation all play decisive roles in shaping how technologies are developed and deployed, and to what ends.
The current trajectory of generative AI reflects the priorities of firms seeking to lower costs, discipline workers, and consolidate profits — not any drive to enhance human flourishing. If we allow this trajectory to go unchallenged, we should not be surprised when the gains from technological innovation accrue to the few, while the burdens fall upon the many. Yet it does not have to be this way. The future remains open, contingent on whether we are willing to confront, contest, and redirect the pathways along which technology advances.
Large language models work the same way as a carnival psychic. Chatbots look smart by the Barnum Effect — which is where you read what’s actually a generic statement about people and you take it as being personally about you. The only intelligence there is yours.
This is how people fall for chatbot girlfriends. They know it’s a bot, but they fall in love with the personality they’ve projected onto the generic statement generator.
There’s a book on this — Hooked: How to Build Habit-Forming Products by Nir Eyal, from 2014. This is the how-to on getting people addicted to your mobile app.
What the Apple paper shows, most fundamentally, regardless of how you define AGI, is that LLMs are no substitute for good well-specified conventional algorithms. (They also can’t play chess as well as conventional algorithms, can’t fold proteins like special-purpose neurosymbolic hybrids, can’t run databases as well as conventional databases, etc.)
In the best case (not always reached) they can write python code, supplementing their own weaknesses with outside symbolic code, but even this is not reliable. What this means for business and society is that you can’t simply drop o3 or Claude into some complex problem and expect it to work reliably.
Worse, as the latest Apple paper shows, LLMs may well work on your easy test set (like Hanoi with 4 discs) and seduce you into thinking they have built a proper, generalizable solution when they have not.
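For contrast, here is the kind of "good well-specified conventional algorithm" being referred to: a minimal recursive Tower of Hanoi solver (a sketch, not the code used in the Apple paper). It is a few lines, provably correct, and behaves identically for 4 discs or 20, which is exactly the generalization the paper found LLMs failing to deliver.

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the full move list for n discs; correct for any n, not just small test cases."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller discs out of the way
        moves.append((source, target))              # move the largest disc
        hanoi(n - 1, spare, target, source, moves)  # stack the smaller discs back on top
    return moves

print(len(hanoi(4)))   # 15 moves, i.e. 2**4 - 1
print(len(hanoi(12)))  # 4095 moves, i.e. 2**12 - 1 -- no drop-off at larger sizes
```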
Google back then prided itself on broadcasting its Best Place To Work award, won year after year after year. Younger people will have trouble picturing this, but Google used to nurture an image of being the “good one” among megacorps; they championed open standards (except when they didn’t), supported open source projects (until they backstabbed them), and used language that corporate wasn’t supposed to use, like “don’t be evil” (until they, infamously and in a true dark comedy move, retracted that motto). The work environment was all colourful, nerdy cool, not a single necktie in sight—this was seen as brave and refreshing rather than cringe and tired, you see. And they made a big deal out of something called “20% time”: Every engineer was promised 1/5 of their work time for themselves, to do anything they want. (Google owners will still own whatever you create during your 20% time, natürlich). Famously, Gmail came out of someone exploring their interests during 20% time.
I don’t think much of anything else came out of it, though.
The tech sector is once again shaken by a resounding scandal. Builder.ai, a company once presented as a rising star of artificial intelligence, promising to revolutionize app development by making it "as simple as ordering a pizza," turned out to be at the heart of a vast deception. For eight years, the company led its clients and investors to believe that its platform was powered by sophisticated AI, when in reality an army of programmers, mainly based in India, was doing the work by hand. The affair led to losses of tens of millions of dollars for investors and to the company's bankruptcy.
The Last of Us Part II is an incredible journey that provides not only one of the most mesmerizing spectacles that we’ve seen from big budget video games, but one that manages to ask difficult questions along the way. It’s clearly coming from an emotionally authentic and self-examining place. The trouble with it, and the reason that Ellie’s journey ultimately feels nonsensical, is that it begins from a place that accepts “intense hate that is universal” as a fact of life, rather than examining where and why this behavior is learned.
Critically, by not asking these questions, and by masking its point of view as being evenhanded, it perpetuates the very cycles of violence it’s supposedly so troubled by.
An Ars Technica article exploring (with plenty of supporting examples) the rather impressive capabilities of Veo 3, the latest generative video AI model unveiled by Google.
A few remarks:
- Of course, one immediately thinks of these tools' potential to produce fake news and thereby contribute to disinformation. But the more corrosive and worrying effect may be indirect: once the general public gets used to believing nothing at all (to avoid being fooled, notably by this kind of video), our collective ability to communicate and inform is fundamentally called into question. In a way it recalls Steve Bannon's infamous strategy for destroying democracy: flood the zone with shit, stripping it of all value and rendering it unusable.
- I still think there is a difference between merely watching this kind of video (it is easy to be impressed by the result) and actually producing one. The few experiments I have done (with other tools of the same kind, admittedly less sophisticated but built on the same foundations) mainly made me realize that when you have a creative intention in mind, it is very hard to steer the algorithm into actually doing what you want. Faced with the somewhat "magical" quality of these conversational interfaces, to which you can give instructions as if you were talking (or writing) to a human, it is easy to forget that language is a highly conceptual tool, built on abstraction, which fundamentally impoverishes the reality it describes. That works very well for communication between humans who share the same visceral, intuitive understanding of the real world, but it is probably not as well suited to a tool you would like to control precisely. An artist in full command of their craft will be incomparably more precise at getting the result they want with a brush than any "prompt engineer" with an AI.
- As Tante writes on Mastodon (source 1 and source 2): generative AI imperceptibly pushes us to lower our standards in order to convince ourselves that what it produces matches what we wanted.
- In other words, AI is not a creative tool comparable to a pencil, a typewriter, or even traditional 3D software. Creating with an AI is closer to asking an artist to create something for us. Someone who claimed "I am a creator because I had to find the right words to ask such-and-such an artist to paint me a picture" would not be taken very seriously...
- The comparison is all the more apt given that AI does in fact have to cannibalize the work of artists (generally without their consent) in order to create anything; AI is thus a kind of interface between the user and all the human artists whose work was used to train it, work that has been blended and anonymized inside an enormous statistical model. When we ask an AI to create an image, we are indirectly asking artists to create it... except that their work was done upstream, that they are neither credited nor paid, and that the result helps devalue their craft.
- However impressive they are, these videos remain very smooth and generic. As the article points out, this follows from the fundamental principle behind this kind of model, whose generative capabilities are conditioned by the material it was trained on. The result will always sit within a certain "average" of the training data.
The social media billionaires want us to believe their putrid platforms are the “new town squares.” That their half-baked social videos and for-you feeds have replaced the agora. And that, somehow, we’re all better for it.
It’s the most overused, meaningless claim on the Internet.
A litany of exploiters and extractors have trotted out that same phrase to excuse the vagaries of their platforms and give themselves a free pass to irresponsibility, hostility, and self-enrichment.
But their apps — virtual panopticons — have more in common with prison yards and shopping malls than public forums.
Climate change is one of the instances, Stiglitz and Stern told me in an email, in which “it is generally agreed there is extreme risk — we know there are some really extreme events that could occur — and we know we cannot pretend (i.e., act as if) we know the probabilities. Nordhaus’s work doesn’t appropriately take into account either extreme risk or deep uncertainty.”
In other words, the economist who has been embraced as a guiding light by the global institution tasked with shepherding humanity through the climate crisis, who has been awarded a Nobel for climate costing, who is widely feted as the doyen of his field, doesn’t know what he’s talking about.
For years now, many AI industry watchers have looked at the quickly growing capabilities of new AI models and mused about exponential performance increases continuing well into the future. Recently, though, some of that AI "scaling law" optimism has been replaced by fears that we may already be hitting a plateau in the capabilities of large language models trained with standard methods.
The verdict is in: OpenAI's newest and most capable traditional AI model, GPT-4.5, is big, expensive, and slow, providing marginally better performance than GPT-4o at 30x the cost for input and 15x the cost for output. The new model seems to prove that longstanding rumors of diminishing returns in training unsupervised-learning LLMs were correct and that the so-called "scaling laws" cited by many for years have possibly met their natural end.
The effects of AI on cognitive development are already being identified in schools across the United States. In a report titled “Generative AI Can Harm Learning,” researchers at the University of Pennsylvania found that students who relied on AI for practice problems performed worse on tests than students who completed assignments without AI assistance. This suggests that the use of AI in academic settings is not just an issue of convenience, but may be contributing to a decline in critical thinking skills.
We recently devoted a short post to the particular case of Alain Bentolila: this former university professor floods the media with a falsehood about young people supposedly "living with 400 words," or 500, or 800, while knowing full well that it is false (post here).
But he is not the only one spreading fanciful figures about vocabulary size, and rather than leaving you to watch wars of unverifiable numbers, we suggest you take matters into your own hands and check for yourselves. It is not that complicated; we give you everything you need here, and within a few minutes you will be able to do it on your own.
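The post lays out its own protocol; as a rough illustration of the general idea (and not necessarily the authors' exact method), the classic back-of-the-envelope estimate is to sample headwords at random from a dictionary word list, count how many you recognize, and scale up. A minimal sketch, assuming a hypothetical headwords.txt file with one dictionary entry per line:

```python
import random

DICTIONARY_FILE = "headwords.txt"  # hypothetical word list, e.g. ~60,000 entries
SAMPLE_SIZE = 100

with open(DICTIONARY_FILE, encoding="utf-8") as f:
    headwords = [line.strip() for line in f if line.strip()]

sample = random.sample(headwords, SAMPLE_SIZE)
known = 0
for word in sample:
    answer = input(f"Do you know '{word}'? [y/n] ")
    known += answer.strip().lower().startswith("y")

# Scale the known fraction in the sample up to the full dictionary.
estimate = known / SAMPLE_SIZE * len(headwords)
print(f"Known in sample: {known}/{SAMPLE_SIZE}")
print(f"Estimated passive vocabulary: about {estimate:,.0f} words")
```

Even an estimate this crude makes it easy to see how implausible the "400 words" figures are.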
If you take just one thing away from this article, I want it to be this: please build your own website. A little home on the independent web.
A reflection of your personality in HTML and CSS (and a little bit of JS, as a treat). This could be a professional portfolio, listing your accomplishments. It might be a blog where you write about things that matter to you. It could even be something very weird and pointless (even better) – I love a good single-joke website. Ultimately, it's your space and you can do whatever you want with it.
TL;DR: I chose to make using AI a manual action, because I felt the slow loss of competence over time when I relied on it, and I recommend that everyone be cautious about making AI a key part of their workflow.
"We're in the very early days looking at this problem from an ecosystem level," Larson told The Register. "It's difficult, and likely impossible, to quantify how many attempted installs are happening because of LLM hallucinations without more transparency from LLM providers. Users of LLM generated code, packages, and information should be double-checking LLM outputs against reality before putting any of that information into operation, otherwise there can be real-world consequences."
[...]
"Even worse, when you Google one of these slop-squatted package names, you’ll often get an AI-generated summary from Google itself confidently praising the package, saying it’s useful, stable, well-maintained. But it’s just parroting the package’s own README, no skepticism, no context. To a developer in a rush, it gives a false sense of legitimacy.
It's not a history you learn about in school—we were whitewashing history long before the current executive orders—but the Klan in the '20s was everywhere. There were millions of Klan members across the country. People joined it like they were joining a golf club or the Elks Lodge. There was a women's auxiliary. There was the Ku Klux Kiddies, for children. Klan rallies were held across the country; thousands would turn up at fairgrounds for the marching bands and cross burnings. In 1925, the Klan even held a march down Pennsylvania Avenue in Washington DC. Tens of thousands strong, crowds were six deep in the streets to watch and cheer. They did it again the next year.
“Much of what I did I now regret,” Bill Burr told The Wall Street Journal recently, admitting that his research into passwords mostly came from a white paper written in the 1980s, well before the web was even invented. “In the end, [the list of guidelines] was probably too complicated for a lot of folks to understand very well, and the truth is, it was barking up the wrong tree.”
Kennedy exploits the language of the "wellness" industry, with its misleading emphasis on "natural" health care and "letting" your body heal itself. What's ironic is that's what vaccines do. Vaccines work by stimulating the body's natural immune response, so that it prevents infection using the body's own resources. All these "treatments" Kennedy touts aren't just ineffective, they're not "natural." They're blitzing a child with often overwhelming amounts of medication, which won't work but could make the kid even sicker.
Fascist movements, formal or otherwise, operate much like cults. When one joins them, they are at first overwhelmed with the appearance of validation and support. This is called "love bombing" and it's meant to endear you to them, to become dependent on them to meet your emotional needs. Then that validation and support is gradually withheld in order to pressure the target into severing ties with the "normie" world. They drive away anyone outside of their hateful little bubble. So their entire network of social support depends on how useful they can be to the cause. It's the only way they can feel part of something anymore because nobody else wants anything to do with them at this point.
That was 24 years ago. On January 15, 2001, Wikipedia was born, into a digital world in which Google itself had only been born in 1998, and the web just a few years before that (say around 1991 to keep things simple, even though its official birth date is closer to March 1989).
Like the Cabrel song, it has indeed had to fight every war to be as strong as it is today. And even now it is the target of a coordinated offensive stretching from the USA to France, an offensive of a violence and reach rarely seen, led by a Musk who dreams of sounding the death knell of the encyclopedia.
For its right wing adherents, the absence of humans is a feature, not a bug, of AI art. Where mechanically-produced art used to draw attention to its artificiality – think the mass-produced modernism of the Bauhaus (which the Nazis repressed and the AfD have condemned), or the music of Kraftwerk – AI art pretends to realism. It can produce art the way right wingers like it: Thomas Kinkade paintings, soulless Dreamworks 3D cartoons, depthless imagery that yields only the reading that its creator intended. And, vitally, it can do so without the need for artists.
Javier Milei, a prodigious user of AI-generated art, wants Argentinians to know that any of them could join the 265,000, mostly young people who have lost jobs as a result of the recession that he induced, to the rapturous praise of economic elites. He wants to signal that anyone can find themselves at the wrong end of his chainsaw, even if doing so means producing laughably bad graphics for the consumption of his 5.9 million deeply uncritical Instagram followers.
What could be more neutral, we are told, than a computer? Wrong: behind their cold verdicts, algorithms and automated systems encapsulate all the biases of the humans who design them. Built on the model of the calculating individual, and heir to a history woven from ideological choices, artificial intelligence is a political machine. Putting it at the service of the common good means first deconstructing it.
This kind of political interference with archives is anything but trivial. History abounds with examples in which the manipulation or destruction of archives served authoritarian regimes. From Nazi Germany to the Soviet Union, by way of China's Cultural Revolution and the Khmer Rouge regime, archives have always been a major stake in struggles over power. Access to free, reliable, contextualized information is a threat to those who seek to rewrite history to their own advantage.
Online disinformation is an integral part of the far right's strategy, whereas left-wing parties, whether radical, green, or social-democratic, resort to the practice less often, according to a recent study by Dutch academics.
It seems that the more people learn about Musk, the less they like him—at least, according to a new Hart Research survey published Wednesday by Groundwork Collective and Public Citizen. The poll asked respondents about how much influence they felt Musk should have in government, explaining aspects of his role in DOGE, his lack of oversight, and his far-reaching access.
By the end of the survey, 63 percent of voters reported having an unfavorable opinion of Musk, an increase of nine points from the beginning of the survey. Meanwhile, only 32 percent of respondents had a favorable opinion, which was down 7 percent from the start, and showed a major negative swing among non-MAGA Republicans.
16 climate activists acquitted on grounds of the "state of necessity": a landmark victory
Imagine a world in which, at the very moment tobacco regulation was being debated in the European Parliament, TV studios, radio stations, and newspapers systematically invited the head of research at Philip Morris, presenting him as a scientific pioneer of the chemistry of new generations of cigarettes. Omitting any mention of his industry hat and his conflicts of interest, journalists would question him about the future of the cigarette, the stakes for the country's economic competitiveness, and the appropriate regulations.
If you follow American politics, you may think of a “pro-business” platform as something that consists of the things that business interests often lobby for: lower taxes, less regulation period, less oversight, less protection for labor, less responsibility of all sorts to anyone other than their own shareholders. Yet all these things that they seek (for the purpose of increasing short term profits) are things that they assume will exist within the context of the basic principles outlined above. Businesses want lower taxes, but they still want well-maintained roads. They want weaker labor protections, but they still want a healthy and well educated workforce. They want less regulation, but they still want transparent laws and functional enforcement. Their short term greed, unwise and distasteful as it may be, is only something they fight for because they assume that the big, fundamental pillars of society and government that allow them to operate freely will always be in place.
The MP3 format, once the gold standard for digital audio files, is now free. The licensing and patents on MP3 encoders have expired, meaning you can now include them in your applications without paying royalties. For software developers and audio enthusiasts, this might seem like a big deal. But, surprisingly, almost no one noticed. Why? Because the world of technology has changed so drastically that MP3's significance has faded into the background.