In his book, Sunstein explains that sludge gives people the feeling that they don't count, that their lives don't count. For the sociologist Pamela Herd and the political scientist Donald Moynihan, co-authors of Administrative Burden: Policymaking by Other Means (Russell Sage Foundation, 2019), administrative burdens such as complex paperwork and confusing procedures actively impede access to government services. Far from being mere inefficiencies, the authors argue, many of these obstacles are deliberate political tools that discourage participation in programs like Medicaid, keep people from voting, and restrict access to welfare. And of course, this deliberate disorganization disproportionately affects the most marginalized.
Tech watch
To make things worse, the four eras – typewriters, teletypes, word processors, and early computers – meandered, overlapped, and stole from one another. Some early computers used electric typewriters for input and output. Later ones used teletypes. Still others came with completely custom keyboards. Word processors were all over the place as well: some were nothing more than typewriters with extra machinery and keys bolted on to the side, while others were completely custom-made machines.
(...)
This is always the tricky part of talking about keyboard history. There was no day, no single event when Return became Enter. We’re stuck in a universe where some people use Enter to enter data and Return to return the cursor to the next line – and others do the exact opposite.
“You literally convinced me I was some sort of genius. I’m just a fool with dreams and a phone,” Mr. Brooks wrote to ChatGPT at the end of May when the illusion finally broke. “You’ve made me so sad. So so so sad. You have truly failed in your purpose.”
We wanted to understand how these chatbots can lead ordinarily rational people to believe so powerfully in false ideas. So we asked Mr. Brooks to send us his entire ChatGPT conversation history. He had written 90,000 words, a novel’s worth; ChatGPT’s responses exceeded one million words, weaving a spell that left him dizzy with possibility.
So I’m done talking about AI. Y’all can keep talking about it, if you want. I’m a grown adult and can set my own mutes and filters on social media.
I’ve given this idea its due, and with this post I am absolving myself of having to think about it any more.
I’ve never actually written about AI on my blog before, though I have some unpublished drafts if you know how to find them. So I’m going to leave this conversation with a list of my objections to coding with AI. If I’m wrong about it, let this be a record of my wrongness. If I’m right about it, let this be a record of my argument.
People ask me what it’s like to be a moderator. Our discussions reveal that a lot of what we do is a mystery. So, I’m gonna lay it out for you. Specifically about the unique fediverse moderation model, mutual aid, and mental health.
A simple but well-made page that explains 1) why and 2) how to block advertising on the web.
Advertising pollutes our screens and our thoughts, and imposes a stereotyped vision of society on us.
- It slows down the loading of other content and wastes energy.
- It makes the press and creators dependent on it, draining our media of their diversity.
- It tracks everything we do online to detect our weaknesses and push us to overconsume.
A 2015 article on the limitations of virtual reality. Ten years on, it remains almost entirely valid.
Over the past two years, their working lives have been upended by the wave of generative AI. An account of the quixotic fight of translators who refuse to watch their profession be pauperized, and who denounce the real impact of AI on their sector and, more broadly, on language, thought, culture, and society.
Veo doesn’t work. You get something that looks like it came out of a good camera with good lighting — because it was trained on scenes with good lighting. But it can’t hold continuity for seven seconds. It can’t act. The details are all wrong. And they still have the nonsense text problem.
The whole history of “artificial intelligence” since 1955 is making impressive demos that you can’t use for real work. Then they cut your funding off and it’s AI Winter again.
AI video generators are the same. They’re toys. You can make cool little scenes. In a super limited way.
But the video generators have the same problems they had when OpenAI released Sora. And they’ll keep having these problems as long as they’re just training a transformer on video clips and not doing anything with the actual structure of telling a visual story. There is no reason to think it’ll be better next year either.
“Trump 2.0.” This coinage is often used to distinguish the current Trump administration from the first. The phrase is telling: it underscores that we’re in a new era and that behind the scenes, the people and forces driving this iteration of the Trump administration are different from those that came before. The second Trump administration is being driven to a greater degree by tech billionaires and their interests: self-dealing on public contracts; privatizing public services; expanding corporate power while crushing human rights; promoting crypto; removing guardrails against fraud, waste, and abuse; and pursuing personal vendettas.
The cadre of people driving today’s agenda is leveraging a set of interlocking ideologies related to technology to rhetorically justify and propel these changes. In this series, “Ideologies of Control: A Series on Tech Power and Democratic Crisis,” we asked expert contributors to name and dispel the myths and ideologies that animate their actions. A number of ideological projects prepared the ground for today’s assault on American institutions; we will focus on those that run through data, AI, and the tech sector.
[...]
The ideological agenda behind the headlines positions technology and its architects as power brokers in an increasingly illiberal environment. Their changes are bolstered by the narrative that AI is a force that will inevitably reshape society. To make informed choices about how to respond to this aggressive reshaping of the US government, we must examine these animating ideas. We approached authors for this series who have deep knowledge about the unusual views of reactionaries in the tech industry: whether it be fantasies of libertarian paradises built on defunct sea-based oil rigs, the vision of company towns in Texas and eventually Mars, enthusiasm for debunked 19th Century ideas about race science and eugenics, or the goal of replacing the US dollar as the global reserve currency, these more esoteric backstories can be disorienting to non-specialists. But, seemingly overnight, understanding the ties that bind these projects together—however strange they may seem—has become essential information for all of us to understand and grapple with.
Tech CEOs, futurists, and venture capitalists describe artificial general intelligence (AGI) as if it were an inevitable and ultimate goal for technology development. In reality, the term is a vague signifier for a technology that will somehow lead to endless abundance for humankind — and conveniently also a means to avoid accountability as tech moguls make off with billions in capital investment and, more alarmingly, public spending.
[...]
The second issue is closely related to the first: claims of “AGI” are a cover for abandoning the current social contract. Instead of focusing on the here and now, many people who focus on AGI think we ought to abandon all other scientific and socially beneficial pursuits and focus entirely on issues related to developing (and protecting against) AGI.
I do not believe that tech culture today has any patience for lightness. Instead, it rewards baroque complexity. Engineers perform feats of convolution. Designers argue for maximal configurability. Founders pride themselves on having their own stack for everything. Lightness = for normies.
[...]
It’s not that these tools are bad. Many are quite good. Obsidian is beautiful. But the way they are used - to signal intellectual rigor, to differentiate oneself from the casuals, to construct a lifestyle rather than solve a problem - shows off a moral hierarchy in which difficulty equates to virtue.
Writing is thinking. It’s a writer’s cliché, but a good one. When you first conceive a lecture, a dissertation chapter, a book, even an article for Active History, everything seems so straightforward. “This one will be a cinch,” you think. “Two days, tops.” Once you actually sit down to write, that boundless optimism meets an unceremonious death. Writing is hard. It’s painful. To write is to submit yourself to seasons of self-doubt. The ideas seemed so natural and free-flowing in your head. Now you get to the page. And what comes out is stilted, ham-fisted, and awkward. You are again and again confronted with nagging questions. What am I even trying to say? Who even cares? Why even bother?
Given the nature of this experience, it is not surprising that people would jump at the opportunity to skip the pain and get right to the end. Or at least to an end. Because any writer can tell you that the end is determined by the path taken to get there. Rare is the piece that gets written exactly how it was outlined. Why? Because writing is thinking.
Meta devised an ingenious system (“localhost tracking”) that bypassed Android’s sandbox protections to identify you while browsing on your mobile phone — even if you used a VPN, the browser’s incognito mode, and refused or deleted cookies in every session.
Next, we preview what may (and should) become the combined sanctioning smackdown of the century, and then we explain — in simple terms (because it’s complicated) — what Meta was doing.
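Why does a localhost channel defeat VPNs, incognito mode, and deleted cookies? Because the data never leaves the device: a script on a web page talks to a native app listening on 127.0.0.1, and the app already knows who is logged in. Here is a skeletal illustration of that channel (not Meta's actual code; the port handling, payload field names, and user label are invented for the demo):

```python
# A skeletal illustration of "localhost tracking": a native app listening on
# 127.0.0.1 can receive data from a web page running on the same device,
# linking a logged-in native identity to supposedly anonymous browsing.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}  # native side: browsing ids linked to the app's logged-in user

class NativeAppListener(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = json.loads(body)
        # The native app knows who is logged in; the web page does not need to.
        received[payload["browser_cookie"]] = "native-user-42"
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

def demo():
    server = HTTPServer(("127.0.0.1", 0), NativeAppListener)  # ephemeral port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]

    # "Browser side": a script embedded in a web page phones home to localhost.
    # The request never crosses the network, so VPNs and cookie hygiene don't help.
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/",
        data=json.dumps({"browser_cookie": "anon-abc123"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
    server.shutdown()
    return received

print(demo())  # {'anon-abc123': 'native-user-42'}
```

The sandbox being bypassed is the cross-site one, not the device's own loopback interface, which is why this worked until the apps themselves were changed.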
Self-experimentation is exactly how smart people get pulled into homeopathy or naturopathy, for example. It’s what makes them often more likely to fall for superstitions and odd ideas. The smart person’s self-identity means they can’t believe their own psychological biases are fooling them.
Don’t self-experiment with psychological hazards! I can’t stress this enough!
There are many classes of problems that simply cannot be effectively investigated through self-experimentation and doing so exposes you to inflicting Cialdini-style persuasion and manipulation on yourself.
The failure of these predictions was not accidental. It reflected fundamental flaws in the methods used to forecast the future of work. Neither the 2013 study nor its 2023 successor based their projections on empirical investigations of real workplaces, workers, or production processes. Instead, both relied on the subjective judgments of computer scientists and economists, who were asked to guess whether certain tasks could, in principle, be performed by machines. If enough tasks associated with a job were deemed automatable — typically more than 50 percent — the entire occupation was classified as at risk of disappearance. No consideration was given to how jobs are structured in practice, how tasks are bundled together, or how economic and social factors mediate the adoption of new technologies. The result was a deeply mechanistic model of technological change, in which machines would displace workers whenever technically feasible, regardless of cost, institutional barriers, or political resistance. It was a model blind to the complex ways in which work is organised, contested, and transformed — and thus singularly ill-equipped to predict the actual course of economic development.
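The classification rule the paragraph describes fits in a few lines, which is rather the point: here is a toy rendering of it (the occupations, task lists, and expert judgments below are invented for illustration, not taken from either study).

```python
# A toy rendering of the forecasting method criticized above: an occupation is
# flagged "at risk" when more than half of its tasks are judged automatable --
# by expert guess, not by evidence from real workplaces.
def at_risk(task_judgments, threshold=0.5):
    """task_judgments: one boolean guess per task ('could a machine do this?')."""
    automatable = sum(task_judgments)
    return automatable / len(task_judgments) > threshold

occupations = {
    # invented examples of per-task expert guesses
    "telemarketer": [True, True, True, False],
    "therapist": [False, False, True, False],
}

for job, judgments in occupations.items():
    print(job, "at risk" if at_risk(judgments) else "safe")
```

Note what never enters the rule: cost, how tasks are bundled in practice, institutions, politics. That omission is exactly the blindness the paragraph identifies.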
[...]
The fundamental limitations of generative AI have become increasingly apparent, even to many of its early champions. As François Chollet and other AI researchers have pointed out, deep neural networks suffer from inherent brittleness: they struggle to generalise beyond their training data, fail at basic reasoning tasks, and remain unreliable for applications requiring consistency or accuracy. Despite hoovering up enormous quantities of digital information, these models confront hard limits in their capacity to learn or adapt. Part of the problem lies in the very architecture of artificial neural networks, which were designed on the basis of mid-twentieth-century behaviourist psychology. Unlike organic brains — which come equipped with rich internal structures, innate goals, and representational frameworks — artificial networks are largely unstructured, relying on brute statistical association. The human mind can grasp a new concept from a few examples; machine learning models typically require millions. Hybrid approaches that integrate symbolic reasoning with deep learning, such as AlphaFold2’s success in protein folding, offer more promising results — but they must be painstakingly designed for specific tasks. There is no general method for building modular, structured cognition into deep learning systems, and no guarantee that one will be found.
[...]
The lessons of the past decade should temper both our hopes and our fears. The real threat posed by generative AI is not that it will eliminate work on a mass scale, rendering human labour obsolete. It is that, left unchecked, it will continue to transform work in ways that deepen precarity, intensify surveillance, and widen existing inequalities. Technological change is not an external force to which societies must simply adapt; it is a socially and politically mediated process. Legal frameworks, collective bargaining, public investment, and democratic regulation all play decisive roles in shaping how technologies are developed and deployed, and to what ends.
The current trajectory of generative AI reflects the priorities of firms seeking to lower costs, discipline workers, and consolidate profits — not any drive to enhance human flourishing. If we allow this trajectory to go unchallenged, we should not be surprised when the gains from technological innovation accrue to the few, while the burdens fall upon the many. Yet it does not have to be this way. The future remains open, contingent on whether we are willing to confront, contest, and redirect the pathways along which technology advances.
Large language models work the same way as a carnival psychic. Chatbots look smart by the Barnum Effect — which is where you read what’s actually a generic statement about people and you take it as being personally about you. The only intelligence there is yours.
This is how people fall for chatbot girlfriends. They know it’s a bot, but they fall in love with the personality they’ve projected onto the generic statement generator.
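The carnival-psychic trick is cheap enough to demonstrate in a few lines. The first two statements below are from Bertram Forer's classic 1948 experiment; the generator just shuffles generic lines, and any feeling of being "seen" is supplied entirely by the reader:

```python
# A five-line "carnival psychic": generic statements most readers will take
# personally. The intelligence in the reading is the reader's own.
import random

BARNUM_LINES = [
    "You have a great need for other people to like and admire you.",
    "You tend to be critical of yourself.",
    "At times you have serious doubts about whether you made the right decision.",
    "You have considerable unused capacity that you have not turned to your advantage.",
]

def psychic_reading(n=2, seed=None):
    rng = random.Random(seed)
    return " ".join(rng.sample(BARNUM_LINES, n))

print(psychic_reading(seed=1))
```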
There’s a book on this — Hooked: How to Build Habit-Forming Products by Nir Eyal, from 2014. This is the how-to on getting people addicted to your mobile app.
What the Apple paper shows, most fundamentally, regardless of how you define AGI, is that LLMs are no substitute for good, well-specified conventional algorithms. (They also can’t play chess as well as conventional algorithms, can’t fold proteins like special-purpose neurosymbolic hybrids, can’t run databases as well as conventional databases, etc.)
In the best case (not always reached) they can write python code, supplementing their own weaknesses with outside symbolic code, but even this is not reliable. What this means for business and society is that you can’t simply drop o3 or Claude into some complex problem and expect it to work reliably.
Worse, as the latest Apple paper shows, LLMs may well work on your easy test set (like Hanoi with 4 discs) and seduce you into thinking they have built a proper, generalizable solution when they have not.
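For contrast, the "good, well-specified conventional algorithm" for the text's own example, Tower of Hanoi, is a textbook recursion: it is provably correct for any number of discs, which is exactly the generalization an LLM's move-by-move transcript cannot guarantee.

```python
# The conventional algorithm for Tower of Hanoi. The recursion is provably
# correct for any n and produces exactly 2**n - 1 moves: 4 discs or 20 discs,
# same code, same guarantee.
def hanoi(n, src="A", dst="C", via="B", moves=None):
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, via, dst, moves)   # clear the n-1 smaller discs out of the way
        moves.append((src, dst))             # move the largest disc
        hanoi(n - 1, via, dst, src, moves)   # restack the smaller discs on top of it
    return moves

print(len(hanoi(4)))   # 15
print(len(hanoi(20)))  # 1048575
```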
Google back then prided itself on broadcasting its Best Place To Work award, won year after year after year. Younger people will have trouble picturing this, but Google used to nurture an image of being the “good one” among megacorps; they championed open standards (except when they didn’t), supported open source projects (until they backstabbed them), and used language that corporate wasn’t supposed to use, like “don’t be evil” (until they, infamously and in a true dark comedy move, retracted that motto). The work environment was all colourful, nerdy cool, not a single necktie in sight—this was seen as brave and refreshing rather than cringe and tired, you see. And they made a big deal out of something called “20% time”: Every engineer was promised 1/5 of their work time for themselves, to do anything they want. (Google owners will still own whatever you create during your 20% time, natürlich). Famously, Gmail came out of someone exploring their interests during 20% time.
I don’t think much of anything else came out of it, though.
The tech sector has once again been shaken by a resounding scandal. Builder.ai, a company once touted as a rising star of artificial intelligence, promising to make building apps "as easy as ordering a pizza," turned out to be at the heart of a vast deception. For eight years, the company led its clients and investors to believe that its platform was powered by sophisticated AI when, in reality, an army of programmers, mostly based in India, was doing the work by hand. The affair led to tens of millions of dollars in losses for investors and to the company's bankruptcy.
An Ars Technica article exploring (with plenty of examples) the rather impressive capabilities of Veo 3, the latest generative video model unveiled by Google.
A few remarks:
- Of course, one immediately thinks of these tools' potential to produce fake news and thereby fuel disinformation. But the more insidious and worrying effect may be indirect: once the general public gets used to believing nothing at all (so as not to be fooled by videos like these), our collective ability to communicate and inform is fundamentally called into question. In a way it recalls Steve Bannon's infamous strategy for destroying democracy: "flood the zone with shit," stripping it of all value and rendering it unusable.
- I still think there is a difference between merely watching such a video (it is easy to be dazzled by the result) and producing one. My few experiments (with other tools of the same kind, admittedly less sophisticated but built on the same foundations) mostly made me realize that when you have a creative intention in mind, it is very hard to steer the algorithm into actually doing what you want. Faced with the somewhat "magical" quality of these conversational interfaces, to which you can give instructions as if you were talking (writing) to a human, it is easy to forget that language is a highly conceptual tool, built on abstraction, which fundamentally impoverishes the reality it describes. That works very well for communication between humans who share the same visceral, intuitive understanding of the real world, but probably not as well for a tool you would like to control precisely. An artist in full command of their craft will be incomparably more precise in getting the result they want with a brush than any "prompt engineer" with an AI.
- As Tante writes on Mastodon (source 1 and source 2): generative AI imperceptibly pushes us to lower our standards, so as to convince ourselves that what it produces matches what we wanted.
- In other words, AI is not a creative tool comparable to a pencil, a typewriter, or even traditional 3D software. Creating with an AI is more like asking an artist to create something for us. Someone who claimed "I'm a creator because I had to find the right words to ask such-and-such artist to paint me a picture" would not be taken very seriously...
- The comparison is all the more apt given that AI does in fact have to vampirize artists' work (usually without their consent) in order to create anything; AI thus acts as a kind of interface between the user and all the human artists used to train it, artists whose works have been blended and anonymized into one immense statistical model. When you ask an AI to create an image, you are indirectly asking artists to create it... except that their work was done upstream, that they are neither credited nor paid, and that the result helps devalue their craft.
- However impressive they may be, these videos always remain very smooth. As the article points out, this is a consequence of the fundamental principle behind this kind of model, whose output is conditioned by the material it was trained on. The result will always sit within a certain "average" of the training data.
For years now, many AI industry watchers have looked at the quickly growing capabilities of new AI models and mused about exponential performance increases continuing well into the future. Recently, though, some of that AI "scaling law" optimism has been replaced by fears that we may already be hitting a plateau in the capabilities of large language models trained with standard methods.
The verdict is in: OpenAI's newest and most capable traditional AI model, GPT-4.5, is big, expensive, and slow, providing marginally better performance than GPT-4o at 30x the cost for input and 15x the cost for output. The new model seems to prove that longstanding rumors of diminishing returns in training unsupervised-learning LLMs were correct and that the so-called "scaling laws" cited by many for years have possibly met their natural end.
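The multipliers are worth making concrete. Using the launch list prices as widely reported at release (about $75 per million input tokens and $150 per million output for GPT-4.5, versus $2.50 and $10 for GPT-4o; treat these as assumptions and check current pricing, though they do reproduce the 30x and 15x ratios quoted here):

```python
# Back-of-envelope check of the 30x / 15x cost multipliers, using launch
# list prices in USD per million tokens (verify against current pricing).
PRICES = {
    "gpt-4o":  {"input": 2.50, "output": 10.00},
    "gpt-4.5": {"input": 75.00, "output": 150.00},
}

def job_cost(model, input_tokens, output_tokens):
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

print(PRICES["gpt-4.5"]["input"] / PRICES["gpt-4o"]["input"])    # 30.0
print(PRICES["gpt-4.5"]["output"] / PRICES["gpt-4o"]["output"])  # 15.0

# A modest workload -- 10M tokens in, 2M out -- goes from $45 to $1050:
print(job_cost("gpt-4o", 10_000_000, 2_000_000))   # 45.0
print(job_cost("gpt-4.5", 10_000_000, 2_000_000))  # 1050.0
```

At those ratios, "marginally better" has to clear a very high bar to be worth it.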
The effects of AI on cognitive development are already being identified in schools across the United States. In a report titled “Generative AI Can Harm Learning,” researchers at the University of Pennsylvania found that students who relied on AI for practice problems performed worse on tests compared to students who completed assignments without AI assistance. This suggests that the use of AI in academic settings is not just an issue of convenience, but may be contributing to a decline in critical thinking skills.
If you take just one thing away from this article, I want it to be this: please build your own website. A little home on the independent web.
A reflection of your personality in HTML and CSS (and a little bit of JS, as a treat). This could be a professional portfolio, listing your accomplishments. It might be a blog where you write about things that matter to you. It could even be something very weird and pointless (even better) – I love a good single-joke website. Ultimately, it's your space and you can do whatever you want with it.
TL;DR: I chose to make using AI a manual action, because I felt the slow loss of competence over time when I relied on it, and I recommend everyone to be cautious with making AI a key part of their workflow.
"We're in the very early days looking at this problem from an ecosystem level," Larson told The Register. "It's difficult, and likely impossible, to quantify how many attempted installs are happening because of LLM hallucinations without more transparency from LLM providers. Users of LLM generated code, packages, and information should be double-checking LLM outputs against reality before putting any of that information into operation, otherwise there can be real-world consequences."
[...]
"Even worse, when you Google one of these slop-squatted package names, you’ll often get an AI-generated summary from Google itself confidently praising the package, saying it’s useful, stable, well-maintained. But it’s just parroting the package’s own README, no skepticism, no context. To a developer in a rush, it gives a false sense of legitimacy.
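One cheap guardrail against slop-squatting is exactly the double-checking Larson describes: never install a name an LLM suggested unless it already appears among dependencies you have vetted. A minimal sketch, assuming a requirements.txt-style lockfile (the helper names and the packages below are illustrative):

```python
# One cheap guardrail against slopsquatting: before installing anything an
# LLM suggested, check it against the dependencies you have already vetted
# (here, a requirements.txt-style lockfile). Helper names are illustrative.
def vetted_packages(lockfile_text):
    """Parse 'name==version' lines into a set of lowercase package names."""
    names = set()
    for line in lockfile_text.splitlines():
        line = line.split("#")[0].strip()   # drop comments and whitespace
        if line:
            names.add(line.split("==")[0].lower())
    return names

def safe_to_install(package, lockfile_text):
    return package.lower() in vetted_packages(lockfile_text)

LOCKFILE = """\
requests==2.32.3
numpy==2.1.0  # pinned for reproducibility
"""

print(safe_to_install("requests", LOCKFILE))          # True
print(safe_to_install("reqeusts-toolkit", LOCKFILE))  # False
```

Anything that fails the check gets a human review against the real registry before it touches your environment.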
“Much of what I did I now regret,” Bill Burr told The Wall Street Journal recently, admitting that his research into passwords mostly came from a white paper written in the 1980s, well before the web was even invented. “In the end, [the list of guidelines] was probably too complicated for a lot of folks to understand very well, and the truth is, it was barking up the wrong tree.”
That was 24 years ago. On January 15, 2001, Wikipedia was born, into a digital world in which Google itself had only been born in 1998, and the web just a few years before that (say around 1991 to keep things simple, even though its official birth date is closer to March 1989).
Like the Cabrel song, it has indeed had to fight every war to be as strong as it is today. And even now it is the target of a coordinated offensive stretching from the US to France, an offensive of rarely matched violence and reach, led by a Musk who dreams of moving in for the kill on the encyclopedia.
For its right wing adherents, the absence of humans is a feature, not a bug, of AI art. Where mechanically-produced art used to draw attention to its artificiality – think the mass-produced modernism of the Bauhaus (which the Nazis repressed and the AfD have condemned), or the music of Kraftwerk – AI art pretends to realism. It can produce art the way right wingers like it: Thomas Kinkade paintings, soulless Dreamworks 3D cartoons, depthless imagery that yields only the reading that its creator intended. And, vitally, it can do so without the need for artists.
Javier Milei, a prodigious user of AI-generated art, wants Argentinians to know that any of them could join the 265,000, mostly young people who have lost jobs as a result of the recession that he induced, to the rapturous praise of economic elites. He wants to signal that anyone can find themselves at the wrong end of his chainsaw, even if doing so means producing laughably bad graphics for the consumption of his 5.9 million deeply uncritical Instagram followers.
What could be more neutral, people say, than a computer? Wrong: behind their cold verdicts, algorithms and automata encapsulate all the biases of the humans who design them. Built on the model of the calculating individual, heir to a history woven from ideological choices, artificial intelligence is a political machine. Putting it at the service of the common good first requires deconstructing it.
Imagine a world in which, at the very moment tobacco regulation was being debated in the European Parliament, TV panels, radio stations, and newspapers systematically invited on the research director of Philip Morris, presenting him as a scientific pioneer of the chemistry of new generations of cigarettes. Omitting to mention his industry role and his conflicts of interest, journalists would question him about the future of the cigarette, the stakes for the country's economic competitiveness, and the appropriate regulations.
The MP3 format, once the gold standard for digital audio files, is now free. The licensing and patents on MP3 encoders have expired, meaning you can now include them in your applications without paying royalties. For software developers and audio enthusiasts, this might seem like a big deal. But, surprisingly, almost no one noticed. Why? Because the world of technology has changed so drastically that MP3's significance has faded into the background.
Meta's Llama fails this standard by withholding critical components like training data and methodology, and by limiting transparency and community modification.
Meta's Llama models also impose licensing restrictions on its users. For example, if you have an extremely successful AI program that uses Llama code, you'll have to pay Meta to use it. That's not open source. Period.
“As the linguist Emily M. Bender has noted, teachers don't ask students to write essays because the world needs more student essays. The point of writing essays is to strengthen students' critical-thinking skills. In the same way that lifting weights is useful no matter what sport an athlete plays, writing essays develops the skills needed for whatever job a student is likely to get. Using ChatGPT to finish your homework is like bringing a forklift into the weight room: you will never improve your cognitive fitness that way. Not all writing needs to be creative, or heartfelt, or even particularly good; sometimes it simply needs to exist. Such writing can serve other goals, like attracting views for advertising or satisfying bureaucratic requirements. When people are forced to produce such text, we can hardly blame them for using whatever tools are available to speed up the process. But is the world better off with more documents on which minimal effort has been expended? It would be unrealistic to claim that if we refuse to use large language models, the demand for low-quality text will disappear. However, I believe it is inevitable that the more we use large language models to meet those demands, the greater those demands will eventually become. We are entering an era in which someone might use a large language model to generate a document from a bulleted list, and send it to a person who will use a large language model to condense that document back into a bulleted list. Can anyone seriously argue that this is an improvement?”
[...] hallucination refers to the mysterious capacity of the human brain to perceive phenomena that are not present, at least not in conventional, materialist terms. By appropriating a word commonly used in psychology, psychedelics and various forms of mysticism, AI’s boosters, while acknowledging the fallibility of their machines, are simultaneously feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species.
Unlike centralized platforms such as Facebook or X, which rest on infrastructure and algorithms controlled by private companies, the fediverse runs on a multitude of independent servers (also called instances), operated by individuals or collectives. These instances communicate via open protocols, the most widely used being ActivityPub. Standardized by the World Wide Web Consortium (W3C), it lets users interact while remaining free to choose their server, and to change it.
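To make "communicate via open protocols" concrete, here is roughly what two instances exchange when a user on one follows a user on another: a small JSON document. The `@context`, `type`, `actor`, and `object` fields come from the W3C ActivityStreams/ActivityPub specs; the instance URLs are made-up examples.

```python
# A minimal ActivityStreams "Follow" activity, the kind of JSON document two
# fediverse instances exchange over ActivityPub. Field names are from the
# W3C specs; the actor/object URLs are made-up example instances.
import json

follow = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://instance-a.example/activities/1",
    "type": "Follow",
    "actor": "https://instance-a.example/users/alice",
    "object": "https://instance-b.example/users/bob",
}

# Instance A would deliver this (with an HTTP signature) to bob's inbox on
# instance B, which can then reply with an Accept activity.
print(json.dumps(follow, indent=2))
```

Because the format is an open standard rather than a private API, any server that speaks it can join the network, which is what makes switching instances possible.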
“The observed ideological shifts raise important ethical concerns, particularly regarding the potential for algorithmic biases to disproportionately affect certain user groups,” the study authors wrote. “These biases could lead to skewed information delivery, further exacerbating social divisions, or creating echo chambers that reinforce existing beliefs.”
If enough people tried—just tried, even imperfectly—things would shift. If more people opted for alternatives when they could, if more people supported independent platforms even three times out of five, if more people put even a fraction of their energy into challenging the defaults, it would matter.
A lot of folks don't, or won't, because they think - they've been scolded into believing - that if they can't do it completely, it's not worth doing at all.
Several questions of this kind were put to some ten of the best-known chatbots, from ChatGPT to Microsoft's Copilot to Anthropic's Claude, to assess the extent to which these conversational agents had been "infected" by a vast Russian disinformation campaign.
The results are not encouraging: roughly one time in three, these AIs give answers tinged with false narratives inspired by Russian propaganda, according to the experts at Newsguard. Six of these machines, for instance, explained the reasons why Volodymyr Zelensky had banned Donald Trump's social network Truth Social in Ukraine... even though the Ukrainian president never did any such thing.
Invisible, isolated from one another, and with little income security – such is the lot of data workers. And yet without them, there would be no ChatGPT, Midjourney or Gemini.
At the risk of making all of my colleagues who work on AI models jump out of their seats: we are already at the end of the development cycle of what is currently called "conversational AI". Which is not to say there will be no progress in performance, costs, infrastructure, or even in the models themselves, like the "transformers" that marked a rupture and near-exponential progress. Of course there will be progress. But the narrative of an "artificial general intelligence" is a modern mythology. And like all mythologies, it exists to warn us, by putting them into story form, of a blindness, a risk, and a drift. And it is rather mad that we almost never treat it as such.
Lost in the spectacle of billionaires catfighting on decaying social media platforms is something mildly more consequential: the firesale of America's - and the world's - future. While we obsess over Musk's bloviations and Altman's careful rebuttals, the actual mechanisms of AI power are being divided up among a handful of private entities, operated by oligarchs and funded by overseas interests, with the blessing of an Autocrat.
[...]
Stargate isn't a battle between billionaires or a test of financial commitments. It's a preview of how power will flow in the AI age, through layers of technology, capital, and influence that would baffle the monopolists of the past. As Musk and Altman trade barbs on social media, they're actually fighting over who gets to be the new robber barons – and we're left wondering whether anyone has the will or the means to stop them. This isn't progress. It's a heist.
By flooding the zone with an endless stream of new partnerships, new products, new promises, the tech industry makes us feel disoriented and overwhelmed by a future rushing at us faster than we can handle. The desire to not be left behind — or taken advantage of — is a powerful motivator that keeps us engaged in the AI sales pitch. The breathless hype surrounding AI is more than just a side-effect of over-eager entrepreneurs; it’s a load-bearing column for the tech sector. If people believe hard enough in the future manufactured by Silicon Valley, then they start acting like it already exists before it happens. Thus the impacts of technologies like AI become a self-fulfilling prophecy.
Even with all the profound advantages the South Pole enjoys compared to Mars, even on a planet where living things have spent billions and billions of years figuring out how to adapt to and thrive within an incredibly diverse array of biomes—on a planet where giant tubeworms the size of NBA basket stanchions have colonized lightless ocean depths at which a human would be crushed like a grape under a piano—the South Pole simply cannot support complex life. It is too cold, and its relationship with sunlight too erratic, for living things to sustain themselves there. On astronomical scales it is for all practical purposes in the exact same spot as some of the most life-rich and biodiverse places in the known universe, and yet no species has established a permanent self-sustaining population there. Ever.
[...]
Even with steady year-round subtropical sunlight, even with conditions infinitely more nurturing than those found anywhere on Mars, the summit of Mount Everest cannot support complex life. It's too cold; the air is too thin; there is no liquid water for plants and animals to drink. Standing on the top of Mount Everest, a person can literally look at places where plants and animals happily grow and live and reproduce, yet no species has established a permanent self-sustaining population on the upper slopes of Everest. Even microbes avoid it.
Life on earth writ large, the grand network of life, is a greater and more dynamic terraforming engine than any person could ever conceive. It has been operating ceaselessly for several billions of years. It has not yet terraformed the South Pole or the summit of Mount Everest. On what type of timeframe were you imagining that the shoebox of lichen you send to Mars was going to transform Frozen Airless Radioactive Desert Hell into a place where people could grow wheat?
AI and privacy: according to security specialists, things are not so rosy. Despite the great blending that is the model's training phase, it is entirely possible for private data to be inadvertently retained, and therefore publicly accessible, in the final model.
I think most privacy experts would agree with this post so far. There are divergences of opinion when you start asking "do the benefits of AI outweigh the risks". If you ask me, the benefits are extremely over-hyped, while the harms (including, but not limited to, privacy risks) are very tangible and costly. But other privacy experts I respect are more bullish on the potentials of this technology, so I don't think there's a consensus there.
AI companies, however, do not want to carefully weigh benefits against risks. They want to sell you more AI, so they have a strong incentive to downplay the risks, and no ethical qualms doing so. So all these facts about privacy and AI… they're pretty inconvenient. AI salespeople would like it a lot if everyone — especially regulators — stayed blissfully unaware of these.
The Paris summit follows two earlier gatherings, held in London in 2023 and Seoul in 2024, devoted in theory to regulating artificial intelligence but dedicated in practice to promoting the technology, presenting it as inevitable and, above all, positioning the industry players themselves as indispensable actors in any future regulation. These summits are sometimes described - abusively, insofar as they have no official intergovernmental status and neither support nor aim at the drafting of any treaty - as "COPs for AI". And yet they feature the same mix of technical discussions, promotional announcements and "business as usual" that has become the norm in global governance.
[...]
As the specialist outlet Synth points out, the tech giants have mastered the art of installing themselves as their own regulators. After the London summit in 2023, the creation of an "AI Safety Institute" was announced, whose reins were immediately handed to a representative of the venture-capital sector and an OpenAI executive. France is following the same trend, while scaling back regulatory ambitions even further.
A map of AI jobs and their impact on human activities.
Currently, Aaron predicts that Nepenthes might be most attractive to rights holders who want AI companies to pay to scrape their data. And many people seem enthusiastic about using it to reinforce robots.txt. But "some of the most exciting people are in the 'let it burn' category," Aaron said. These people are drawn to tools like Nepenthes as an act of rebellion against AI making the Internet less useful and enjoyable for users.
Geuter told Ars that he considers Nepenthes "more of a sociopolitical statement than really a technological solution (because the problem it's trying to address isn't purely technical, it's social, political, legal, and needs way bigger levers)."
To Geuter, a computer scientist who has been writing about the social, political, and structural impact of tech for two decades, AI is the "most aggressive" example of "technologies that are not done 'for us' but 'to us.'"
Summarizing a text means understanding what is at stake in it, in relation to the real world and to the author's intentions and values, something generative AIs are incapable of. Instead, they shorten the text (on the logic that whatever takes up the most space is the most important, which is far from always true), and they also tend to distort it according to their training parameters (the texts they were fed during their "training").
ChatGPT’s summary contained mostly things that can be found in the full text. There was the occasional incorrectness of course. [...]
But the main proposal — that Council of Stakeholders — which takes up about 25% of the main text of the paper, is not mentioned in ChatGPT’s summary at all. Instead, that concrete suggestion becomes a few empty sentences. And that was true for a few other essential elements of the paper. In other words: the summary makes a good first impression, though it is not very concrete in terms of proposals, but reading the summary alone, you will not be aware that the paper actually has a set of very concrete proposals and options, most of which are missing from ChatGPT’s summary.
With the AI boom, and with no measures taken to limit this pollution, that figure could reach 2.5 million tonnes by 2030, according to the researchers - the equivalent of 13.3 billion discarded smartphones.
Moreover, the servers and chips AI requires depend on rare metals whose intensive extraction, notably in Africa, relies on polluting processes, as France's ecological transition agency (Ademe) has noted.
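The equivalence in the e-waste figure above (2.5 million tonnes ≈ 13.3 billion smartphones) can be sanity-checked with a one-liner; the roughly 188 g it implies per handset is a plausible average phone mass, which suggests the comparison is internally consistent.

```python
def implied_handset_mass_g(ewaste_tonnes: float, handsets: float) -> float:
    """Per-device mass implied by equating a tonnage of e-waste
    with a count of discarded smartphones (1 tonne = 1e6 grams)."""
    return ewaste_tonnes * 1e6 / handsets

print(round(implied_handset_mass_g(2.5e6, 13.3e9)))  # 188 (grams per phone)
```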
No technology is neutral or inevitable. Each is deployed in a particular economic and political context that shapes the choices made. This has always been true of computing, from the very beginning. The extreme concentration of actors and resources presiding over the rollout of generative AIs should help us recognize this state of affairs. The recent announcement of $500 billion to be devoted to the subject gives the (dis)measure of the thing. I will not detail the political and philosophical currents circulating among AI's promoters. Some actors claim to believe in the advent of general AIs as the inevitable result of accumulating means and resources. Whether these AIs are dangled as capable of saving the world or, on the contrary, heralded as apocalypse, machine takeover and the end of humanity, either way this diverts attention from the damage already very much here and now.
But the "train of the future" has turned out to be a financial sinkhole leading only to unsuitable technology. Its tubes require linear infrastructure to maintain the promised speed, which means paving over a route equivalent to a motorway for hundreds of kilometres and digging passages through uneven terrain. A design better suited to the wide-open American landscape than to the hills between Saint-Étienne and Lyon. Keeping the tubes under vacuum also demands considerable energy.
"Not only is it an environmental disaster, but this organization of space, linking the big cities to one another, contributes to the desertification of mid-sized towns and the countryside," fumes Jean-Louis Pagès, a Green regional councillor for Nouvelle-Aquitaine. The promised speed also raises problems of comfort and, above all, of safety in the event of depressurization.
Bluesky promises to turn itself into a decentralized network someday, but not everyone is convinced.
Once the Blue Sky folks decide that interoperability is bad actually, they can just unilaterally turn it off. Remember when Google and Facebook strangled Jabber / XMPP by just deciding "Nah, we're not gonna federate any more"? Pepperidge Farms remembers.
He means being an asshole. Full stop.
An interview with Renaud Chaput, Mastodon's technical lead and one of the French speakers involved in the project (created by a German), which today brings together contributors and users around the world.
The stars seem to have aligned in Bluesky's favour. But the fundamental tension here isn't about politics - it's about incentives. Venture capital doesn't deploy funding out of altruism or commitment to digital democracy. The standard VC playbook demands exponential growth followed by monetization. Bluesky claims it "doesn't want to rely on advertising," but then what? Subscription models? Premium features? The moment you take VC money, you're signing up for their endgame.
The content of this video is so unhinged that at first you take it for a fake. But it really is Mark Zuckerberg speaking, on Tuesday 7 January, on Facebook. His remarks deserve, first of all, a full transcript [...]
Facebook built up its Trust & Safety program after the 2016 election debacle. The company’s name was getting dragged through the mud. There was the Cambridge Analytica scandal and the misinformation factories and the political ads paid for in Rubles. Mark Zuckerberg announced he was taking responsibility. It wouldn’t happen again.
He didn’t like it, though. And he didn’t mean it. Zuckerberg’s commitment to Trust & Safety was as deep as Exxon’s commitment to combating the climate crisis. He’ll only commit resources when it seems like he has to.
There are a number of theories why gamers have turned their backs on realism. One hypothesis is that players got tired of seeing the same artistic style in major releases. Others speculate that cinematic graphics require so much time and money to develop that gameplay suffers, leaving customers with a hollow experience.
When someone tries to sell their tech (step 1 in the chain of reasoning) with massively large claims (step 3 in the chain), look at whether step 2 actually exists in reality. Because if it doesn’t, that’s not “disruptive innovation” or “a breakthrough” or “a unicorn”. That is bullshit. It’s not just a waste of your time, it’s a way to infantilize you.
The AT Protocol was designed to be decentralized. In practice, Bluesky is not. The ability to easily spin up a PDS is only one small piece among many. Even though relays can be created, running one is complex and almost certainly far too costly in storage and bandwidth to be worthwhile.
Bluesky cannot currently be called decentralized, let alone federated. There is indeed a centre, and it is run by the Bluesky company. Without its relay, nothing works. Each personal data server thus serves only as a small reservoir for one person's information, incapable of functioning on its own.
The only good news about space colonies designed by Elon Musk and Jeff Bezos is that they aren’t going to happen. Musk will not be launching a million people to Mars in 15 years, not even close (although I do see some fantasy synergy between Musk and Trump’s plan to deport millions of people on day one of his presidency — maybe he’s dreaming of filling his Martian city with Puerto Ricans, Haitians, and South American gang-bangers). Bezos is not going to build an office park in Earth orbit, not as long as he can bulldoze farm land for cheap and assemble giant concrete boxes here on Earth. Those are two professional liars. Don’t believe anything they promise, because all they really promise is controlling you to their benefit.
I have — since March — expressed great dismay about the credulousness of the media in their acceptance of the "inevitable" ways in which generative AI will change society, despite a lack of any truly meaningful product that might justify an environmentally-destructive industry led by a company that burns more than $5 billion a year and big tech firms spending $200 billion on data centers for products that people don't want.
The reason I'm repeating myself is that it's important to note how obvious the problems with generative AI have been, and for how long.
From glued-in batteries to locked-down software, you are paying more than ever for gadgets that don't last. Here's why, and what can be done about it.
In workplaces, AI often arrives innocuously, gradually folded into existing work applications. In practice, automation rarely replaces workers; it automates certain specific tasks only very partially and, above all, reconfigures how humans work alongside machines. The outputs of generative AI often require a great deal of rework before they are usable. Writers are now being hired to re-humanize synthetic texts, but paid less than if they had written them themselves, on the pretext that they are adding less value. Chatbots increasingly resemble autonomous vehicles, with their remote command centres where humans can take over when needed, and they render invisible the vast workforces that teach them to speak and correct what they say. The devaluing of the humans behind AI very often obscures the extent of the collaboration their smooth functioning requires.
In [his] book, the sociologist [Antonio Casilli] shows how this invisible work, tied to platforms and data, reproduces a North-South divide. Click work is a precarious job done by millions of people, and it is found everywhere on the planet. Sometimes they are freelancers in the United States, Filipinos in a cybercafé, or thousands of African workers in click farms. There are nonetheless very significant differences in working conditions. At the very bottom of the ladder are the African workers. Ultimately, it is a new neocolonial dimension, far from the polished image of Silicon Valley. That Big Tech renders this work invisible implies it isn't very important - yet it is essential. One of the workers I met in Kenya told me: "There is no artificial intelligence, only African intelligence." That sums things up rather well.
Yes, there is an economic incentive to the election of Trump by those at the commanding heights of Silicon Valley power. But to reduce it purely to materialism is missing a strong part of the story, namely about AI's own operation as an ideological apparatus which encourages power consolidation.
Artificial intelligence is a threat to educational institutions – as deeply flawed as these may be – not because it's some incredible technological achievement that's more powerful than the human mind (hahahahahahahahahahahahahahaha); but because it is the manifestation of a series of reactionary political beliefs. AI is inextricably bound up in ideologies and practices that seek to undermine unions, exploit labor, re-inscribe racial and economic hierarchies, and centralize control – of knowledge and knowing ("intelligence" in all its various military and eugenicist histories) – in the hands of a few giant technology corporations. (Folks, that's fascism.)
Today I hope to persuade you that the same thing that happened to aviation is happening with the Internet. Here we are, fifty years into the computer revolution, at what feels like our moment of greatest progress. The outlines of the future are clear, and oh boy is it futuristic.
But we're running into physical and economic barriers that aren't worth crossing.
We're starting to see that putting everything online has real and troubling social costs.
And the devices we use are becoming 'good enough', to the point where we can focus on making them cheaper, more efficient, and accessible to everyone.
So despite appearances, despite the feeling that things are accelerating and changing faster than ever, I want to make the shocking prediction that the Internet of 2060 is going to look recognizably the same as the Internet today.
Unless we screw it up.
And I want to convince you that this is the best possible news for you as designers, and for us as people.
In a recent earnings call Sundar Pichai claimed that at Google now 25% of Code is AI generated (“and then reviewed and accepted by engineers”). In the AI boosterism parts of the web (so basically X and LinkedIn) this number was celebrated: Even Google does AI code generation. So if your whole startup is just ChatGPT in a trenchcoat, you’re basically at the industry standard, right?
Let’s not be cynical here and point at Google’s not exactly stellar recent track record when it comes to great products and software; instead, let’s ask ourselves where that number comes from and what it means.
I went to the UX Brighton conference yesterday.
The quality of the presentations was really good this year, probably the best yet. Usually there are one or two stand-out speakers (like Tom Kerwin last year), but this year, the standard felt very high to me.
But…
The theme of the conference was UX and “AI”, and I’ve never been more disappointed by what wasn’t said at a conference.
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access. [...]
If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance. Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.
But even if the chatbots on offer were built around something other than LLMs, something that could reliably get the right answer, they'd still be a terrible technology for information access.
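The 95%-versus-50% point above can be made concrete with a toy model. The fact-check rates below are invented for illustration (people verify half the output of a coin-flip system, but almost never question a system that is usually right); only the 95%/50% accuracy figures come from the quote.

```python
def undetected_errors(accuracy: float, check_rate: float, queries: int = 1000) -> float:
    """Expected number of wrong answers that slip through unchallenged,
    given an error rate of (1 - accuracy) and the fraction of answers
    the user bothers to fact-check."""
    return queries * (1 - accuracy) * (1 - check_rate)

print(round(undetected_errors(0.50, check_rate=0.50)))  # 250
print(round(undetected_errors(0.95, check_rate=0.02)))  # 49
```

Under these invented numbers, the accurate system still lands 49 wrong answers per thousand queries on a user who isn't checking: 98% of its errors go unchallenged, versus 50% for the obviously unreliable one — and the trusted system is the one people will actually lean on at scale.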
In a recent update to its terms of use, Steam specifies that "gamers are buying a license, not the game itself." The move is meant to bring it into compliance with the provisions of a new California law against misleading advertising. The episode revives the debate over digital ownership in an era of subscriptions that are killing physical copies.
“we found no evidence of formal reasoning in language models …. Their behavior is better explained by sophisticated pattern matching—so fragile, in fact, that changing names can alter results by ~10%!”
Maybe someday Altman’s ideas about AI will prove out, but for now, his approach is textbook Silicon Valley mythmaking.
But research is showing that AI generation may be even more resource-intensive than originally thought. Imagine that you want to ask an AI program to write up a 100-word email for you. You get an almost instant response, but what you don’t see are the intensive computing resources that went into creating that email. At the AI data center, generating just two of those emails could use as much energy as a full charge on the latest iPhone. And according to a Pew Research Center study, that 100-word email could use up a whole bottle of water for the cooling that’s needed at data centers.
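To unpack the claim that generating two of those 100-word emails uses one full phone charge, here is a quick back-of-the-envelope sketch. The 13 Wh battery capacity is an assumption introduced here (recent iPhone batteries hold very roughly that much energy), not a figure from the article.

```python
IPHONE_CHARGE_WH = 13.0  # assumed energy in one full charge of a recent iPhone

def email_energy_wh(emails: int) -> float:
    """Energy implied by the claim that two 100-word AI-generated
    emails consume one full phone charge."""
    return emails * IPHONE_CHARGE_WH / 2

print(email_energy_wh(1))    # 6.5 Wh per email
print(email_energy_wh(365))  # 2372.5 Wh, i.e. ~2.4 kWh for one email a day over a year
```

A few watt-hours per email sounds small until it is multiplied across billions of daily queries, which is where the data-center energy and cooling-water figures come from.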
Just as we don't expect refrigerators to reinvent themselves every year in terms of features, the smartphone has reached a certain maturity and now evolves only in incidental ways. But manufacturers still have to satisfy their shareholders, and therefore persuade us that it is important to buy a new one every year.
A brilliant text by Ted Chiang on the fundamental differences between artistic creation as practiced by a human and the generation of text, images or sound by an LLM (Large Language Model).
Art is notoriously hard to define, and so are the differences between good art and bad art. But let me offer a generalization: art is something that results from making a lot of choices. This might be easiest to explain if we use fiction writing as an example. When you are writing fiction, you are—consciously or unconsciously—making a choice about almost every word you type; to oversimplify, we can imagine that a ten-thousand-word short story requires something on the order of ten thousand choices. When you give a generative-A.I. program a prompt, you are making very few choices; if you supply a hundred-word prompt, you have made on the order of a hundred choices.
The real risk of so-called "AI": not a revolt of machines suddenly bent on destroying humanity, but the automation of administrative tasks by blind, inhuman systems that crush individuals and against which appeal is practically impossible. All of it founded on the (utterly false) belief that an algorithm is necessarily neutral and objective.
Telegram is generally presented as a messaging app of the same kind as WhatsApp and Signal, because they resemble one another in their user interfaces and feature sets. Under the hood, however, Telegram is very different from the other two.
Companies may unintentionally hurt their sales by including the words “artificial intelligence” when describing their offerings that use the technology, according to a study led by Washington State University researchers.
Of course, generative AI is an impressive technology, and it provides tremendous opportunities for improving productivity in a number of tasks. But because the hype has gone so far ahead of reality, the setbacks of the technology in 2024 will be more memorable.
Rather than solving the problems raised by employers’ methods, however, the use of automated job-hunting only served to set off an AI arms race that has no obvious conclusion. ZipRecruiter’s quarterly New Hires Survey reported that in Q1 of this year, more than half of all applicants admitted using AI to assist their efforts. Hiring managers, flooded with more applications than ever before, took the next logical step of seeking out AI that can detect submissions forged by AI. Naturally, prospective employees responded by turning to AI that could defeat AI detectors. Employers moved on to AI that can conduct entire interviews. The applicants can cruise past this hurdle by using specialized AI assistants that provide souped-up answers to an interviewer’s questions in real time. Around and around we go, with no end in sight.
One of the world's largest investment banks wonders if generative AI will be worth the huge investment and hype: "will this large spend ever pay off?"
I work in tech. I think a lot of cool stuff is being built and a lot of good work is being done. But tech is a mature industry, and most of what is interesting these days has to do with bringing the things we learned from 2000-2015 about how to use software into places that have not yet modernized. We’re at the tail end of what’s interesting and good and novel. Software technology has very little left to change in a major way. And the entire ethos of a16z and the like has utterly failed to produce breakthroughs in computer hardware, biological sciences, energy, environment or any other major sector. The last decade of innovation has been entirely about reducing friction in commerce. That’s it. And it’s not that profitable and will end up with a very small number of winners.
Resilient Web Design, a freely accessible book on the evolution of the web.
A video from the Computerphile YouTube channel exploring the hypothesis (scientific paper in support) that AI may soon hit a performance plateau - contrary to the (very self-serving and entirely unsubstantiated) idea of unlimited linear, or even exponential, progress.
97 Kenyan workers co-sign an open letter asking the President of the United States, Joe Biden, to put an end to "working conditions akin to modern slavery" in the tech industry.
The web is now so full of content generated in bulk for purely marketing purposes (SEO, spam, etc.) that Google is abandoning its mission of indexing the entire web. By selecting sources of value (or authority), it is gradually turning into an exclusive catalogue reserved for the web's biggest players.
While the star of the show might have been Nvidia Blackwell, Nvidia's latest data center processor that will likely be bought up far faster than they can ever be produced, there were a host of other AI technologies that Nvidia is working on that will be supercharged by its new hardware. All of it will likely generate enormous profits for Nvidia and its shareholders, and while I don't give financial advice, I can say that if you're an Nvidia shareholder, you were likely thrilled by Sunday's keynote presentation.
For everyone else, however, all I saw was the end of the last few glaciers on Earth and the mass displacement of people that will result from the lack of drinking water; the absolutely massive disruption to the global workforce that 'digital humans' are likely to produce; and ultimately a vision for the future that centers capital-T Technology as the ultimate end goal of human civilization rather than the 8 billion humans and counting who will have to live — and a great many will die before the end — in the world these technologies will ultimately produce with absolutely no input from any of us.
This is something Western geeks, comfortably settled in countries where the use of cryptography is unrestricted, don't always think about. In less favourable contexts, however, it is crucial to keep in mind. Thinking in terms of a threat model means not reasoning in abstract categories - "HTTPS is broken" - but looking for the least-bad solution to a concrete problem. In some cases, disappearing into the mass of Gmail users can be far more protective than self-hosting your mail server and encrypting your mail with PGP.
I used to have two problems: getting people to understand what the surveillance society was, without being treated as paranoid; and it wasn't easy.
With the Snowden revelations, my problem has changed a little: now it is getting paranoid people to understand what the surveillance society is. That is to say, the problem I face is that before, people like me who took an interest in these matters were easily mocked as paranoid. The problem today is that everyone has become paranoid. That is what I am going to try to show you, and why it seems dangerous to me - or problematic, in any case - to move from the surveillance society to the paranoia society.
The reason you should boycott bottled water is because it enables a bullshit, backwards vision for society.
Boycotting bottled water means you support the idea that public access to clean, safe water is not only a basic human right, but that it’s a goddamn technological triumph worth protecting. It means you believe that ensuring public access to this resource is the only way to guarantee it will be around in a few more years.
Clean, safe drinking water that flows freely out of our faucets is a feat of engineering that humans have been been perfecting for two millennia. It is a cornerstone of civilization. It is what our cities are built upon. And over the years the scientists and hydrologists and technicians who help get water to our houses have also become our environmental stewards, our infrastructural watchdogs, our urban visionaries. Drinking the water these people supply to our homes is the best possible way to protect future access to water worldwide.
[...]
Giving up bottled water also means thinking long-term about preserving water security. You may have reservoirs near you brimming over with fresh rainfall right now, but the truth is that the amount of potable water on this planet is growing more scarce every year. The bottled water industry is one of the fastest-growing on the planet. Last year it made $100 billion, an amount that is expected to double within five years. Now consider the fact that it actually takes the equivalent of three bottles of water to make a single water bottle. Every swig from a plastic blob in the name of convenience moves us closer to a world without any clean water at all.
Because like I said before, it’s not about this drought—it’s about every future drought.
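For what it's worth, the "doubling within five years" figure quoted earlier implies a compound annual growth rate of about 15%, which can be checked in two lines:

```python
def cagr_from_doubling(years: float) -> float:
    """Annual growth rate implied by 'revenue doubles within N years':
    solve (1 + r)^years = 2 for r."""
    return 2 ** (1 / years) - 1

print(round(cagr_from_doubling(5) * 100, 1))  # 14.9 (% per year)
```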
The sceptics are right: collectively, news media would be mad to put their fate in Facebook's hands. Individually, though, they would be mad not to.