From: Ben Collver <bencollver@tilde.pink>
Newsgroups: comp.misc
Subject: AI is Dehumanizing Technology
Date: Sat, 31 May 2025 13:07:20 -0000 (UTC)
Organization: A noiseless patient Spider
Message-ID: <101euu8$1519c$1@dont-email.me>
User-Agent: slrn/1.0.3 (Linux)

AI is Dehumanization Technology
===============================

Vintage black & white illustration of a craniometer measuring
someone's skull.
<https://thedabbler.patatas.ca/images/ai-dehumanization/craniometer2.jpg>

AI systems are an attack on workers, climate goals, our information
environment, and civil liberties. Rather than enhancing our human
qualities, these systems degrade our social relations and undermine
our capacity for empathy and care. The push to adopt AI is, at its
core, a political project of dehumanization, and serious consideration
should be given to the idea of rejecting the deployment of these
systems entirely, especially within Canada's public sector.

* * *

At the end of February, Elon Musk--whose xAI data centre is being
powered by nearly three dozen on-site gas turbines that are poisoning
the air of nearby majority-Black neighborhoods in Memphis--went on the
Joe Rogan podcast and declared that "the fundamental weakness of
western civilization is empathy", describing "the empathy response" as
a "bug" or "exploit" in our collective programming.

<https://www.theguardian.com/technology/2025/apr/24/elon-musk-xai-memphis>
<https://www.theguardian.com/us-news/ng-interactive/2025/apr/08/empathy-sin-christian-right-musk-trump>

This is part of a broader movement among Silicon Valley tech oligarchs
and a billionaire-aligned political elite to advance a disturbing
notion: that by abandoning our deeply-held values of justice,
fairness, and duty toward one another--in short, by abandoning our
humanity--we are in fact promoting humanity's advancement.

It's clearly absurd, but if you're someone whose wealth and power are
predicated on causing widespread harm, it's probably easier to sleep
at night if you can tell yourself that you're serving a higher
purpose. And, well, their AI systems and infrastructure cause an awful
lot of harm.

To get to the root of why AI systems are so socially corrosive, it
helps to first step back a bit and look at how they work.

Physicist and critical AI researcher Dan McQuillan has described AI
as 'pattern-finding' tech. For example, to create an LLM such as
ChatGPT, you'd start with an enormous quantity of text, then do a lot
of computationally-intense statistical analysis to map out which words
and phrases are most likely to appear near one another. Crunch the
numbers long enough, and you end up with something similar to the
next-word prediction tool in your phone's text messaging app, except
that this tool can generate whole paragraphs of mostly
plausible-sounding word salad.

<https://www.danmcquillan.org/>

What's important to note here is that the machine's outputs are based
solely on patterns of statistical correlation. The AI doesn't have an
understanding of context, meaning, or causation. The system doesn't
'think' or 'know', it just mimics the appearance of human
communication.
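As a minimal sketch of that kind of pattern-finding, here is a toy
bigram next-word predictor. This is only an illustration, not anything
from McQuillan or an actual LLM vendor: real systems use neural
networks trained on vastly larger corpora, but the principle of
generating text purely from co-occurrence statistics is the same.

    # Toy sketch: next-word prediction from word-pair counts alone.
    # Illustrative only; real LLMs are neural networks over enormous
    # corpora, but they likewise output tokens based on learned
    # statistical correlations, not on meaning.
    from collections import Counter, defaultdict
    import random

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which: pure correlation, no context.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(word):
        """Pick a likely next word based only on observed counts."""
        options = follows.get(word)
        if not options:
            return random.choice(corpus)
        words, counts = zip(*options.items())
        return random.choices(words, weights=counts)[0]

    # Generate plausible-looking text with no understanding behind it.
    word = "the"
    output = [word]
    for _ in range(8):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))

The output can look like a sentence, but nothing in the program
relates words to the world; it only replays correlations found in its
input.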
That's all. Maybe the output is true, or maybe it's false; either way
the system is behaving as designed.

Automating bias
===============

When an AI confidently recommends eating a deadly-poisonous mushroom,
or summarizes text in a way that distorts its meaning--perhaps a
research paper, or maybe one day an asylum claim--the consequences can
range from bad to devastating.

But the problems run deeper still: AI systems can't help but reflect
the power structures, hierarchies, and biases present in their
training data. A 2024 Stanford study found that the AI tools being
deployed in elementary schools displayed a "shocking" degree of bias;
one of the LLMs, for example, routinely created stories in which
students with names like Jamal and Carlos would struggle with their
homework, but were "saved" by a student named Sarah.

<https://hai.stanford.edu/news/how-harmful-are-ais-biases-on-diverse-student-populations>

As alarming as that is, at least those tools exhibit obvious bias.
Other times it might not be so easy to tell. For instance, what
happens when a system like this isn't writing a story, but is being
asked a simple yes/no question about whether or not an organization
should offer Jamal, Carlos, or Sarah a job interview? What happens to
people's monthly premiums when a US health insurance company's AI
finds a correlation between high asthma rates and home addresses in a
certain Memphis zip code?

In the tradition of skull-measuring eugenicists, AI provides a way to
naturalize and reinforce existing social hierarchies, and automates
their reproduction.

<https://racismandtechnology.center/2024/04/01/openais-gpt-sorts-resumes-with-a-racial-bias/>

This is incredibly dangerous, particularly when it comes to embedding
AI inside the public sector. Human administrators and decision-makers
will invariably have biases and prejudices of their own, of course--
but there are some important things to note about this.

For one thing, a diverse team can approach decisions from multiple
angles, helping to mitigate the effects of individual bias. An AI
system, insofar as we can even say it 'approaches' a problem, does so
from a single, culturally flattened and hegemonic perspective.

Besides, biased human beings, unlike biased computers, are aware that
we can be held accountable for our decisions, whether via formal legal
means, professional standards bodies, or social pressure. Algorithmic
systems can't feel those societal constraints, because they don't
think or feel anything at all.

But the AI industry continues to tell us that at some point, somehow,
they will solve the so-called 'AI alignment problem', at which point
we can trust their tools to make ethical, unbiased decisions. Whether
it's even possible to solve this problem is still very much an open
debate among experts, however.

<https://www.aibiasconsensus.org/>

Possible or not, we're told that in the meantime, we should always
have human beings double-checking their systems' outputs. That might
sound like a good solution, but in reality it opens a whole new can of
worms.

For one thing, there's the phenomenon of 'automation bias'--the
tendency to rely on an automated system's result more than one's own
judgement--something that affects people of all levels of skill and
experience, and undercuts the notion that error and bias can be
reliably addressed by having a 'human in the loop'.

<https://pubs.rsna.org/doi/10.1148/radiol.222176>

* * *

Then there's the deskilling effect.
Despite AI being touted as a way to 'boost productivity', researchers
are consistently finding that these tools don't result in productivity
gains. So why do people in positions of power continue to push for AI
adoption? The logical answer is that they want an excuse to fire
workers, and don't care about the quality of work being done.

<https://www.project-syndicate.org/commentary/ai-productivity-boom-forecasts-countered-by-theory-and-data-by-daron-acemoglu-2024-05>

This attack on labour becomes a self-reinforcing cycle. With a smaller
team, workers get overloaded, and increasingly need to rely on
whatever tools are at their disposal, even as those tools devalue
their skills and expertise. This drives down wages, reduces bargaining
power, and opens the door for further job cuts--and likely for
privatization.

<https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/>

Worse still, it seems that the Canadian federal government is actively
pursuing policy that could reinforce this abusive dynamic further; the
2024 Fall Economic Statement included a proposal that would, using
public money, incentivize our public pension funds to invest in AI
data centres to the tune of tens of billions of dollars.

<https://budget.canada.ca/update-miseajour/2024/report-rapport/chap2-en.html#catalyzing-ai-infrastructure>

Suffocating the soul of the public service
==========================================

I'd happily wager that when people choose careers in the public
sector, they rarely do so out of narrow self-interest. Rather, they
choose this work because they're mission-oriented: they want the
opportunity to express care through their work by making a positive
difference in people's lives.

Often the job will entail making difficult decisions. But that's par
for the course: a decision isn't difficult if the person making it
doesn't care about doing the right thing.

And here's where we start to get to the core of it all: human
intelligence, whatever it is, definitely isn't reducible to just logic
and abstract reasoning; feeling is a part of thinking too.

The difficulty of a decision isn't merely a function of the number of
data points involved in a calculation, it's also about understanding,
through lived experience, how that decision will affect the people
involved materially, psychologically, emotionally, socially. Feeling
inner conflict or cognitive dissonance is a good thing, because it
alerts us to an opportunity: it's in these moments that we're able to
learn and grow, by working through an issue to find a resolution that
expresses our desire to do good in the world.

========== REMAINDER OF ARTICLE TRUNCATED ==========