Content Warning
https://prospect.org/power/2025-03-25-bubble-trouble-ai-threat/
But I want to have an aside on the degree to which people uncritically use the term "foundation models" and discuss the "reasoning" of these models, when it is very likely that the models literally memorized all these benchmarks. It truly is like the story of the emperor's new clothes. Everyone seems to be in on it and you're the crazy one going: but HE HAS NO CLOTHES. 🧵
Content Warning
There are *many* problems with #AI. The biggest one, the one that subsumes the rest, is that it's *expensive*
And it's not even *good* at anything useful.
They call it "hallucination" but it's really sparkling #Fail. Any human doing a job with that level of fail would have been fired long ago.
Content Warning
Mitigating Bias in Machine Learning
Edited By @drcaberry
Brandeis Hill Marshall
"We dedicate this work to the diverse voices in #AI who work tirelessly to call out bias, mitigate it, and advocate for #EthicalAI every day.
Some of the trailblazers doing the work are
@ruha9
@timnitGebru
@cfiesler
Joy Buolamwini
@ruchowdh
@safiyanoble
We also dedicate this work to the future engineers, scientists, and sociologists who will use it to inspire them to join the charge."


Content Warning
#Technofeudalists and the perceived #AI threat incongruence
(1/n)
I'm surprised that you, as an #AI and #TESCREAL expert, see a discrepancy in this. For the morbid and haughty minds of the #Longtermists like #Elon, there is no discrepancy, IMHO:
1) At least since Goebbels, fascists have often accused others of what they have done or are about to do themselves; or they deflect, flood the zone, etc. That's at the tactical/communications-strategy level.
2) More...
Content Warning
Anti-AI tools
Glaze
https://glaze.cs.uchicago.edu
Glaze is a system designed to protect human artists by disrupting style mimicry. At a high level, Glaze works by understanding the AI models that are training on human art and, using machine learning algorithms, computing a set of minimal changes to artworks, such that the artwork appears unchanged to human eyes but appears to AI models as a dramatically different art style.
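The "minimal changes, big style shift" idea can be sketched as a projected-gradient search for a small perturbation. This is a toy illustration only, not Glaze's actual method: the linear `style` map, `eps`, `lr`, and every name below are stand-ins I made up (Glaze uses a real learned style encoder and a perceptual constraint).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "style encoder": a random linear map over a flattened 64-pixel
# image. Glaze uses a learned feature extractor; this only shows the search.
W = rng.normal(size=(8, 64)) / 8.0

def style(img_flat):
    return W @ img_flat

def cloak(img, target_style, eps=0.05, steps=300, lr=0.05):
    """Find a perturbation delta so that style(img + delta) moves toward
    target_style while no pixel changes by more than eps -- i.e. the edit
    stays visually negligible (projected gradient descent on an L-inf ball)."""
    delta = np.zeros_like(img)
    for _ in range(steps):
        residual = style(img + delta) - target_style
        grad = 2 * W.T @ residual          # gradient of ||residual||^2
        delta -= lr * grad                 # gradient step
        delta = np.clip(delta, -eps, eps)  # project back into the eps-ball
    return img + delta

img = rng.uniform(0, 1, 64)
target = style(rng.uniform(0, 1, 64))      # feature of a "different style"
cloaked = cloak(img, target)
```

After the loop, `cloaked` is pixel-wise within `eps` of the original, but its (toy) style feature has moved measurably toward the target style.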
Nightshade
https://nightshade.cs.uchicago.edu/
Nightshade, a tool that turns any image into a data sample that is unsuitable for model training
HarmonyCloak
https://mosis.eecs.utk.edu/harmonycloak.html
HarmonyCloak is designed to protect musicians from the unauthorized exploitation of their work by generative AI models. At its core, HarmonyCloak functions by introducing imperceptible, error-minimizing noise into musical compositions.
Kudurru
https://kudurru.ai
Actively block AI scrapers from your website with Spawning's defense network
Nepenthes
https://zadzmo.org/code/nepenthes/
This is a tarpit intended to catch web crawlers. Specifically, it's targeting crawlers that scrape data for LLMs - but really, like the plants it is named after, it'll eat just about anything that finds its way inside.
AI Labyrinth
https://blog.cloudflare.com/ai-labyrinth/
Today, we’re excited to announce AI Labyrinth, a new mitigation approach that uses AI-generated content to slow down, confuse, and waste the resources of AI Crawlers and other bots that don’t respect “no crawl” directives.
More tools, suggested by comments on this post:
Anubis
https://xeiaso.net/blog/2025/anubis/
Anubis is a reverse proxy that requires browsers and bots to solve a proof-of-work challenge before they can access your site.
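The proof-of-work idea is the classic hashcash scheme: the client must burn CPU finding a nonce, while the server verifies with a single hash. A minimal sketch of that general mechanism (not Anubis's actual protocol or parameters; the function names and `difficulty` value are mine):

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty: int = 4) -> int:
    """Client side: find a nonce so that sha256(challenge + nonce) starts
    with `difficulty` hex zeros. Expected cost grows as 16**difficulty."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    """Server side: one hash, cheap to verify, expensive to find."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve_pow("example-challenge", difficulty=3)
```

The asymmetry is the point: a human's browser solves one challenge per visit unnoticed, while a crawler hammering thousands of pages pays the cost thousands of times.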
Iocaine
https://iocaine.madhouse-project.org
The goal of iocaine is to generate a stable, infinite maze of garbage.
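One common way to generate such an "infinite maze of garbage" is a Markov-chain babbler: text that looks statistically plausible to a scraper but carries no meaning. A toy Python sketch of that general technique (iocaine is its own project with its own implementation; the seed text and all names below are mine):

```python
import random

SEED_TEXT = (
    "the quick brown fox jumps over the lazy dog while the quick red fox "
    "naps under the lazy old tree near the quick brown dog"
)

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words observed after it."""
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        chain.setdefault(tuple(words[i:i + order]), []).append(words[i + order])
    return chain

def babble(chain, n_words=40, seed=0):
    """Random-walk the chain to emit endless plausible-looking nonsense --
    the kind of content a tarpit can feed to LLM scrapers indefinitely."""
    rng = random.Random(seed)
    order = len(next(iter(chain)))
    out = list(rng.choice(list(chain)))
    while len(out) < n_words:
        followers = chain.get(tuple(out[-order:]))
        if not followers:                      # dead end: restart the walk
            out.extend(rng.choice(list(chain)))
            continue
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain(SEED_TEXT)
text = babble(chain)
```

Because generation is just dictionary lookups, serving garbage is far cheaper for the defender than crawling it is for the scraper.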
Content Warning
From @pluralistic: https://pluralistic.net/2025/03/18/asbestos-in-the-walls/#government-by-spicy-autocomplete
Content Warning
It's a real pleasure to listen to such a rich conversation on such diverse topics.
I especially liked how it addressed the way the #AI industry labels people and methods.
It's the same for me: I've ended up assuming I'm a #DataScientist when I'm actually a #mechanical #engineer with a #PhD in #statistics. But the industry has decided that what I am is something I never actually studied.
https://www.techwontsave.us/episode/267_ai_hype_enters_its_geopolitics_era_w_timnit_gebru
Content Warning
https://techwontsave.us/episode/267_ai_hype_enters_its_geopolitics_era_w_timnit_gebru
Content Warning
On #TechWontSaveUs, I spoke with @timnitGebru to discuss the state of AI in 2025 and how it continues to sustain the hype.
Listen to the full episode: https://techwontsave.us/episode/267_ai_hype_enters_its_geopolitics_era_w_timnit_gebru
Content Warning
📉 A work of art indeed—Tesla’s AI narrative is finally meeting reality.
Tech leaders can’t hide behind hype forever. The market is watching, and so are we.
AI is not a shortcut to profit—it’s a responsibility. Time to start treating it that way.
Content Warning
https://arstechnica.com/science/2025/03/ai-versus-the-brain-and-the-race-for-general-intelligence/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social
A #longread by @arstechnica about the #anthropocentric #bias of #AGI and #AI companies - money versus nature.
#brain #science #moreThanHuman #animals #intelligence #anthropocene #nature
Content Warning
#RakietaFalcon9 #PolskaAgencjaKosmiczna #polska #poland #falcon9 #Technology #ai #news #nasa #Mastodon #fediverse #feditech #bluesky
Content Warning
BDG news: AI: Japanese-Developed “BuddhaBot Plus” to Debut in Bhutan
🔗 Read here: https://tinyurl.com/3da62wfb
#Buddhism #Buddha #Bhutan #Japan #AI #Buddhadharma #KyotoUniversity
"By looking at where power is being generated & the size of nearby populations, the researchers estimated the number of adverse health events that would likely be caused by AI-related air pollution."
Content Warning
Rehak, R. (2024) On the (im)possibility of sustainable artificial intelligence. In: Züger, T. & Asghari, H. (2024) AI systems for the public interest. Internet Policy Review, 13(3). https://policyreview.info/articles/news/impossibility-sustainable-artificial-intelligence/1804
Anyways, thanks for your great and inspiring work. :)
For an AI model to be genuinely open, you need:
1. The data it was trained and evaluated on
2. The code
3. The model architecture
4. The model weights.
DeepSeek only gives us #3 and #4. And I doubt I'll ever see the day anyone gives us #1 without being forced to, because all of them are stealing data.
Content Warning
Quick question have you seen this initiative "European Open-Source AI index"?
By @dingemansemark & @andreasliesenfeld from @Radboud_uni
Looks good to me as a way to help people determine how open an AI model actually is. Are you aware of other initiatives like this?
I'd like to gather these initiatives and share them with @publicspaces so more people learn what to look for when determining whether an AI model is truly open source.