Content Warning

I'm halfway through this article and I like the points made thus far, which I very much agree with.

https://prospect.org/power/2025-03-25-bubble-trouble-ai-threat/

But I want to have an aside on how uncritically people use the term "foundation models" and discuss the "reasoning" of these models, when it is very likely that the models literally memorized all of these benchmarks. It truly is like the story of "The Emperor's New Clothes." Everyone seems to be in on it and you're the crazy one going, but HE HAS NO CLOTHES. 🧵

@timnitGebru To me, the key takeaway is that, even if you believe that the technology is somewhat useful -- heck, even if you believe that it's world-changing, which is far from clear -- the *economics* of the AI “industry” are largely nonsense.

So at *best*, it's the Dotcom Bubble all over again, and is likely to have the same end result, with a pop that wipes out vast amounts of money (and ruins a lot of innocent people's retirements).

@timnitGebru

There are *many* problems with #AI. The biggest one, the one that subsumes the rest, is that it's *expensive*.

And it's not even *good* at anything useful.

They call it "hallucination" but it's really sparkling #Fail. Any human doing a job with that level of fail would have been fired long ago.

There is no difference between the likes of Stanford and any of these companies; they're one and the same. Schools like that make money from the hype, so they will perpetuate the hype.

The McKinseys and other huge consulting orgs are raking in bank on the hype, akin to everyone who made money during the gold rush except the people actually looking for gold.

All the things people call "laws" aren't laws and were never "laws". Scaling laws? Some people looked at some plots and came up with that.

"Emergence"? Take a look at this paper showing how that is nothing but hot air:
https://proceedings.neurips.cc/paper_files/paper/2023/hash/adc98a266f45005c403b8311ca7e8bd7-Abstract-Conference.html

-"Reasoning"? Let's set aside the fact that they don't even have a definition for this. But literally change some minor thing in the benchmarks, like a number, and you see how these models completely fail. https://arxiv.org/pdf/2410.05229

-"Understanding"? Just watch this debate to see the rigor with which Emily discusses the topic vs those who make these wild claims: https://lnkd.in/e6bgM-43.

@timnitGebru Thanks for the links to the papers on emergence and on reasoning! I'll be sure to check them out. I had already seen the video of The Great Chatbot Debate. I went into it assuming I'd agree with all, or at least most, of the points Prof. Bender (tagging: @emilymbender) made (and I did!), but I expected the other side to be an LLM true believer. Instead, Sébastien Bubeck seemed to be a reasonable, mild-mannered AI researcher who also agreed with Prof. Bender on almost everything! 😀 It made for a less dramatic debate, I guess, but I do like it when people are sensible.

One question Bubeck raised, which Prof. Bender seemed disinclined to consider: yes, current LLMs cannot understand or reason, but might they still be useful for something? I wish she had spoken a little more directly to that question instead of ignoring it. I wondered if she would say something like: "it doesn't matter whether they are useful; we shouldn't use them even if they are", and if so, what weight she might assign to the different arguments she mentioned in the debate that sound applicable here: (1) people misunderstand LLMs, think they do reason and understand, and thus are highly likely to misuse them with dangerous consequences; (2) they were trained unethically; (3) they use way too much energy, far outweighing the benefit. Or maybe she doesn't believe they are useful for anything at all, which would also be interesting.

If you come up with a new benchmark they'll just guzzle it as part of the training data and then claim to do "reasoning" on that.

It is so mind-boggling to me that people have to even spend time debunking these claims. Such a waste of resources that could be going to doing actual science and engineering work.

@timnitGebru First, a huge personal thanks for raising the red flags on LLMs and the creepy TESCREAL ideology they're based on, long before most people realised it was an issue.

I know there's been a big professional and personal cost to that. People are listening, we really appreciate your work and advocacy.

Second, the LLM wave has to be one of the greatest misallocations of resources in human history.

We're in the middle of a climate crisis. There's massive housing shortfalls in many developed countries. Huge investments are needed into health, education, and public transport.

And there are far more critical things we could be researching that would unlock far more benefit to the public.

Heck, given the growing issue of disinformation, there'd arguably be far more public benefit from using these resources to make news journalism from reputable outlets free.

I don't see how investing billions into chasing a few billionaires' TESCREAL fantasies is the best use of our resources at this time.

"Marcus is so confident current approaches cannot take us to the promised land of AGI that he bet Anthropic CEO Dario Amodei $100,000 that AGI would not be achieved by the end of 2027."

Lol but he signed the ridiculous "pause AI" letter with Muskrat and other billionaires and said that DAIR "went for blood" when we wrote this statement against it.

https://www.dair-institute.org/blog/letter-statement-March2023/

The latest is that he's gone to my collaborators to threaten me with litigation, for, I suppose, quoting verbatim what he wrote when attacking me, unsummoned and unmentioned, whenever I raised awareness about the eugenicists in his circles.

Like the time he tried to get us to debate his advisor Steven Pinker and defended him when we said um no he's a race scientist...

@timnitGebru Grown crybullies seem to love making common cause with neo-nazis and threatening black women who are more competent than them. But I guess the fact that these pitiful kyriarchy bros can waste the world's time, resources, and skill with their profound stupidity is why things are the way they are right now

@timnitGebru yeesh. i just saw somebody on Bluesky approvingly quoting something Pinker had said and while it was making a reasonable point my reaction was ... why on earth are you quoting him, there are lots of other people who make similar points and *aren't* race scientists