

If I had to list every single worldwide problem right now, Trump would be connected to at least 80% of that list


the US Government slides further into technofascism via throwing chatbots into the military https://garymarcus.substack.com/p/code-red-for-humanity you don’t hate this administration enough


Literally everything I’ve heard about OpenClaw involves it fucking up or being a security risk. A sensible person would realise this means it’s unreliable and shouldn’t be left to take care of important tasks (or really just tasks in general), but AI boosters are typically not sensible people. And the guy who made it got a job at OpenAI, so this is far from the end…


this is like the fourth time an AI agent has completely deleted something important (I remember an article about an AI deleting all of a scientist’s research). How many more times does it have to happen before people stop using AI to look after anything important???


sharing this channel’s posts is the equivalent of shooting fish in a barrel, but http://youtube.com/post/UgkxoSpDpLNEr9WawVXnl5Mlw4NeQ6-XsLjl this really just feels like an excuse to repost that METR graph. also wtf is the graph on top


it’s always the Elon Musk fans, isn’t it?
and on the topic of Futurism articles on Elon Musk: https://futurism.com/future-society/court-trouble-jury-hates-elon-musk
one word: LMFAOOOO


oh yeah, I 100% agree that their methodology is flawed, and that blog does a pretty good job of outlining the issues. I just thought the absolutely huge gap was both interesting and funny. Their absolutely huge error bars are not a good sign either; between those and the gap, it really feels like someone screwed up


the METR graph has gotten weird https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/ the 50% success rate graph went from 6 hours to 14 hours, but the 80% success rate graph only went from 55 minutes to 1 hour and 3 minutes. I have a hunch it’s a fluke or an outlier, but it’s also very possible that LLM coding is just weird like that


deleted by creator


the AI safety crowd cuts Anthropic way too much slack. Oh, they’re not running CSAM-generating MechaHitler? Oh, they’re not collaborating with the US government to recreate 1984? I’m so proud of them for doing the bare minimum. They still took donations from the UAE and Qatar (something Dario Amodei himself admitted was going to hurt a lot of people, but he took the donations anyway because “they couldn’t miss out on all those valuations”), and they still downloaded mountains of pirated content to train their chatbot. They’re still doing shady shit, so don’t let them off the hook just because they’re slightly less evil than the competition


Update: the screenshot is unfortunately not LLM generated, found the full version on Reddit SneerClub https://web4.ai/


New “AI is not a bubble” video just dropped https://youtu.be/wDBy2bUICQY a lot of skeptical comments are pointing out the flaws in the argument, while the creator tries to defend themselves with mostly mediocre replies


HOLY SHIT LMFAOOOOO


huh, you’re right. usually this channel provides a source for the things they share, but this time there’s nothing.


the full paper is here: https://x.com/alexwg/status/2022292731649777723 right off the bat, two references to Nick Bostrom and Scott Alexander


can’t believe scammers are losing their jobs to AI


I’m pretty sure most of this has already been posted to this thread (I know the “AI published a hit piece on me” thing was), but here’s more Moltbook/Openclaw/whatever-it’s-called nonsense
Latest batch of Anthropic nonsense dropped http://youtube.com/post/UgkxzQmoMujNPQ6rLLGfrXxI67pGVSJrpu9J