

This is Uzumaki by Junji Ito but computers and stupid
Only Bayes Can Judge Me
I think I mentioned Doughboys in this sack somewhere, so talking about CB is fair game
RE: (meta?)speculation about the quantum bubble. So the preceding two tech bubbles tried to worm their way into the arts: NFTs, and slop. How do we think the quantum hucksters are gonna try to co-opt creativity?
KP writing a paper for the journal “New Frontiers In Gaslighting Children”
All I need to know about HDI (human-dolphin interaction) (read: fuckin) is covered in many episodes of my favorite podcast Doughboys
I haven’t clicked on any links here yet; this sounds like a bit, but because it’s LW I have to assume bad faith and that this is real.
E: lol real. Why wouldn’t they go for apes lol
Putting the “manic pixie” in manic pixie dream girl
Ah fuck. I didn’t click it cos I assumed it was the original tweet. I’ll just leave it there so you can all witness my crimes
Oh, looks like gemini is a fan of the hacky anti-comedy bits from some of my favorite podcasts
Lmao, piss-soaked fake nostalgia aside, what is even the point of this? How exactly is one supposed to go back to the ’80s? Is this an ad campaign for a toaster bath or something?
Yeah but will a ghost from pac-man tell me it loves me and get me to divorce my wife???*
*yes, actually. Pinky is moving in next month
altman is the waluigi to musk’s wario
You could fart into a mic and compromise a clanker, it seems.
also, they misspelled “Eliezer”, lol
DRAMATIS PERSONAE
Belligerents
Ah yes Basilisk’s Roko, the thought experiment where we simulate infinite AIs so that we can hurl insults at them
Author works on ML for DeepMind but doesn’t seem to be an out-and-out promptfondler.
Quote from this post:
I found myself in a prolonged discussion with Mark Bishop, who was quite pessimistic about the capabilities of large language models. Drawing on his expertise in theory of mind, he adamantly claimed that LLMs do not understand anything – at least not according to a proper interpretation of the word “understand”. While Mark has clearly spent much more time thinking about this issue than I have, I found his remarks overly dismissive, and we did not see eye-to-eye.
Based on this I’d say the author is LLM-pilled at least.
However, a fruitful outcome of our discussion was his suggestion that I read John Searle’s original Chinese Room argument paper. Though I was familiar with the argument from its prominence in scientific and philosophical circles, I had never read the paper myself. I’m glad to have now done so, and I can report that it has profoundly influenced my thinking – but the details of that will be for another debate or blog post.
Best case scenario is that the author comes around to the stochastic parrot model of LLMs.
E: also from that post, rearranged slightly for readability here. (the […]* parts are swapped in the original)
My debate panel this year was a fiery one, a stark contrast to the tame one I had in 2023. I was joined by Jane Teller and Yanis Varoufakis to discuss the role of technology in autonomy and privacy. [[I was] the lone voice from a large tech company.]* I was interrupted by Yanis in my opening remarks, with claps from the audience raining down to reinforce his dissenting message. It was a largely tech-fearful gathering, with the other panelists and audience members concerned about the data harvesting performed by Big Tech and their ability to influence our decision-making. […]* I was perpetually in defense mode and received none of the applause that the others did.
So the author is also tech-brained, not “tech-fearful”.
*Smirk* I’m in.