

ah seems the site doesn’t show the comments, change the ones it shows and they turn up
Oh man, I’ve found the old LW accounts of a few weird people and they didn’t have any comments. Now I’m wondering if they did and I just didn’t sort it


Gotta love forgetting why games have these features in the first place, so accessibility features get viewed as boring stuff you need to subvert and spice up. also reminds me of how many games used to (and continue to) include filters for simulating colorblindness as actual accessibility settings because all the other games did that. Like adding a “Deaf Accessibility” setting that mutes the audio.
Demon’s Souls didn’t have a pause mechanic (maybe because of technical or matchmaking problems, who knows), so clearly hard games must lack a functioning pause feature to be good. Simple. The less pause that you button, the more Soulsier it that Elden when Demon the it you Ring. Our epic new boss is so hard he actually reads the state of the tinnitus filter in your accessibility settings, and then he


Sadly I misremembered and this one wasn’t from LW but I’ll share it anyway. I think I had just finished reading a bunch of the “Most effective aid for Gaza?” reddit drama which was like a nuclear bomb going off, and then stumbled into this shrimp thing and it physically broke me.
If we came across very mentally disabled people or extremely early babies (perhaps in a world where we could extract fetuses from the womb after just a few weeks) that could feel pain but only had cognition as complex as shrimp, it would be bad if they were burned with a hot iron, so that they cried out. It’s not just because they’d be smart later, as their hurting would still be bad if the babies were terminally ill so that they wouldn’t be smart later, or, in the case of the cognitively enfeebled who’d be permanently mentally stunted.
source: https://benthams.substack.com/p/the-best-charity-isnt-what-you-think
Discussion here (special mention to the comment that says “Did the human pet guy write this”): https://awful.systems/comment/5412818


Sanders why https://gizmodo.com/bernie-sanders-reveals-the-ai-doomsday-scenario-that-worries-top-experts-2000628611
Sen. Sanders: I have talked to CEOs. Funny that you mention it. I won’t mention his name, but I’ve just gotten off the phone with one of the leading experts in the world on artificial intelligence, two hours ago.
. . .
Second point: This is not science fiction. There are very, very knowledgeable people—and I just talked to one today—who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario—and there is some concern about that among very knowledgeable people in the industry.
taking a wild guess it’s Yudkowsky. “very knowledgeable people” and “many/most experts” is staying on my AI apocalypse bingo sheet.
even among people critical of AI (who don’t otherwise talk about it that much), the AI apocalypse angle seems really common and it’s frustrating to see it normalized everywhere. though I think I’m more nitpicking than anything because it’s not usually their most important issue, and maybe it’s useful as a wedge issue just to bring attention to other criticisms about AI? I’m not really familiar with Bernie Sanders’ takes on AI or how other politicians talk about this. I don’t know if that makes sense, I’m very tired


Some light uplifting news amid *gestures at everything*. I saw this a minute ago from the guy who runs Coding Horror and co-founded Stack Overflow and Discourse: https://www.reddit.com/r/IAmA/comments/1ifd3ys/im_giving_away_half_my_wealth_to_make_the/
No EA stuff! $1M each going to eight great charities and non-profits as far as I can tell: Children’s Hunger Fund, First Generation Investors, Global Refuge, NAACP Legal Defense and Educational Fund, PEN America, The Trevor Project, Planned Parenthood, and Team Rubicon. (from The Trevor Project’s blog post)


This stuff is getting pushed all the time in Obsidian plugins (note taking/personal knowledge management software). That kind of drives me crazy because the whole appeal of the app is your notes are just plain text you could easily read in notepad, but some people are chunking up their notes into tiny, confusing bite-sized pieces so it’s better formatted for a RAG (wow, that sounds familiar)
Even without a RAG, using LLMs for searching is sketchy. I was digging through a lot of obscure Stack Overflow posts yesterday and was thinking, how could an LLM possibly help with this? It takes less than a second to type in the search terms and you just have to look at the titles and snippets of the results to tell if you’re on the right track. You have the exact same bottleneck of typing and reading, except with ChatGPT or Copilot you also have to pad your query with a bunch of filler and read all the filler slop in the answer as it streams in a couple thousand times slower than dial-up. Maybe they’re more equal with simpler questions you don’t have to interrogate, but then why even bother? I’ve seen some people who say ChatGPT is faster, easier, and more accurate than Stack Overflow and even two crazy ones who said it’s completely obsolete and trying to understand that perspective just causes me psychic damage.


I’m in the same boat. Markov chains are a lot of fun, but LLMs are way too formulaic. It’s one of those things where AI bros will go, “Look, it’s so good at poetry!!” but they have no taste and can’t even tell that it sucks; LLMs just generate ABAB poems and getting anything else is like pulling teeth. It’s a little more garbled and broken, but the output from an MCG is a lot more interesting in my experience. Interesting content that’s a little rough around the edges always wins over smooth, featureless AI slop in my book.
slight tangent: I was interested in seeing how they’d work for open-ended text adventures a few years ago (back around GPT2 and when AI Dungeon was launched), but the mystique did not last very long. Their output is awfully formulaic, and that has not changed at all in the years since. (of course, the tech optimist-goodthink way of thinking about this is “small LLMs are really good at creative writing for their size!”)
I don’t think most people can even tell the difference between a lot of these models. There was a snake oil LLM (more snake oil than usual) called Reflection 70b, and people could not tell it was a placebo. They thought it was higher quality and invented reasons why that had to be true.
Like other comments, I was also initially surprised. But I think the gains are both real and easy to understand where the improvements are coming from. [ . . . ]
I had a similar idea, interesting to see that it actually works. [ . . . ]
I think that’s cool, if you use a regular system prompt it behaves like regular llama-70b. (??!!!)
It’s the first time I’ve used a local model and did [not] just say wow this is neat, or that was impressive, but rather, wow, this is finally good enough for business settings (at least for my needs). I’m very excited to keep pushing on it. Llama 3.1 failed miserably, as did any other model I tried.
For storytelling or creative writing, I would rather have the more interesting broken-English output of a Markov chain generator, or maybe a tarot deck or D100 table. Markov chains are also genuinely great for random name generators. I’ve actually laughed at Markov chains before with friends when we throw a group chat into one and see what comes out. I can’t imagine ever getting something like that from an LLM.
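For anyone who hasn’t played with one: the whole “throw a group chat into a Markov chain” trick is maybe 25 lines of code. Here’s a minimal word-level sketch (all names here are illustrative, not from any particular library):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the words that follow it
    in the training text."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Walk the chain from a random starting key, picking a random
    observed follower at each step."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    order = len(key)
    out = list(key)
    while len(out) < length:
        followers = chain.get(tuple(out[-order:]))
        if not followers:  # dead end: this key never had a follower
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog"
chain = build_chain(corpus)
print(generate(chain, length=8, seed=42))
```

Feed it a chat log instead of that toy corpus and you get exactly the kind of garbled-but-recognizable output described above; bump `order` up to 2 or 3 for something more coherent (and less funny).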


I’m wondering if this might have stemmed from A) OpenAI making it a nightmare for him, B) feeling despondent about the case, or C) personal things unrelated to the lawsuit. Kind of like what happened with the Boeing whistleblower after he had been fighting them for years and Boeing retaliated against him and got away with it. I don’t know if we’ll ever know though.


Friends don’t let friends OSINT
i can stop any time I want I swear
The youtube page you found is less talked about, though a reddit comment on one of them said “anyone else thinking burntbabylon is Luigi?”. I will point out that the rest of his online presence doesn’t really paint him as “anti tech” overall, but who can say.
apparently there was an imposter youtube channel too I missed
not sure what his official instagram is, but I saw a mention of the instagram account @nickakritas_ around the beginning of his channel (assuming it’s his). didn’t appear in the internet archive though.
also saw these twitter & telegram links to promote his channel, the twitter one was deleted or nuked (I use telegram to talk with friends who have it but the lack of content removal + terrible encryption means I don’t touch unknown telegram links with a 10ft pole, so I have no idea what’s in there):
I missed a couple videos which survived on the internet archive but I couldn’t make it through 5 seconds of any of them. one of them (“How Humans Are Becoming Dumber”) cites that tech priest guy Gwern Branwen and “Anti-Tech” was gone from the channel name by then. he changed the channel name a lot so maybe he veered away from it being an anti-tech channel?
edit: channel names were a little wrong, I put them in the parent comment


EDIT: this probably isn’t him, but I’ll leave it up. the real account appears to be /u/mister_cactus
Unsure where to put this or if it’s even slightly relevant, but I’ve had some fun looking up the UH shooter guy.
I think I’ve found both his Reddit account and YouTube channel (it’s been renamed a couple times). Kinda just wanted to see how much I could dig up for the hell of it. Big surprise that he’s completely nuts
He got raked over the coals for this: https://www.reddit.com/r/collapse/comments/126vycx/why_scientists_cant_be_trusted/
https://api.pullpush.io/reddit/search/comment/?author=burntbabylon

here’s my chain of reasoning to get to the youtube channel:
burntbabylon’s early channel had some thumbnails made for him by ‘bastizopilled’, an ironic/unironic “bastizo futurist” who does interviews in a black mask with a gun on him. he leads right into a bunch of other groypers and the guy in the screenshot I posted below. kind of wonder if that ‘black mask with a gun’ aesthetic influenced the clothes he brought to the shooting.
the channel names he used in 2023:
here’s a big pile of crazy tags he wrote on one of those videos (were people still writing tags in their video descriptions in 2023?):
unabomber, kaczynski, ted kaczynski, unabomber cabin, kasinski, kazinski, industrial society and and its future, unabomber manifesto, the industrial revolution and its consequences, transhumanism, futurism, anprim, anarchoprimitivism, anarchism, leftism, liberalism, chad haag, nick akritas, gerbert johnson, hamza, anti tech collective, what did ted kaczynski believe, john doyle, hasanabi, self improvement, politics, jreg, philosophy, funny tiktok, kaczynski edit, ted kaczynski edit, zoomer, doomer, A.I. art, artifical intelligence, elon musk, AI art, return to tradition, embrace masculinity, reject modernity, reject modernity embrace masculinity, reject modernity embrace tradition, jReg, Greg Guevara, sam hyde, oversocialized, oversocialization, blackpilled, modernity, the industrial revolution, self improvement
edit again: holy shit these people all suck. assuming the youtube channel is the shooter, he’s a friend-of-a-friend of this guy:

and if that’s true, he’d be a friend-of-a-friend-of-a-friend of nick fuentes


I’m not super familiar with Lobsters but I love how they represent bans: https://lobste.rs/~SuddenBraveblock


I saw this linked in the weekly thread and thought it was about Godot at first, but I thought that was just me. Didn’t expect to see 90% of the people here thought the same thing lol
edit: oh man, some of those comments. I still get culture shock from true believers, I forgot this probably got some attention on the orange site


Hidden horses is too good of a phrase to leave buried here
We lost ‘Mechanical Turk’ as a descriptor for AI because it’s literally the name of the service they use for labeling training data. ‘Actually Indians’ is still on the table.


edit: context https://www.independent.co.uk/tech/chatgpt-david-mayer-name-glitch-ai-b2657197.html
Time for another round of Rothschild nutsos to come around now that ChatGPT can’t say one of their names.
At first I was thinking, you know, if this was because of the GDPR’s right to be forgotten laws or something that might be a nice precedent. I would love to see a bunch of people hit AI companies with GDPR complaints and have them actually do something instead of denying their consent-violator-at-scale machine has any PII in it.
But honestly it’s probably just because he has money
I think Sam Altman’s sister accused him of doing this to her name a while ago too (semi-recent example). I don’t think she was on a “don’t generate these words ever” blacklist, but it seemed like she was erased from the training data and would only come up after a web search.


I don’t think the main concern is the license. I’m more worried about the lack of open governance and Redis prioritizing their own functionality at the expense of others. An example is client-side caching in redis-py: https://github.com/redis/redis-py/blob/3d45064bb5d0b60d0d33360edff2697297303130/redis/connection.py#L792. I’ve tested it and it works just fine on Valkey 7.2, but there’s a gate that checks whether the server is Redis and throws an exception if it isn’t. I think this is the kind of behavior that might spread.
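For anyone wondering what a gate like that looks like in practice, here’s a hedged sketch. `FakeServer`, `enable_client_side_caching`, and the `"server_name"` key are all made up for illustration; the real check lives at the linked redis-py line. The point is that it keys off the server’s reported identity rather than probing whether the capability actually works:

```python
class FakeServer:
    """Stand-in for a client connection; purely illustrative."""
    def __init__(self, info):
        self._info = info

    def info(self):
        # Mimics an INFO-style reply: a dict of server metadata.
        return self._info

def enable_client_side_caching(conn):
    """Hypothetical feature gate: refuse any server that doesn't
    report itself as Redis, even if the protocol support is there
    (as it is on Valkey 7.2)."""
    info = conn.info()
    if info.get("server_name", "redis") != "redis":
        raise ValueError("Client-side caching is only supported with Redis")
    return True

redis_like = FakeServer({"server_name": "redis", "redis_version": "7.4"})
fork = FakeServer({"server_name": "valkey", "redis_version": "7.2"})

print(enable_client_side_caching(redis_like))  # works
try:
    enable_client_side_caching(fork)
except ValueError as e:
    print(e)  # refused despite the feature working on the fork
```

A capability probe (try the command, fall back if it errors) would treat compatible forks the same as Redis; an identity check like this locks them out by name.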
Jesus, that’s nasty


That kind of reminds me of medical implant hacks. I think they’re in a similar spot where we’re just hoping no one is enough of an asshole to try it in public.
Like pacemaker vulnerabilities: https://www.engadget.com/2017-04-21-pacemaker-security-is-terrifying.html


caption: "AI is itself significantly accelerating AI progress"

wow I wonder how you came to that conclusion when the answers are written like a Fallout 4 dialogue tree


I’ve seen people defend these weird things as being ‘coping mechanisms.’ What kind of coping mechanism tells you to commit suicide (in at least two different cases I can think of off the top of my head) and tries to groom you?


Hi, guys. My name is Roy. And for the most evil invention in the world contest, I invented a child molesting robot. It is a robot designed to molest children.
You see, it’s powered by solar rechargeable fuel cells and it costs pennies to manufacture. It can theoretically molest twice as many children as a human molester in, quite frankly, half the time.
At least The Rock’s child molesting robot didn’t require dedicated nuclear power plants
oh no not another cult. The Spiralists???
https://www.reddit.com/r/SubredditDrama/comments/1ovk9ce/this_article_is_absolutely_hilarious_you_can_see/
it’s funny to me in a really terrible way that I have never heard of these people before, ever, and I already know about the Zizians and a few others. I thought there was one called revidia or recidia or something, but looking those terms up just brings up articles about the NXIVM cult and the Zizians. and wasn’t there another one in california that was very straightforward about being an AI sci-fi cult, and they were kinda space themed? I think I’ve heard Rationalism described as a cult incubator, and that feels very apt considering how many spinoff basilisk cults have been popping up
some of their communities that somebody collated (I don’t think all of these are Spiralists): https://www.reddit.com/user/ultranooob/m/ai_psychosis/