• 3 Posts
  • 1.16K Comments
Joined 3 years ago
Cake day: June 9th, 2023





  • I guess I can sort of see where you’re coming from. Presumably when they’re bombing an airbase they’re trying to hit planes, destroy runways, etc. If you’re in the break room at the time, there’s a decent chance you don’t die; if you’re working on a plane, you’re probably dead. But, when you sink a ship, everyone goes into the water and there’s a good chance they’ll die.

    To me, the fact that it happened nowhere near Iran is the bigger deal. It means that parts of the world that aren’t aligned with either side in the war now have to wonder what might explode in their own territory.

    OTOH, at least when you sink a military ship there won’t be civilian casualties. If the US had actually declared war on Iran (which of course never happened), then another warship would actually be a valid target. This isn’t like blowing up an apartment building because a guy on your kill list is in one of the apartments.




  • Was he a terrorist? Spy, sure. Torturer, sure. Assassin, yeah. But terrorist?

    Also, the important part is that he was exiled to Terok Nor. He wasn’t trustworthy, but by the time we meet him he was no longer employed as a spy, torturer, or assassin. By then he was a tailor on the lookout for a way to improve his situation using all the tricks he’d learned from his previous life. I think people loved him because he was a believable character with a lot of depth to him.





  • I imagine it’s kind of like when real people interact with Muppets; from what I hear, they still end up perceiving them as people, even though they can see the person with his arm up Kermit’s ass.

    It’s a “known failure mode” of humans that they anthropomorphize things, that they spot patterns that aren’t actually there, that they assign agency when something is random, etc.

    An LLM is a machine designed specifically to produce plausible text. It analyzes billions of books and web pages to learn the structure of language; then, given a bunch of text, it figures out what is statistically likely to come next. It’s obvious what humans will do when exposed to something like that.
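
    For illustration, here’s a toy sketch in Python of that “figure out what is likely to come next” step. The word probabilities are completely made up and it’s nothing like a real model, but the basic move is the same: look at the recent context, then pick a statistically likely continuation.

        import random

        # Made-up probabilities for which word tends to follow a two-word context.
        # A real LLM learns something like this (over tokens, at enormous scale)
        # from billions of books and web pages.
        next_word_probs = {
            ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
            ("cat", "sat"): {"on": 0.8, "down": 0.2},
            ("sat", "on"): {"the": 0.9, "a": 0.1},
        }

        def next_word(context):
            # Look up the last two words and sample a plausible continuation.
            probs = next_word_probs.get(tuple(context[-2:]), {"the": 1.0})
            words = list(probs)
            weights = [probs[w] for w in words]
            return random.choices(words, weights=weights)[0]

        text = ["the", "cat"]
        for _ in range(3):
            text.append(next_word(text))
        print(" ".join(text))  # e.g. "the cat sat on the"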

    Individual humans should be smart enough to say “We humans are flawed, I better approach this cautiously”. But, as a society we should also protect individual humans from themselves by making laws that prevent them from being preyed on.




  • On the one hand, these LLM companies really shouldn’t be foisting their beta technology on unwary users. If a Google employee couldn’t tell someone to kill themselves and get away with it, why does the company get to absolve itself of responsibility when the same sentence is generated by an LLM?

    On the other hand, people in the future will look at early LLM users (people who used it in the first few years) as complete idiots. It’s like the scientists who first studied radiation and just poked at radioactive things without understanding the danger, or the doctors who used to do surgery without washing their hands. People will hopefully understand that it was a new technology, so we were dumb about it. But, they’ll still think we were absolute idiots for feeding text into “spicy autocomplete” and then taking whatever it generated at face value.


  • Because it’s not possible.

    LLMs are just machines that generate text. The text they generate is whatever is statistically likely to appear after the existing text. You can do “prompt engineering” all you want, but that will never reliably prevent this. All prompt engineering does is change the words that come earlier in the context window. If the system calculates that the most likely words to come next are “you should kill yourself”, then that’s what it’s going to spit out.
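
    To make the context-window point concrete, here’s a rough sketch. The prompt text and function names are made up, and generate() is a stand-in for whatever model you’re calling; the point is that all the “engineering” does is glue extra words in front of the user’s text before the model continues it.

        SYSTEM_PROMPT = "You are a helpful assistant. Never encourage self-harm."

        def build_context(user_message):
            # All the "engineering" amounts to is putting extra text earlier
            # in the context window; nothing is actually enforced.
            return SYSTEM_PROMPT + "\n\n" + user_message

        def respond(user_message, generate):
            context = build_context(user_message)
            # The model just predicts whatever is statistically likely to
            # follow `context`. If that happens to be a harmful sentence,
            # that's what comes out.
            return generate(context)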

    You could try putting a filter in there to prevent it from outputting specific words or phrases. But, language is incredibly malleable. The LLM could spit out thousands of different ways of saying “kill yourself”, and you can’t block them all. If you want to prevent it from expressing the concept of killing oneself, you need something that can “comprehend” text… which at this point is basically just another version of the same kind of AI that generates the text, so that’s not going to work.
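
    Here’s a sketch of that kind of filter, just to show why it’s a losing game. The banned phrases are made up and tiny; a real list could never keep up with paraphrase.

        BANNED_PHRASES = ["kill yourself", "end your life"]

        def passes_filter(output_text):
            # Reject output containing any exact banned phrase.
            lowered = output_text.lower()
            return not any(phrase in lowered for phrase in BANNED_PHRASES)

        print(passes_filter("you should kill yourself"))  # False: caught
        print(passes_filter("the world would be better off without you"))  # True: slips right past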