• 0 Posts
  • 104 Comments
Joined 2 years ago
Cake day: July 4th, 2023

  • Well, usually I find people like answers to their questions, and I’m generally happy to help when I have those answers. It can also be interesting to run through the hypothetical scenario this sort of response suggests. However, just because we can and did put the scenario into a framework of logic and see what that gave us doesn’t mean the original scenario was meant for it. Calling it illogical nonsense when it was never meant to be a genuine scenario is like calling a fish a horrible distance runner. You’re not wrong, but you’re missing the entire point.

  • Nah, that means you can ask an LLM “is this real?” and get a correct answer.

    That defeats the point of whole categories of material.

    Deepfakes, for instance. International espionage, propaganda, companies that want “real people”.

    Any kind of simple is_ai flag is a non-starter: those sources won’t set it, and their unflagged output ends up right back in every LLM’s training data, even the training data of an LLM that was behaving and flagging its own output.

    You’d need every LLM to do this, and there are open source models, there are foreign ones. And as has already been proven, you can’t rely on an LLM to detect generated content without such a flag.
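
    To make that concrete, here’s a minimal sketch (the field name and content shape are just illustrative assumptions) of why a self-declared flag proves nothing: whoever controls the file controls the flag.

    ```python
    # Hypothetical example: a self-reported "is_ai" flag lives in metadata
    # that the producer controls, so a bad actor just deletes it.

    def strip_ai_flag(content: dict) -> dict:
        """Return a copy of the content with the self-declared flag removed."""
        laundered = dict(content)
        laundered.pop("is_ai", None)  # one line of effort defeats the scheme
        return laundered

    deepfake = {"body": "totally real footage", "is_ai": True}
    laundered = strip_ai_flag(deepfake)

    print("is_ai" in laundered)  # False: downstream consumers see "real" content
    ```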

    The correct way to do it would instead be to organize a not-AI certification for real content. But that would severely limit training data. It could happen once quantity of data isn’t the be-all and end-all for a model, but I dunno when, or if, that’ll be the case.
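
    For contrast, here’s a sketch of the certification idea, roughly the shape of C2PA-style provenance (the library calls are real, but the key handling and trust chain are simplified assumptions): a trusted capture device signs content at creation, and anyone can verify the signature later.

    ```python
    # Sketch of "certified real": sign at capture, verify downstream.
    # Assumes the `cryptography` package; a real deployment would keep the
    # key in secure hardware and anchor the public key to a trusted authority.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()  # lives inside the camera
    public_key = device_key.public_key()       # published / certified

    photo = b"raw sensor bytes"
    signature = device_key.sign(photo)         # attached at capture time

    # Later, a consumer checks the claim against the device's public key.
    try:
        public_key.verify(signature, photo)
        print("certified: captured by a trusted device")
    except InvalidSignature:
        print("no valid provenance: treat as unverified")
    ```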

  • No, because there’s still no case.

    Law textbooks that taught an imaginary case would just get a lot of lawyers in trouble, because eventually someone will wanna read the whole case and will pull the actual record, not just a reference. Those cases aren’t susceptible to this because they’re essentially a historical record. It’s like the difference between a scan of the Declaration of Independence and a high school history book describing it. Only one of those things could be bullshitted by an LLM.

    The same applies to law schools. People reference back to cases all the time, and there’s an opposing lawyer, after all, who’d love a slam-dunk win of “your honor, my opponent is actually full of shit and making everything up”. Any lawyer trained on imaginary material as if it were reality will just fail repeatedly.

    LLMs can deceive lawyers who don’t verify their work, but lawyers are in fact required to verify their work, and the ones who have been caught using LLMs were quite literally not doing their job. If verification didn’t matter, lawyers would just make up cases themselves; they don’t need an LLM for that. It doesn’t happen, because it doesn’t work.

  • I used to feel that way; they didn’t have the depth I wanted.

    My wife has sent me so many tiktoks that I got used to it.

    Now I still don’t watch them, but it’s because I’d get stuck in them if I did. I watch whatever my wife sends, plus specific ones from creators I know make quality stuff, and that’s about it. Once you get past them being presented in a new way, they’re especially addictive to ADHD brains.

    I will say that if you were gonna pick your minute-long vertical video platform, TikTok is the best one and YouTube the worst; Facebook and Instagram are a lot closer to YouTube Shorts than to TikTok. I’m reasonably confident that’s because YouTube, Facebook, and Instagram see short videos as a way to extend your stay on a platform built around other content, while TikTok focuses on them exclusively. Their algorithms are optimizing for different things.