
  • The point is that clouds aren’t inherently bad, and actually come with a lot of important upsides; they’ve become bad because capital owns and exploits everything in our society, poisoning what should be a good idea. The author is arguing that while there’s nothing fundamentally wrong with self-hosting, it’s not really a solution, just a patch around the problem. Rather than seeking a kind of digital homesteading where our lives are reduced to isolated islands of whatever we personally can scratch from the land, we should be seeking a digital collectivism where communities, not exploitative corporations, own the digital landscape. Seize the means of file-sharing, in effect.



  • The Nvidia Shield is still the best option for this. I’ve tried all kinds of homebrew solutions and always had headaches. In the two years I’ve had my Shield, I’ve never had a problem. Smart Tube Next lets me cast YouTube without ads, Kodi/Jellyfin gives me my whole media library, plus I’ve got official apps for Nebula, Dropout and Spotify. A custom launcher removes what few ads there were (and that was unobtrusive background banner stuff even at its worst). Plus the pro version can handle some pretty powerful emulators.



  • 1 hour old account with 1 post, submitting a very, very anti-Semitic blog post with zero sources.

    Can y’all maybe read the stuff you’re upvoting before you upvote it? Hard agree that Trump is covering shit up, but this ain’t it.

    (And to be clear, for those who can’t be bothered to just go look for themselves, this is the “Jews secretly run the world” anti-Semitism, not the “How dare you correctly point out that Israel is committing genocide” anti-Semitism).


  • I assume by “thinking engine” you mean “Reasoning AI”.

    Reasoning AI is just more bullshit. What happens is that they produce the output the way they always do - by guessing at a sequence of words that is statistically adjacent to the input they’re given - but then they also produce a randomly generated “chain of thought”, invented in the same way as the result; just pure statistical word association. Essentially they create the output the same way a non-reasoning LLM does, then they give themselves the prompt “Write a chain of thought for this output.” There’s a little extra stuff going on where they sort of check their own output, but in essence that’s just done by running the model multiple times and picking the output they converge on. So, just weighting the randomness, basically.

    I’m simplifying a lot here obviously, but that’s pretty much what’s going on.
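
    To illustrate that last part, here’s a minimal sketch in Python of the “run it multiple times and pick the output they converge on” step. The generate function is a hypothetical stand-in for a model call, not any real API:

    ```python
    from collections import Counter

    def self_consistency(generate, prompt, n_samples=5):
        """Sample the model several times and return the answer the
        samples converge on - a simple majority vote. `generate` is a
        hypothetical stand-in for a model call, not a real API."""
        answers = [generate(prompt) for _ in range(n_samples)]
        # Most common answer wins. This adds no reasoning at all;
        # it just weights the randomness, as described above.
        answer, _count = Counter(answers).most_common(1)[0]
        return answer
    ```

    Nothing in that vote understands anything; it only makes the dice rolls less noisy.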


  • Aren’t they processing high quality data from multiple sources?

    Here’s where the misunderstanding comes in, I think. And it’s not the high quality data or the multiple sources. It’s the “processing” part.

    It’s a natural human assumption to imagine that a thinking machine with access to a huge repository of data would have little trouble providing useful and correct answers. But the mistake here is in treating these things as thinking machines.

    That’s understandable. A multi-billion dollar propaganda machine has been set up to sell you that lie.

    In reality, LLMs are word prediction machines. They try to predict the words that would likely follow other words. They’re really quite good at it. The underlying technology is extremely impressive, allowing them to approximate human conversation in a way that is quite uncanny.

    But what you have to grasp is that you’re not interacting with something that thinks. There isn’t even an attempt to approximate a mind. Rather, what you have is a confabulation engine; a machine for producing plausible fictions. It does this by creating unbelievably huge matrices of words - literally operating in billions of dimensions at once, graphs with many times more axes than we have letters - and probabilistically associating them with each other. It’s all very clever, but what it produces is 100% fake, made up, totally invented.

    Now, because of the training data they’ve been fed, those made up answers will, depending on the question, sometimes end up being right. For certain types of question they can actually be right quite a lot of the time. For other types of question, almost never. But the point is, they’re only ever right by accident. The “AI” is always, always constructing a fiction. That fiction just sometimes aligns with reality.
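
    If it helps to see how bare that mechanism really is, here’s a toy version of the core loop in Python. The probability table is invented purely for illustration - a real model derives its numbers from billions of learned parameters - but the loop itself (predict a distribution, sample a word, repeat) is the whole trick:

    ```python
    import random

    # Toy next-word probability table, invented for illustration.
    # A real LLM computes these probabilities from billions of
    # learned parameters rather than a lookup table.
    NEXT_WORD_PROBS = {
        "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
        "cat": {"sat": 0.6, "ran": 0.4},
        "dog": {"sat": 0.3, "ran": 0.7},
        "moon": {"ran": 0.9, "sat": 0.1},
    }

    def generate(start="the", max_words=3):
        """Repeatedly sample a statistically likely next word.
        No understanding, no checking against reality - just
        plausible word association."""
        words = [start]
        for _ in range(max_words):
            probs = NEXT_WORD_PROBS.get(words[-1])
            if probs is None:
                break
            choices, weights = zip(*probs.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate())  # e.g. "the cat sat"
    ```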



  • That doesn’t seem to bother OpenAI insiders, though, who hope to be bringing in $125 billion in annual revenue by 2029.

    To hit that kind of revenue they would need to convince more than 5% of the world’s population - over 500 million people - to spend $20 a month on a chatbot. Netflix has barely managed to reach 300 million subscribers - not even two thirds of that number - and they offer a whole-ass streaming service. Obviously OpenAI can supplement consumer sales with enterprise and API access, but so far they’re doing a very bad job of that.
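
    The arithmetic is easy enough to check. A back-of-the-envelope version, assuming a world population of roughly 8 billion:

    ```python
    # Back-of-the-envelope check on the $125B revenue target,
    # assuming a $20/month price and a world population of ~8 billion.
    target_revenue = 125e9      # dollars per year
    price_per_year = 20 * 12    # $240 per subscriber per year

    subscribers_needed = target_revenue / price_per_year
    print(f"{subscribers_needed / 1e6:.0f} million subscribers")  # ~521 million
    print(f"{subscribers_needed / 8e9:.1%} of the world")         # ~6.5%
    ```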

    But even if they did hit those numbers, they’d still be running at a loss. By their own admission their product isn’t even profitable at $200 a month. More customers won’t make you more money when everything you sell is sold at a loss.





  • My son has doubled in size every month for the last few months. At this rate he’ll be fifty foot tall by the time he’s seven years old.

    Yeah, it’s a stupid claim to make on the face of it. It also ignores practical realities. The first of those is training data, and the second is context windows. For an AI to successfully write a novel or code a large-scale piece of software like a video game, it would have to hold that entire thing in its context window at once. Context windows are strongly tied to hardware usage, so scaling them to the point where they’re big enough for an entire novel may never be feasible (at least from a cost/benefit perspective).
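
    To put a rough number on the hardware point: with vanilla transformer attention, the attention-score matrix grows with the square of the context length. A sketch, using illustrative dimensions that aren’t any specific model’s real numbers:

    ```python
    # Rough size of the attention-score matrices for a novel-length
    # context under vanilla attention, naively materialized. The
    # dimensions below are illustrative, not any real model's.
    context_tokens = 130_000   # ~100k-word novel
    layers, heads = 48, 32
    bytes_per_score = 2        # fp16

    total = context_tokens**2 * layers * heads * bytes_per_score
    print(f"{total / 1e12:.0f} TB of attention scores")  # ~52 TB
    # Quadratic growth: doubling the context quadruples this cost.
    ```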

    I think there’s also the issue of how you define “success” for the purpose of a study like this. The article claims that AI may one day write a novel, but how do you define “successfully” writing a novel? Is the goal here that one day we’ll have a machine that can produce algorithmically mediocre works of art? What’s the value in that?



  • The key difference being that AI is a much, much more expensive product to deliver than anything else on the web. Even compared to streaming video, AI costs orders of magnitude more to deliver.

    What this means is that providing AI on the model you’re describing is impossible. You simply cannot pack in enough advertising to make ChatGPT profitable. You can’t make enough from user data to be worth the operating costs.

    AI fundamentally does not work as a “free” product. Users need to be willing to pony up serious amounts of money for it. OpenAI have straight up said that even their most expensive subscriber tier operates at a loss.

    Maybe that would work, if you could sell it as a boutique product, something for only a very exclusive club of wealthy buyers. Only that model is also an immediate dead end, because the training costs to build a model are the same whether you make that model for 10 people or 10 billion, and those training costs are astronomical. To get any kind of return on investment these companies need to sell a very, very expensive product to a market that is far too narrow to support it.
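
    The amortization math behind that is worth making explicit. A quick sketch with deliberately made-up round numbers:

    ```python
    # Why a "boutique" model can't work: the training bill is fixed
    # no matter how many customers you have. All numbers here are
    # made up purely for illustration.
    training_cost = 1e9         # hypothetical $1B to train the model
    revenue_per_user = 2_400    # hypothetical $200/month tier, per year

    for users in (10, 10_000, 10_000_000):
        amortized = training_cost / users  # training cost per user
        print(f"{users:>10,} users: ${amortized:,.0f} of training cost "
              f"each, against ${revenue_per_user:,}/year in revenue")
    ```

    At boutique scale the fixed costs swamp any plausible price; at mass-market scale the price has to fall to levels that, per the above, don’t even cover inference.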

    There’s no way to square this circle. Their bet was that AI would be so vital, so essential to every facet of our lives that everyone would be paying for it. They thought they had the new cellphone here; a $40/month subscription plan from almost every adult in the developed world. What they have instead is a product with zero path to profitability.




  • In the comments section of another, a TikTok user responded to a thread outlining the current administration’s anti-LGBTQ actions by saying, “None of that has anything to do with us being gay.”

    Compliance will not save you, you craven fucking cowards. They have never been quiet about how much they hate gay people, and the fact that they’re focused so heavily on attacking trans people right now is only because they want to pick off the weakest of the herd first.

    Bigotry is the enemy of all people. None of us is free until all of us are free.


  • I’m genuinely struggling to believe that you’re being anything other than intentionally disingenuous here, because it’s hard to imagine how anyone operating in good faith could manage to miss a point so completely and utterly.

    But on the off chance that you’re serious; the logic is that purpose has far more moral weight to it than means. Punching out a Nazi to save the black man he was trying to beat to death in the gutter is a morally good thing to do. Punching out a trans person because you’re a hateful bigot is a morally bad thing to do. Do I need to elaborate on that? I feel like I shouldn’t have to, but then it feels like I shouldn’t have to be explaining any of this.

    If you were in a sealed room with a thousand starving children, a padlocked shipping container full of food labelled “Property of Jeff Bezos”, and a set of bolt-cutters, what would you do? Because if the answer is anything other than “Break the lock open”, your entire moral system is completely and utterly fucked, and I do not know how to explain it to you any more plainly than that. If you actually believe that property rights are more important than human lives, then I honestly think you might need serious and extensive therapy to undo whatever damage has been done to you.