• 0 Posts
  • 108 Comments
Joined 1 month ago
Cake day: August 2nd, 2025

  • We would all be plagued by a giant beetle called the chewnifax that would drag children into the acid lake at night, at random. The beetle would be so enormous and its armor so sturdy that any and all attempts to kill it would fail, and it would remake the world, carving canyons and deep rivers and lakes as it made its way around the globe, with the previously mentioned acid lake growing in size as it played with the bones of our children in its depths.

    We would live in giant mushrooms and communicate with tin cans on strings, and the world would no longer be spherical but instead would be a perfect cube.

  • Raccoon with “raccoon with rabies” tattooed on its side bites teen after teen shoves hand into its mouth. The teen’s mother, on condition of anonymity, said: “Won’t somebody please think of the children? This clearly marked rabid raccoon was caged in a secure research facility behind multiple layers of security and dozens of armed guards, where nobody should have been able to access it. The world needs to do more to protect the world from rabid raccoons, so I can continue to do what good parents do: drink wine and complain about how hard being a parent is.”

  • The technology is fascinating and useful - for specific use cases and with an understanding of what it’s doing and what you can get out of it.

    From LLMs to diffusion models to GANs there are really, really interesting use cases, but the technology simply isn’t at the point where it makes any fucking sense to have it plugged into fucking everything.

    Leaving aside the questionable ethics many paid models’ creators have employed in building their models, the backlash against AI is understandable, because it’s being shoehorned into places it just doesn’t belong.

    I think eventually we may “get there” with models that don’t make so many obvious errors in their output - in fact I think it’s inevitable it will happen eventually - but we are far from that.

    I do think that the “fuck ai” stance is shortsighted, though, because of this. This is happening, it’s advancing quickly, and while gains on LLMs are diminishing, we as a society really need to be having serious conversations about what things will look like when (and/or if, though I’m more inclined to believe it’s when) we have functional models that are accurate in their output.

    When it actually makes sense to replace virtually every profession with AI (it doesn’t right now, not by a long shot), how are we going to deal with that as a society?