Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting it.
It’s the same issue that concerns people about AI generating non-consensual sexual imagery.
Sure, anyone with Photoshop could have done it before, but unless they had enormous skill they couldn’t do it convincingly, and there was well-established legal precedent that doing so broke the law. Now Grok can do it for anyone who can type a prompt, and the cops won’t do anything about it.
So yes, anyone could technically have done it before, but now AI is removing the barriers that prevented every angry crazy person with a keyboard from being able to cause significant harm.
I think on balance, the internet was a bad idea. AI is just exemplifying why. Humans are simply not meant to be globally connected. Fucking town crazies are supposed to be isolated, mocked, and shunned, not create global delusions about contrails or Jewish space lasers or flat Earth theory. Or like… white supremacy.
I think there are a few key differences there.
Writing an angry blog post has a much lower barrier to entry than learning to realistically photoshop a naked body onto someone’s face. A true (or false) allegation can be made with poor grammar, but a poor Photoshop job serves as evidence against what it alleges.
While a blog post functions as a claim meant to spread slander, an AI-generated image can be taken as evidence for a slanderous claim, or as an implicit claim in itself (especially considering how sexually repressed countries like the US are).
I struggle to find a good text analogy for what Grok is doing with its zero-cost, rapid-fire CSAM generation…
The “bot blog poisoning other bots against you and getting your job applications auto-rejected” scenario isn’t really something that would play out with people.
They’re called rumors.
Rumors don’t work remotely the same way as the suggested scenario.
It’s a 1:1 correlation. Are you not familiar with any of the age-old cautionary tales about them?
https://youtu.be/ajBrcoEQauU
It’s not a 1:1 correlation. An AI spreading a rumor to other AIs has the potential to be far more rapid, pervasive, and dangerous than humans spreading rumors amongst themselves.
Are you saying you have specific evidence of this (if so, please show exactly how AI will do something people haven’t already done), or are you saying “potential” because you don’t?
Obviously it’s my opinion, but you don’t have evidence that people spreading rumors is just as effective either, so nice try with the gotcha.
I know we live in a post-truth world, but your aggressive refusal to acknowledge decades (if not centuries) of reality in order to freak out over a baseless fantasy is a disturbing example of that.
You’re describing things that people can do. In fact, maybe it was just a person.
If he thinks all those things are bad, he should be “terrified” that bloggers can blog anonymously already.
Edit: I agree with your edit