The problem isn’t the rise of “AI” so much as how we’re using it.
If a company wants to create a machine learning model that analyzes metrics on an automated production line and spits out parameters to improve the efficiency of their equipment, that’s a great use of the technology. We don’t need an LLM to produce a useless summary of what it thinks I’m asking when all I want is a page of search results.
AI is a tool. What matters is what humans do with the tool, not the tool itself.
“LLMs don’t kill the climate, people do!”
I believe I’ve heard a similar argument before… 🤔
Guns are made to kill. When someone gets killed by a gun, that’s the gun being used for its primary intended purpose. They exist to cause serious harm; causing damage is their entire reason for existing.
Nobody designed LLMs with the purpose of using up as much power as possible. If you want something like that, look at proof-of-work cryptocurrencies, which were explicitly designed to be inefficient and wasteful.
Ahh, see, but the gun people don’t say it’s solely to kill. They say it’s “a tool”. I guess it could be for hunting, or skeet shooting, or target practice. One could argue that they get more out of owning a gun than just killing people.
But the result of gun ownership is also death where it wouldn’t otherwise have occurred. Yes, LLMs are a tool, but they also damage the environment through enormous consumption of energy, most of which comes from non-renewable, polluting sources. Thus, LLM use is killing people, even if that’s not the intent.
But for that brief moment, we all got to laugh at it because it said to put glue on pizza.
All worth it!
There’s literally no point.
Like, humans aren’t really the “smartest” animals. We’re just the best at language and tool use. Other animals routinely demolish us in everything else measured on an IQ test.
Pigeons get a bad rap for being stupid, but their brains are just different from ours. Their image and pattern recognition is so insane that they can recognize that words they’ve never seen aren’t gibberish, just from letter structure.
We weren’t even trying to get them to do it. The researchers were just introducing new words and expecting the pigeons to have to learn them, but the pigeons could already tell, despite never having seen those words before.
Why the hell are we jumping straight to human consciousness as a goal when we don’t even know what human consciousness is? It’s like picking up Elden Ring and going straight for the final boss your very first time playing the game. Maybe you’ll eventually beat it. But why wouldn’t you just start from the beginning and work your way up as the game gets harder?
We should at least start with pigeons: build an artificial pigeon and work our way up.
Like, that old reddit repost about pigeon-guided bombs? That wasn’t a hail Mary, it was incredibly effective.
Who’s jumping to human consciousness as a goal? LLMs aren’t human consciousness. The original post is demagoguery, but it’s not misrepresenting the mechanics. Chatbots already have more to do with your pigeons than with human consciousness.
I hate that the stupidity about AGI some of these techbros are spouting is being taken at face value by critics of the tech.
This is a strawman argument. AI is a tool. Like any tool, it’s used for negative things and positive things. Focusing on just the negative is disingenuous at best. And focusing on AI’s climate impact while completely ignoring the big picture is asinine (the oil industry knew they were the primary cause of climate change more than 60 years ago).
AI has many positive use-cases yet they are completely ignored by people who lack logic and rationality.
AI is helping physicists speed up experiments into supernovae to better understand the universe.
AI is helping doctors expedite cancer screening.
AI is powering robots that can do the dishes.
AI is also helping to catch illegal fishing, tackle human trafficking, and track diseases.