• 0 Posts
  • 38 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • Part of this is a debate on what the definition of intelligence and/or consciousness is, which I am not qualified to discuss. (I say “discuss” instead of “answer” because there is not an agreed upon answer to either of those.)

    That said, one of the main purposes of AGI would be the ability to learn novel subject matter and to come up with solutions to novel problems. No machine learning tool we have created so far is capable of that, on a fundamental level. They require humans to frame their training data by defining what the success criteria are, or they spit out the statistically likely human-like response based on all of the human-generated content they’ve consumed.

    In short, they cannot understand a concept that humans haven’t yet understood, and can only echo solutions that humans have already tried.
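
    A minimal sketch of what “humans framing the training data” means in practice. The data, labels, and hyperparameters here are purely hypothetical and not from any specific system:

```python
# Toy supervised learner: the point is that the "success criteria" are a
# human decision made before training, not something the model discovers.
import numpy as np

# A human chose these features and, crucially, these labels.
X = np.array([0.1, 0.4, 0.6, 0.9])   # inputs
y = np.array([0, 0, 1, 1])           # "correct" answers, defined by a human

w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * X + b)))     # model's predicted probability
    # The loss (here, cross-entropy via its gradient) encodes what "good"
    # means: another human choice baked in before learning starts.
    w -= 0.5 * np.mean((p - y) * X)
    b -= 0.5 * np.mean(p - y)

# The model gets better only at the objective it was handed; it cannot
# invent a new notion of success or a concept outside its training frame.
print(round(w, 2), round(b, 2))
```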



  • While I’m sure the obvious systemic issues contribute to not looking for alternatives, that does sound largely like an issue inherent to optical pulse oximeters. Engineers aren’t miracle workers; they can’t change physics to their liking.

    I’m sure pulse oximeters now are more accurate than they were 20 years ago. We’re still using them because no alternative has been found that is as easy to use, reliable, and non-invasive as a pulse oximeter, even with the known downsides.



  • Canadian here. A minority gov is one with fewer than X seats (where X is 50% of the total in Canada, and I believe in Australia too), and governing from that position usually requires a coalition. “Forming government” in a parliamentary system like these basically means “has a good chance of passing meaningful legislation.” Since the leading party can’t do so alone, they form an agreement with another party (or multiple) to help them reach that threshold.

    It is entirely possible for the party with the most seats to also not form government, if they’re far enough below 50% and can secure no agreement with another party to push them across the line. In these situations, another general election would soon follow.




  • Yes, you’re anthropomorphizing far too much. An LLM can’t understand, or recall (in the common sense of the word, i.e. have a memory), and is not aware.

    Those are all things that intelligent, thinking beings do. LLMs are none of that. They are a giant black box of math that predicts text. They don’t even understand what a word is, or the meaning of anything they vomit out. All they know is which text is statistically most likely to come next, with a little randomization to add “creativity”.
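
    A rough sketch of that last sentence. The token list and scores are made up; real models score tens of thousands of tokens at each step:

```python
# Next-token prediction with temperature sampling: pick the statistically
# likely continuation, with a little randomization for "creativity".
import numpy as np

rng = np.random.default_rng(0)

tokens = ["cat", "dog", "the", "runs"]       # candidate next tokens
logits = np.array([2.0, 1.5, 0.3, -1.0])     # model scores (higher = likelier)

def sample_next(logits, temperature=0.8):
    # Lower temperature: closer to always picking the top token.
    # Higher temperature: more randomness.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

print(tokens[sample_next(logits)])   # usually "cat", sometimes "dog"
```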





  • Eranziel@lemmy.world to Technology@lemmy.world · The GPT Era Is Already Ending
    8 months ago

    This article and discussion are specifically about massively upscaling LLMs. Go follow the links and read OpenAI’s CEO literally proposing data centers that require multiple, dedicated grid-scale nuclear reactors.

    I’m not sure what your definition of optimization and efficiency is, but that sure as heck does not fit mine.








  • You are making it far simpler than it actually is. Recognizing what a thing is, is the essential first problem. Is that a child, a ball, a goose, a pothole, or a shadow that the cameras see? It would be absurd, and an absolute show-stopper, if the car stopped for dark shadows.

    We take for granted the vast amount of work the human brain does in this problem space. The system has to identify and categorize what it’s seeing, otherwise it’s useless; the toy sketch at the end of this comment illustrates why that classification step is the crux.

    That leads to my actual opinion on the technology, which is that it’s going to be nearly impossible to have fully autonomous cars on roads as we know them. It’s fine if everything is normal, which is most of the time. But software can’t recognize and correctly react to the thousands of novel situations that can happen.

    They should be automating trains instead. (Oh wait, we pretty much did that already.)
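
    The toy sketch mentioned above. The class names, confidence numbers, and decision rule are invented for illustration and are not how any real AV stack works:

```python
# Why "what is that dark blob?" is the hard part: the planning logic is
# trivial once recognition is solved, and useless when it isn't.
OBSTACLES = {"child", "ball", "goose", "pothole"}

def plan_action(detections, threshold=0.6):
    """detections: list of (label, confidence) pairs for one camera frame."""
    for label, confidence in detections:
        if label in OBSTACLES and confidence >= threshold:
            return "brake"
        if label in OBSTACLES and confidence < threshold:
            # The genuinely hard case: pothole, or just a shadow?
            return "slow down and reassess"
    return "proceed"

# A dark patch on the asphalt: the classifier hedges between two readings.
frame = [("shadow", 0.55), ("pothole", 0.45)]
print(plan_action(frame))   # everything hinges on the recognition step
```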