Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • Soyweiser@awful.systems · 7 days ago

    Was reading some science fiction from the ’90s and the AI/AGI said ‘I’m an analog computer, just like you, I’m actually really bad at math.’ And I wonder how much damage one of these ideas did (the other idea being that there are computer types that can do more/different things. Not sure if analog Turing machines provide any new capabilities that digital TMs don’t, but I leave that question for the smarter people in the subject of theoretical computer science).

    The idea that a smart computer will be worse at math (which makes sense from a storytelling perspective as a writer, because a smart AI who can also do math super well is gonna be hard to write) now leads people who have read enough science fiction to see a machine that can’t count or run Doom and go ‘this is what they predicted!’.

    Not a sneer just a random thought.

    • V0ldek@awful.systems · 2 days ago

      Not sure if analog Turing machines provide any new capabilities that digital TMs don’t, but I leave that question for the smarter people in the subject of theoretical computer science

      Capabilities in the sense that they can compute problems that digital TMs cannot? No, they cannot.

      Capabilities in the sense that they’d be more efficient at computing some problems than digital TMs? Hard af to prove or disprove.

      • Soyweiser@awful.systems · 2 days ago

        Yeah, in the complexity theory sense. I think it was already proven that the … shit, sorry, can’t find the correct words… the way analog number storage can have arbitrary precision (?? not sure if that is the correct way to describe it) provides no benefit over digital ones due to various factors. Sorry if it is vague, it has been a long time since I really learned about this stuff.

    • Don Piano@feddit.org · 5 days ago

      If you were to create a humanlike artificial intelligence (in the sense of a whole cognitive apparatus for learning, emotions, motivation, etc.; not in the sense of a chatbot), you would basically just create a guy. Worse: a baby, with the potential of becoming a guy. Generality requires tradeoffs in specificity and such. The smaller and more tightly circumscribable the types of things an intelligent system is supposed to handle are, the more right, fast, etc. the system can be. Wire a button to a bell and the bell’s intelligence will accurately ring the bell when the button is pressed. Create a human child and your resulting intelligence has the full range from Gustav Fechner to someone who desperately wants to impress Eliezer Yudkowsky. It can potentially tackle any sort of problem, but none necessarily well.

      That’s why the “rationalist” fictions about Bayesian superintelligences miss the mark, too. A Bayesian intelligence will use the result of tempering prior information with new information. If they are right in their starting point, then it’s not that impressive if they are right afterwards. If their priors are fucked, they’d conclude bullshit like a misinformed human would. Garbage in, garbage out.
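
      To see the garbage-in-garbage-out point in numbers, here’s a toy Bayes update (a coin-flipping example I made up; the function name and all the numbers below are placeholders, not anyone’s actual model):

      ```python
      # Toy Bayes update: identical evidence, different priors.
      def posterior_mean(prior_heads, prior_tails, heads, tails):
          """Beta-Binomial conjugate update: posterior mean of P(heads)."""
          return (prior_heads + heads) / (prior_heads + prior_tails + heads + tails)

      # Evidence: 52 heads out of 100 flips of a roughly fair coin.
      heads, tails = 52, 48

      # Reasonable prior: roughly uniform belief about the coin's bias.
      print(posterior_mean(1, 1, heads, tails))     # ~0.52, tracks the data

      # Fucked prior: near-certain the coin lands heads 99% of the time.
      print(posterior_mean(990, 10, heads, tails))  # ~0.95, still mostly the prior
      ```

      Perfectly correct updating on top of a broken prior just gets you to confidently wrong, faster.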

      But yeah; AGI, ignoring that chatbots won’t ever be that, would be just some guy.

    • corbin@awful.systems · 7 days ago

      It’s because of research in the mid-80s leading to Moravec’s paradox — sensorimotor stuff takes more neurons than basic maths — and Sharp’s 1983 international release of the PC-1401, the first modern pocket computer, along with everybody suddenly learning about Piaget’s research with children. By the end of the 80s, AI research had accepted that the difficulty with basic arithmetic tasks must be in learning simple circuitry which expresses those tasks; actually performing the arithmetic is easy, but discovering a working circuit can’t be done without some sort of process that reduces intermediate circuits, so the effort must also be recursive in the sense that there are meta-circuits which also express those tasks. This seemed to line up with how children learn arithmetic: a child first learns to add by counting piles, then by abstracting to symbols, then by internalizing addition tables, and finally by specializing some brain structures to intuitively make leaps of addition. But sometimes these steps result in wrong intuition, and so a human-like brain-like computer will also sometimes be wrong about arithmetic too.

      As usual, this is unproblematic when applied to understanding humans or computation, but not a reasonable basis for designing a product. Who would pay for wrong arithmetic when they could pay for a Sharp or Casio instead?

      Bonus: Everybody in the industry knew how many transistors were in Casio and Sharp’s products. Moravec’s paradox can be numerically estimated. Moore’s law gives an estimate for how many transistors can be fit onto a chip. This is why so much sci-fi of the 80s and 90s suggests that we will have a robotics breakthrough around 2020. We didn’t actually get the breakthrough IMO; Moravec’s paradox is mostly about kinematics and moving a robot around in the world, and we are still using the same kinematic paradigms from the 80s. But this is why bros think that scaling is so important.
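
      To make the napkin math explicit: take Moravec’s commonly quoted figure of roughly 10^14 ops/sec for human-equivalent sensorimotor compute, pick a baseline machine and a doubling period, and solve for the crossover year. The baseline speed, the year, and the 18-month doubling in the sketch below are placeholder guesses for illustration, not measured figures:

      ```python
      import math

      # Back-of-the-envelope version of the 80s/90s "robots by ~2020" forecast.
      target_ops = 1e14            # assumed human-equivalent sensorimotor compute
      baseline_ops = 1e7           # assumed ops/sec of a high-end late-80s machine
      baseline_year = 1988
      doubling_period_years = 1.5  # one common statement of Moore's law

      doublings_needed = math.log2(target_ops / baseline_ops)
      crossover_year = baseline_year + doublings_needed * doubling_period_years
      print(round(crossover_year))  # ~2023 with these assumptions
      ```

      Nudge any of those inputs and the answer slides around by a decade or so, which is roughly how the sci-fi timelines ended up clustered around 2020.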

      • Soyweiser@awful.systems · 7 days ago

        Could be, not sure the science fiction authors thought this much about it. (Or if the thing I was musing about is even real and not just a coincidence that I read a few works in which it is a thing). Certainly seems likely that this sort of science is where the idea came from.

        Moravec’s Paradox

        Had totally forgotten the name of that (being better at remembering random meme stuff but not names of concepts like this, or a lot of names in general, is a curse, and also a source of imposter syndrome). But I recall having read the wikipedia page of that before. (Moravec also was the guy who thought of bush robots; wonder if that idea survived the more recent developments in nanotechnology.)

        Rodney Brooks’ wiki page on AI was amusing.

    • lagrangeinterpolator@awful.systems · 7 days ago

      Not sure if analog Turing machines provide any new capabilities that digital TMs don’t, but I leave that question for the smarter people in the subject of theoretical computer science

      The general idea among computer scientists is that analog TMs are not more powerful than digital TMs. The supposed advantage of an analog machine is that it can store real numbers that vary continuously while digital machines can only store discrete values, and a real number would require an infinite number of discrete values to simulate. However, each real number “stored” by an analog machine can only be measured up to a certain precision, due to noise, quantum effects, or just the fact that nothing is infinitely precise in real life. So, in any reasonable model of analog machines, a digital machine can simulate an analog value just fine by using enough precision.
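
      To make the “enough precision” point concrete, here is a toy simulation (the noise level, the bit width, and the function names are all made up for the illustration, not a formal model): the digital machine only needs its quantization step to sit below the analog noise floor, and after that the two readouts are statistically indistinguishable.

      ```python
      import random

      # Toy illustration: a digital simulation of an "analog" value is fine
      # once the quantization step is below the measurement noise floor.
      NOISE = 1e-3  # assumed noise on any readout of the analog quantity

      def read_analog(true_value):
          """An 'analog' read: the exact real number plus unavoidable noise."""
          return true_value + random.gauss(0, NOISE)

      def read_digital(true_value, bits=16):
          """A digital stand-in: quantize to 2^bits levels in [0, 1), then add
          the same readout noise."""
          step = 1.0 / (1 << bits)  # ~1.5e-5, well below NOISE
          return round(true_value / step) * step + random.gauss(0, NOISE)

      x = 0.123456789  # a "continuously precise" stored value
      analog = [read_analog(x) for _ in range(10_000)]
      digital = [read_digital(x) for _ in range(10_000)]

      # The averages agree to within the noise; the extra precision of the
      # analog value is unobservable behind the noise floor.
      print(sum(analog) / len(analog), sum(digital) / len(digital))
      ```

      Not a proof, obviously, just the intuition in code form.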

      There aren’t many formal proofs that digital and analog are equivalent, since any such proof would depend on exactly how you model an analog machine. Here is one example.

      Quantum computers are in fact (believed to be) more powerful than classical digital TMs in terms of efficiency, but the reasons for why they are more powerful are not easy to explain without a fair bit of math. This causes techbros to get some interesting ideas on what they think quantum computers are capable of. I’ve seen enough nonsense about quantum machine learning for a lifetime. Also, there is the issue of when practical quantum computers will be built.

      • V0ldek@awful.systems · 2 days ago

        Quantum computers are in fact (believed to be) more powerful than classical digital TMs in terms of efficiency,

        Importantly though, not crazily so. We know they can do factorisation quickly, and we believe classical machines cannot. But we also believe they can’t quickly solve NP-hard problems.

        (In each instance, “believe” means it’s not proven, but the implications of it being false would be so weird and surprising that we think it’s probably true and are trying to prove it so.)
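
        For context on where the quantum win actually lives: Shor’s algorithm reduces factoring N to finding the multiplicative period of a mod N, and only that period-finding step needs the quantum hardware. A minimal classical sketch of the wrapper (the function names are mine, and the brute-force period finder stands in for the part a quantum computer speeds up):

        ```python
        from math import gcd

        # Classical skeleton of Shor's factoring reduction; find_period() is
        # brute force here, which is the step a quantum computer does quickly.
        def find_period(a, n):
            """Smallest r > 0 with a^r = 1 (mod n). Slow when done classically."""
            x, r = a % n, 1
            while x != 1:
                x = (x * a) % n
                r += 1
            return r

        def try_to_split(n, a):
            """Attempt to factor n using base a; returns None if a was unlucky."""
            g = gcd(a, n)
            if g > 1:
                return g, n // g       # a already shares a factor with n
            r = find_period(a, n)
            if r % 2 == 1:
                return None            # odd period: retry with another a
            y = pow(a, r // 2, n)
            if y == n - 1:
                return None            # trivial square root: retry with another a
            return gcd(y - 1, n), gcd(y + 1, n)

        print(try_to_split(15, 7))  # (3, 5)
        ```

        And, per the above, this buys you factoring-and-friends, not a magic NP-hard solver.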

      • Soyweiser@awful.systems · 7 days ago

        Thanks. I know some complexity theory, but not enough. (Enough to know it wasn’t gonna be my thing).

    • froztbyte@awful.systems · 7 days ago

      this is one of those things that’s, in a narrative sense, a great way to tell a story, while being completely untethered from fact/reality. and that’s fine! stories have no obligation to be based in fact!

      to put a very mild armchair analysis about it forward: it’s playing on the definition of the conceptual “smart” computer, as it relates to human experience. there’s been a couple of other things in recent history that I can think of that hit similar or related notes (M3GAN, the whole “omg the AI tricked us (and then the different species with a different neurotype and capability noticed it!)” arc in ST:DIS, the last few Mission Impossible films, etc). it’s one of those ways in which art and stories tend to express “grappling with $x to make sense of it”

      The idea that a smart computer will be worse at math (which makes sense from a storytelling perspective as a writer, because a smart AI who can also do math super well is gonna be hard to write)

      personally speaking, one of the ways about it that I find most jarring is when the fantastical vastly outweighs anything else purely for narrative reasons - so much so that it’s a 4th-wall break for me in terms of what the story means to convey. I reflect on this somewhat regularly, as it’s a rather cursed rabbithole that recurs: “is it my knowledge of this domain that’s spoiling my enjoyment of this thing, or is the story simply badly written?” is the question that comes up, and it’s surprisingly varied and complicated in its answering

      on the whole I think it’s often good/best to keep in mind that scifi is often an exploration and a pressure valve, but that it’s also worth keeping an eye on how much it’s a pressure valve. too much of the latter, and something™ is up

      • Soyweiser@awful.systems · 7 days ago

        Ow yeah, the way it was used in this story also made sense, but not in a computer science way. Just felt a bit like how Gibson famously had never used a modem before he wrote his cyberpunk series.

        • Charlie Stross@wandering.shop · 7 days ago

          @Soyweiser @techtakes You misremembered: Gibson wrote his early stories and Neuromancer on a typewriter, he didn’t own a computer until he bought one with the royalties (an Apple IIc, which then freaked him out by making graunching noises at first—he had no idea it needed a floppy disk inserting).

          • Soyweiser@awful.systems · 7 days ago

            Thanks! I should have looked up the whole quote, but I just made a quick reply. I knew I had worded it badly and had it wrong, but just didn’t do anything about it. My bad.

    • BlueMonday1984@awful.systemsOP · 7 days ago

      This isn’t an idea that I’d heard of until you mentioned it, so it likely hasn’t got much purchase in the public consciousness. (Intuitively speaking, a computer which sucks at maths isn’t a good computer, let alone AGI material.)

      • Soyweiser@awful.systems · 7 days ago

        Yeah, I was also just wondering, as obv what I read is not really typical of the general public. Can’t think of any place where this idea spread in non-written science fiction, for example, with an exception being the predictions of C-3PO, who always seems to be wrong. But he is intended as a comedic sidekick. (Him being wrong can also be seen as just the lack of value in calculating odds like that, esp. in a universe with The Force.)

        But yes, not likely to be a big thing indeed.