• 6 Posts
  • 286 Comments
Joined 2 years ago
Cake day: August 11th, 2023






  • areyouevenreal@lemm.ee to Fediverse@lemmy.world · *Permanently Deleted*

    If right-wingers (or even other leftist groups) came into an explicitly tankie community and started arguing with people, how would you react?

    Also, do you actually know what tankies are? They aren’t the majority in any country I know of. Non-tankie doesn’t even mean right wing. Anarchists are further left than tankies.






  • I’ve tried making this argument before and people never seem to agree. I think Google claims Kubernetes is actually more secure than traditional VMs, but how true that really is I have no idea. Unfortunately, there are already things we depend on for security that are probably less secure than most container platforms, like ordinary Unix permissions or technologies like AppArmor and SELinux.
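
    As a rough illustration of the kind of isolation options container platforms add on top of plain Unix permissions, here is a hedged sketch of launching a container with some of Docker’s standard hardening flags (the image and command are just placeholders):

    ```python
    # Sketch: run a locked-down container via the Docker CLI.
    # The flags are standard Docker options; "alpine" and "id" are placeholders.
    import subprocess

    subprocess.run([
        "docker", "run", "--rm",
        "--cap-drop=ALL",                            # drop all Linux capabilities
        "--security-opt", "no-new-privileges:true",  # block setuid privilege escalation
        "--read-only",                               # read-only root filesystem
        "alpine", "id",
    ], check=True)
    ```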


  • Crypto is actually really useful for buying drugs and other illegal products and services. People have also legitimately made a lot of money, provided they weren’t falling for the stupid scams. You should see the price of Bitcoin or Ethereum these days.

    Not saying it’s ethical to run a lot of these given their limited usefulness and very high costs. But saying they didn’t make people money, were all scams, or didn’t have a use is objectively wrong.







  • That’s not true, though. The models themselves are hella intensive to train. We already have open-source programs to run LLMs at home, but they are limited to smaller open-weights models. Having a full ChatGPT model that could be run by any service provider or home-server enthusiast would be a boon. It would certainly make my research more effective.
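
    To make that concrete, here is a minimal sketch of running a small open-weights model locally with the Hugging Face transformers library (the model name is just one example of a small open-weights model, not anything from the original comment):

    ```python
    # Minimal sketch: load and run a small open-weights model at home.
    # Assumes the Hugging Face `transformers` library; the model name is an example.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # small enough for a home machine

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("Summarise why open weights matter.", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```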


  • There is a lot that can be discussed in a philosophical debate. However, any 8-year-old would be able to count how many letters are in a word. LLMs can’t reliably do that by virtue of how they work. This suggests to me that it’s not just a model/training difference. Also, evolution over millions of years improved the “hardware” and the genetic material. Neither of these compares to the computing power or amount of data used to train LLMs.

    Actually humans have more computing power than is required to run an LLM. You have this backwards. LLMs are, by comparison, a lot more efficient given how little computing power they need to run. Human brains as a piece of hardware are insanely high-performance and energy-efficient. I mean, they include their own built-in power plant plus a maintenance and security crew, for fuck’s sake. Give me a human-built computer that has that.
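
    As an aside on the letter-counting point above: that failure is largely a tokenization artifact, since the model is fed token IDs rather than individual characters. A rough sketch (assuming the tiktoken library and its cl100k_base encoding):

    ```python
    # Rough sketch: show the token chunks a model actually sees for a word.
    # Assumes the `tiktoken` library and the cl100k_base encoding.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    print(tokens)                                              # a short list of integer IDs
    print([enc.decode_single_token_bytes(t) for t in tokens])  # multi-letter chunks, not characters
    ```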

    Anyway, time will tell. Personally I think it’s possible to reach a general AI eventually; I simply don’t think the LLM approach is the one leading there.

    I agree here. I do think, however, that LLMs are closer than you think. They do in fact have both attention and working memory, which is a large step forward. The fact that they can only process one medium (text) is a serious limitation. Presumably a general-purpose AI would ideally be able to process visual input, auditory input, text, and other things like various sensor types. There are other model types, though, some of which take in multi-modal input to make decisions, as in a self-driving car.
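
    For what it’s worth, the attention mechanism these models rely on is a concrete, well-defined operation. A minimal generic sketch of scaled dot-product attention (the standard formulation, not tied to any particular model):

    ```python
    # Minimal sketch of scaled dot-product attention using NumPy.
    import numpy as np

    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])         # query-key similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
        return weights @ V                              # weighted mix of value vectors

    # Toy example: 3 tokens with 4-dimensional embeddings.
    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(3, 4))
    print(attention(Q, K, V).shape)  # (3, 4)
    ```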

  • I think a lot of people romanticize what humans are capable of while dismissing what machines can do, especially given the processing-power and efficiency limitations that come with the simple silicon-based processors current machines are made from.