• 8 Posts
  • 468 Comments
Joined 2 years ago
Cake day: August 15th, 2023

  • The media (Blu-ray, DVD, whatever…) didn’t matter so much. Adding depth fields to existing media works, but it isn’t exactly perfect. The tech should be much better now, but it took a fuck ton of manual labor to convert films to be compatible with 3D. Back when 3D TVs were being pushed, studios had to shoot movies in 3D as well, which took more time and more equipment.

    Here is an old pic I took during the conversion of Titanic into 3D, since it wasn’t filmed in 3D from the start. Each frame needed its depth fields mapped, by hand, in a room full of junior-level staff. The work was split across multiple studios.
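    Those per-frame depth maps feed something like depth-image-based rendering: shift each pixel sideways by a disparity proportional to its depth to synthesize the second eye’s view, then fill in the holes. A toy sketch of the shifting step (the linear depth-to-disparity mapping and all names are my own simplification, not the actual studio pipeline):

```python
def stereo_from_depth(frame, depth, max_shift=8):
    """Synthesize a second-eye view from a frame plus a per-pixel depth map
    (0.0 = far, 1.0 = near) by shifting each pixel horizontally.
    Toy depth-image-based rendering; real pipelines are far more involved."""
    h, w = len(frame), len(frame[0])
    right = [[None] * w for _ in range(h)]  # None marks a disocclusion hole
    for y in range(h):
        for x in range(w):
            nx = x - int(depth[y][x] * max_shift)  # nearer pixels shift more
            if 0 <= nx < w:
                right[y][nx] = frame[y][x]
    # Holes (still None) are spots no source pixel landed on -- filling
    # them in plausibly is a big chunk of the manual conversion labor.
    return right
```

    A pixel marked as near jumps several columns while its far background stays put, which is exactly why uncovered background regions need to be painted in by hand.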


  • They still probably need a ton of customization and tuning at the driver level and beyond, which open source allows for.

    I am sure there is plenty of existing “super computer”-grade software in the wild already, but a majority of it probably needs quite a bit of hacking to get running smoothly on newer hardware configurations.

    As a matter of speculation, the engineers and scientists that build these things are probably hyper-picky about how some processes execute and need extreme flexibility.

    So, I would say it’s a combination of factors that make Linux a good choice.


  • Vyvanse wasn’t a pleasant experience for me. It felt like it crushed all of my dopamine receptors and life got really boring, really quick. (Obviously, this isn’t everyone’s experience, but it was mine.) It took a few weeks for my brain to recover.

    I didn’t try switching because I wanted to (Adderall works just fine for me); it’s because the Adderall supply was low in my area for a bit and I wanted to find an alternative.


  • These findings suggest LLMs can internalize human-like cognitive biases and decision-making mechanisms beyond simply mimicking training data patterns

    lulzwut? LLMs aren’t internalizing jack shit. If they exhibit a bias, it’s because of how they were trained. A quick theory would be that the interwebs is packed to the brim with stories of “all in” behaviors intermixed with real strategy, fiction or otherwise. I speculate that there are more stories available in forums of people winning by doing stupid shit than there are of people losing because of stupid shit.

    They exhibit human bias because they were trained on human data. If I told the LLM to make only strictly probability-based decisions favoring safety (and it didn’t “forget” context and ignored any kind of “reasoning”), the odds might be in its favor.
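    A “strictly probability-based, safety-favoring” policy like the one I mean could be as simple as an expected-value check plus a risk cap (the function name and threshold here are hypothetical, just to illustrate the contrast with “all in” vibes):

```python
def should_bet(p_win: float, payout: float, stake: float,
               max_loss_prob: float = 0.4) -> bool:
    """Bet only when expected value is positive AND the chance of
    losing stays under a fixed risk cap (safety-favoring)."""
    ev = p_win * payout - (1 - p_win) * stake  # expected value of the bet
    return ev > 0 and (1 - p_win) <= max_loss_prob

# Going "all in" on a coin flip with even payout fails the EV check:
should_bet(0.5, 100, 100)  # EV = 0, not > 0 -> False
# A favorable, low-risk bet passes both checks:
should_bet(0.7, 100, 100)  # EV = 40, loss prob 0.3 -> True
```

    No forum lore, no vibes, just arithmetic, which is precisely what the training data doesn’t reward a model for doing.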

    Sorry, I will not read the study because of that one sentence in its summary.