• pixxelkick@lemmy.world
    1 month ago

    Not at a tremendously lower power cost, anyway. My laptop draws 35 W.

    Five minutes of GPT genuinely consumes less power than several hours of my laptop being actively used to search manually. Laptops burn non-trivial amounts of power when in use; anyone who has held a laptop on their lap can attest that they aren't exactly running cold.

    Hell, even a whole day of using your mobile phone is non-trivial in power consumption; phones draw 8-10 W or so.

    Using GPT for dumb shit is arguably unethical, but only in the sense that baking cookies in the oven is. Are you going to start yelling at people for making cookies? Baking one batch of cookies burns WAAAY more energy than fucking around with GPT, and yet I don't see anyone bashing people for using their ovens as a hobby.

    There's no good argument against what I did; by all metrics it genuinely was the ethical choice.
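    For what it's worth, the back-of-envelope numbers above can be sketched out. The laptop and phone wattages are the ones from this comment; the oven wattage and the ~3 Wh-per-query figure are rough assumptions, not measured values.

    ```python
    # Rough client-side energy comparison. LAPTOP_W and PHONE_W come from the
    # comment above; OVEN_W and WH_PER_QUERY are ballpark assumptions.
    LAPTOP_W = 35        # active laptop draw, watts (from the post)
    PHONE_W = 9          # midpoint of the 8-10 W phone estimate (from the post)
    OVEN_W = 2000        # typical electric oven element (assumed)
    WH_PER_QUERY = 3.0   # per-query ChatGPT estimate (assumed; estimates vary widely)

    def energy_wh(power_w: float, hours: float) -> float:
        """Energy in watt-hours for a device drawing power_w for `hours`."""
        return power_w * hours

    laptop_search = energy_wh(LAPTOP_W, 3)   # 3 h of manual searching -> 105 Wh
    phone_day = energy_wh(PHONE_W, 4)        # 4 h of active phone use -> 36 Wh
    cookie_batch = energy_wh(OVEN_W, 0.5)    # one 30-minute bake -> 1000 Wh
    gpt_session = 10 * WH_PER_QUERY          # ~10 queries in 5 minutes -> 30 Wh

    print(laptop_search, phone_day, cookie_batch, gpt_session)
    ```

    On these (assumed) numbers the GPT session comes in under the laptop search, and the cookies dwarf both.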

    • jjjalljs@ttrpg.network
      1 month ago

      Client side power usage for conventional Internet search is about the same as chatgpt. I’m not sure why you’re talking about laptop power usage.

      Conventional search is less likely to lie, though.

      • pixxelkick@lemmy.world
        1 month ago

        The server-side power for 5 minutes of ChatGPT, vs. the power burned browsing the internet to find the info on my own (which would have taken hours of manual sifting).

        That's the comparison.

        Even though the server-side power consumption to run GPT is very high, it's not so high that it exceeds hours and hours of laptop usage.
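        That break-even point can be sketched directly; the per-query figure here is an assumption, and published estimates vary by an order of magnitude:

        ```python
        # Break-even: hours of 35 W laptop use that equal the server-side energy
        # of a short GPT session. WH_PER_QUERY is assumed, not a measured figure.
        LAPTOP_W = 35
        WH_PER_QUERY = 3.0
        QUERIES = 10                          # a busy 5-minute session

        session_wh = QUERIES * WH_PER_QUERY   # total server-side energy, Wh
        break_even_h = session_wh / LAPTOP_W  # laptop hours with equal energy

        print(round(break_even_h, 2))         # any search longer than this favours GPT
        ```

        Under these assumptions the crossover is under an hour of laptop use, so an hours-long manual search loses.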

        • jjjalljs@ttrpg.network
          1 month ago

          Oh, I see the point you’re making.

          I assumed that the information was there to be found, and a regular search would have returned it. Thus it would not have taken hours.

          Personally I don’t really trust the LLMs to synthesize disparate sources.

          • pixxelkick@lemmy.world
            1 month ago

            Personally I don’t really trust the LLMs to synthesize disparate sources.

            The #1 best use case for LLMs is using them as extremely powerful fuzzy searchers on very large datasets, so stuff like hunting down published papers on topics.

            Don't use their output as the basis for reasoning; use it to find the original articles.

            For example, as a software dev, I use them often to search for the specific documentation for what I need. I then go look at the actual documentation, but the LLM is exceptionally fast at locating the document itself for me.

            Basically, the key is using them as a powerful tool to look up and find resources, and that's why I was able to find documentation on my pet's symptoms so fast. It would have taken me ages to find those esoteric published papers on my own; there's so much to sift through, especially when many papers cover huge amounts of info and what I'm looking for is one small piece in one paper.

            But with an LLM I can trim down the search space instantly to a way way smaller set, and then go through that by hand. Thousands of papers turn into a couple in a matter of seconds.

    • wizardbeard@lemmy.dbzer0.com
      1 month ago

      Querying the LLM is not where the dangerous energy costs have ever been. It’s the cost of training the model in the first place.

      • pixxelkick@lemmy.world
        1 month ago

        The training costs effectively enter a “divide by infinity” argument given enough time.

        Models are still being trained at this time, but eventually you hit a point where a given model can be used in perpetuity.

        Costs to train go down, whereas the usability of that model stretches on to effectively infinity.

        So you hit a point where you have a one-time energy cost to make the model, and an infinite timescale to use it on.
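        The amortization argument can be sketched with an illustrative training figure; the ~1,300 MWh number is a commonly cited GPT-3-scale estimate, used here only as an assumption:

        ```python
        # One-time training energy, amortized over every query the model ever
        # serves. TRAIN_MWH is an assumed GPT-3-scale figure, for illustration.
        TRAIN_MWH = 1_300
        TRAIN_WH = TRAIN_MWH * 1_000_000      # convert MWh -> Wh

        def amortized_wh_per_query(total_queries: int) -> float:
            """Training energy attributed to each query; shrinks as usage grows."""
            return TRAIN_WH / total_queries

        for n in (10**9, 10**10, 10**11):
            print(f"{n:>15,} queries -> {amortized_wh_per_query(n):.3f} Wh/query")
        ```

        The per-query training overhead falls toward zero as lifetime query volume grows, which is the "divide by infinity" point, though it only holds if the model really is reused rather than replaced.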

        • Auth@lemmy.world
          1 month ago

          Costs to train are going up exponentially. In a few years, corps are going to want a return on their investment, and they're going to squeeze consumers.