• acosmichippo@lemmy.world
    8 months ago

    I think the battery system that’s best for everyone would be user-replaceable batteries. That way you can have an extra battery on hand to swap in as needed, or even extra-capacity batteries that make your phone a little thicker for people who are okay with that.

    Those of us who do actually prefer thinner, lighter phones can still have them (maybe with a slight increase in thickness to accommodate the attachment mechanisms). Plus bigger batteries are a huge waste of resources if the capacity isn’t going to be used.

  • x00z@lemmy.world

    It feels like yesterday some guy was arguing against me here on Lemmy about my personal choice of wanting a longer battery life.

    WELL LOOK AT ME NOW BRO

      • WoodScientist@lemmy.world

        So here’s the path that you’re envisioning:

        1. Someone wants to send you a communication of some sort. They draft a series of bullet points or short version.

        2. They have an LLM elaborate it into a long-form email or report.

3. They send the long-form to you.

        4. You receive it and have an LLM summarize the long-form into a short-form.

        5. You read the short form.

        Do you realize how stupid this whole process is? The LLM in step (2) cannot create new useful information from nothing. It is simply elaborating on the bullet points or short version of whatever was fed to it. It’s extrapolating and elaborating, and it is doing so in a lossy manner. Then in step (4), you go through ANOTHER lossy process. The LLM in step (4) is summarizing things, and it might be removing some of the original real information the human created in step (1), rather than the useless fluff the LLM in step (2) added.

        WHY NOT JUST HAVE THE PERSON DIRECTLY SEND YOU THE BULLET POINTS FROM STEP (1)???!!

This is idiocy. Pure and simple idiocy. We start with a series of bullet points, and we end with a series of bullet points, and it’s translated through two separate lossy translation matrices. And we pointlessly burn huge amounts of electricity in the process.

        This is fucking stupid. If no one is actually going to read the long-form communications, the long-form communications SHOULDN’T EXIST.

    • model_tar_gz@lemmy.world

      No. Strictly and technically speaking, LLMs absolutely fall under the category of AI. You’re thinking of AGI, which is a subset of AI, and which LLMs will be a necessary but insufficient component of.

I’m an AI Engineer; in my circles, I’ve taken to calling AI “Algorithmic Intelligence” rather than “Artificial Intelligence.” It’s a far more fitting term for what is happening. But until the Yanns and Ngs and Hintons of the field start calling it that, we’re stuck with the old name.