• theunknownmuncher@lemmy.world · 1 day ago

    Nope! You don’t know what you’re talking about. At all. But have fun running a 1.6-trillion-parameter model on CPU at basically 0 tokens per second at scale, MoE or not.

  • KingRandomGuy@lemmy.world · 1 hour ago

      You can actually get kind of acceptable performance on CPU alone, but you need rather specific CPUs, like Sapphire Rapids (SPR) or newer Intel Xeons. These support AMX, which is almost like a mini tensor core, so you can get decent throughput in TFLOPS out of Granite Rapids (GNR) Xeons. Memory bandwidth with all channels populated is also acceptable, something like ~800 GB/s per socket with maxed-out MRDIMMs, which is not too far behind consumer GPUs like the 3090 and 4090.

      Not anywhere near the performance of real GPUs of course, and not something acceptable for scale or production workloads, but good enough for local inference.
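      The bandwidth argument above can be sanity-checked with a rough calculation: decode throughput for a memory-bound model is capped at roughly memory bandwidth divided by the bytes of active weights streamed per generated token. The ~37B active-parameter count and 8-bit quantization below are illustrative assumptions, not figures from this thread:

```python
# Back-of-envelope, memory-bandwidth-bound decode estimate: each generated
# token must stream every *active* weight from memory once, so throughput
# is capped at bandwidth / bytes_per_token. All numbers are illustrative.

def tokens_per_second(bandwidth_gb_s: float, active_params_billions: float,
                      bytes_per_param: float) -> float:
    bytes_per_token = active_params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Hypothetical MoE with ~37B active parameters at 8-bit quantization:
cpu_rate = tokens_per_second(800, 37, 1)   # maxed-out Xeon socket w/ MRDIMMs
gpu_rate = tokens_per_second(936, 37, 1)   # RTX 3090-class bandwidth
print(f"CPU ~{cpu_rate:.1f} tok/s, GPU ~{gpu_rate:.1f} tok/s")
```

      On these assumed numbers the CPU socket lands in the same ballpark as a single consumer GPU for one user, which is the comment's point: fine for local inference, not for production.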

    • theunknownmuncher@lemmy.world · 1 day ago

        You’ve proved my point that you don’t know what you’re talking about by blindly linking to the git repo. Couldn’t find any source that supports your claim? I wonder why.

        Sure, you can serve one request at a time to one patient user at a slow tokens-per-second rate, which makes running locally viable, but there is no RAM with the bandwidth to run this model at scale. Even flash offload would be incredibly slow on CPU with multiple requests. You’d need the high bandwidth of VRAM, and running across multiple GPUs in a scalable way requires extremely high-bandwidth interconnects between them.
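        The concurrency problem can be sketched the same way as the single-user estimate: if a CPU host can't batch requests effectively, every in-flight request streams the active weights separately, so the single-stream rate gets divided among users. The parameter count, quantization, and user count below are illustrative assumptions:

```python
# Illustrative sketch of why CPU serving collapses under concurrency:
# without effective batching, each concurrent request streams the active
# weights on its own, splitting the single-stream decode rate across users.

def per_user_tok_s(bandwidth_gb_s: float, active_params_billions: float,
                   bytes_per_param: float, users: int) -> float:
    single_stream = bandwidth_gb_s * 1e9 / (active_params_billions * 1e9 * bytes_per_param)
    return single_stream / users

# 800 GB/s socket, hypothetical ~37B active params at 8-bit, 50 users:
rate = per_user_tok_s(800, 37, 1, 50)
print(f"~{rate:.2f} tok/s per user")   # far below interactive speed
```

        GPUs escape this mainly by batching, which amortizes the weight reads across many requests, but that in turn demands the compute and interconnect bandwidth the comment describes.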

      • ag10n@lemmy.world · 1 day ago

          Thank you for proving my point. It can be run on a CPU.

          “It’s slow, it’s inefficient”, but it still runs.

          It’s a foundational model, just like R1 was.

        • theunknownmuncher@lemmy.world · 1 day ago

            “Yes, you can run it at scale.”

            “at scale”

            Shift those goalposts! We went from “at scale” to “it still runs”.

          • ag10n@lemmy.world · 1 day ago

              Quote me in full.

              You can run it at scale, on Huawei. You can also run it on a CPU.

            • theunknownmuncher@lemmy.world · 1 day ago

                “Quote me in full.”

                Okay!

                “You can run it at scale, on Huawei. You can also run it on a CPU.”

                Yeah, that is absolutely not what you argued.

                Anyway, you’ve conceded that I’m correct that you cannot run it at scale on a CPU, because running on CPU is too slow and inefficient, and that they instead use GPU hardware like Huawei GPUs to run the model at scale. That’s good enough for me!

              • Diurnambule@jlai.lu · 16 hours ago

                  Okay, then you just screenshot the part after the initial argument. Dude, put in more effort.

              • ag10n@lemmy.world · 24 hours ago

                  Your interpretation of the English language has won you an argument! Huzzah.

                  So good of you to concede it runs on a CPU.