• Avid Amoeba@lemmy.ca · 16 points · 1 day ago

    I've been using Qwen 3.x for a while now as a local LLM with search capability. The 3.5 and 3.6 models are great and run very fast.
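    If it helps anyone get started, here's a minimal sketch of driving a local model through the Ollama Python client. The model tag is an assumption; use whatever `ollama list` shows on your machine.

    ```python
    # Minimal local chat via the Ollama Python client.
    # Assumes the Ollama daemon is running and a Qwen model has already
    # been pulled; the exact tag "qwen3" is an assumption.
    import ollama

    response = ollama.chat(
        model="qwen3",  # hypothetical tag; substitute your own
        messages=[{"role": "user", "content": "What's new in local LLMs?"}],
    )
    print(response["message"]["content"])
    ```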

    • humanspiral@lemmy.ca · 2 points · 9 hours ago

      3.6 27B is probably the most powerful and efficient model for its size out there. Qwen also has a history of leveraging DeepSeek (DeepSeek has built small models with Qwen as the base), and Alibaba is the main hosting service for DeepSeek. Alibaba/Qwen is in talks to invest in DeepSeek at the moment.

      • Avid Amoeba@lemmy.ca · 1 point · 8 hours ago

        Yeah. The 80B Coder-Next runs at about the same speed on my hardware too. I don't know if it's any better than 3.6 27B.

    • sp3ctr4l@lemmy.dbzer0.com · 8 points · 19 hours ago

      I got Qwen 3.5 running on a Steam Deck.

      It ain’t exactly blazing fast, but it does actually work.

      (It's reasonably fast if you go down to the 2B-param model; I can get the 9B-param variant working too, though that makes Steam Decky very hot and bothered.)

      Yeah, you absolutely do not need Nvidia hardware to run an LLM, but here in the English-speaking West we get blasted with their propaganda suggesting otherwise all the time.

      Because if you don’t need Nvidia, well, then, this whole AI bubble looks a lot more bubbly.
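      To make the point concrete, here's a sketch of running a small quantized model entirely on CPU with llama-cpp-python, no Nvidia hardware involved. The GGUF filename is an assumption; any small quantized model you've downloaded will do.

      ```python
      # Pure-CPU inference with llama-cpp-python: n_gpu_layers=0 keeps
      # every layer on the CPU, so no CUDA/Nvidia GPU is needed at all.
      from llama_cpp import Llama

      llm = Llama(
          model_path="qwen-2b-instruct-q4_k_m.gguf",  # hypothetical file name
          n_ctx=2048,      # modest context window to fit limited RAM
          n_gpu_layers=0,  # all layers on CPU, e.g. on a Steam Deck's APU
      )
      out = llm("Q: Name one handheld that can run an LLM. A:", max_tokens=32)
      print(out["choices"][0]["text"])
      ```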

      • Avid Amoeba@lemmy.ca · 5 points · 14 hours ago

        Take good care of your hardware! It's not like two years ago, when you could buy stuff off the shelf for reasonable prices. :D

        • sp3ctr4l@lemmy.dbzer0.com · 2 points · 14 hours ago

          My Steam Deck is my child.

          Maybe if I can get it to run a ‘good enough’ LLM, and also a robotics kinematics suite…

          I can just start building DOG, with a Steam Deck for a face, instead of a Combine scanner bot.

          • los0220@lemmy.world · 2 points · 9 hours ago

            Gemma 4 seems nice for local usage, way faster than Qwen models.

            I was able to run the 27B Gemma on my PC, where the 14B Qwen was too slow due to CPU offload.
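            (CPU offload here means only part of the model fits in VRAM and the remaining layers run on the CPU. A hedged llama-cpp-python sketch; the filename and layer count are assumptions for illustration:)

            ```python
            # Partial GPU offload: layers that don't fit in VRAM stay on
            # the CPU, which is what makes a too-big model crawl.
            from llama_cpp import Llama

            llm = Llama(
                model_path="gemma-27b-q4_k_m.gguf",  # hypothetical file name
                n_gpu_layers=24,  # assumption: as many layers as VRAM allows;
                                  # -1 offloads everything, 0 is pure CPU
                n_ctx=4096,
            )
            ```

            How fast a given model runs depends on how many of its layers actually fit in VRAM and on its architecture, so a well-quantized 27B outrunning a 14B is plausible.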

            • percent@infosec.pub · 1 point · 8 hours ago

              +1, exactly the same experience. Except Gemma4:26B really sucks with OpenCode. It works great with Pi, though.

        • sp3ctr4l@lemmy.dbzer0.com · 1 point · 18 hours ago

          Sorry, I’m not entirely sure what you mean.

          Did you mean to say:

          “And need to have the best consumer GPU on the market, to run an LLM.”

          … likely alluding to an RTX 5090?

          So you'd be saying that the idea that everyone needs extremely expensive hardware to run an LLM is basically bullshit?

          • Diurnambule@jlai.lu · 2 points · 14 hours ago

            Hello, no, sorry, autocorrect and typing fast do that to my posts. I wanted to say that Nvidia is already the worst option for a consumer graphics card, since AMD made a card with 20 GB of RAM that can run most open-weight models.