This is another big win for the red team, at least for me. They developed a “fully open” family of 3B-parameter models trained from scratch on AMD Instinct™ MI300X GPUs.

AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) […]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B […].

As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png), the model outperforms the other current “fully open” models and comes close to the open-weight-only ones.

One more step forward. Thank you, AMD.

PS: I’m not doing AMD propaganda, but thanks to them for helping and contributing to the open-source world.

  • 1rre@discuss.tchncs.de · 6 hours ago

    Every AI model outperforms every other model in the same weight class when you cherry-pick the metrics… Although it’s always good to have more to choose from.

  • Ulrich@feddit.org · 3 hours ago

    I don’t know why open sourcing malicious software is worthy of praise but okay.

      • Ulrich@feddit.org · 2 hours ago

        “What’s malicious about AI and LLMs?”

        Have you been living under a rock? At best it is useless, and at worst it is detrimental to society.

        • ZeroOne@lemmy.world · 1 hour ago

          So in a nutshell, it’s malicious because you said so

          Ok gotcha Mr/Ms/Mrs TechnoBigot

        • Domi@lemmy.secnd.me · 2 hours ago

          I disagree, LLMs have been very helpful for me and I do not see how an open source AI model trained with open source datasets is detrimental to society.

  • TheGrandNagus@lemmy.world · 17 hours ago

    Properly open source.

    The model, the weights, the dataset, etc.: every part of this seems to be open. It’s one of the very few models that comply with the Open Source Initiative’s definition of open-source AI.

    • foremanguy@lemmy.ml (OP) · 17 hours ago

      Look at the picture in my post.

      There were other fully open models before, but they were well below the “fake” open-source models like Gemma or Llama. Instella comes almost to the same level: a great improvement.

    • foremanguy@lemmy.ml (OP) · 8 hours ago

      Unlike the traditional open models (like Llama, Qwen, Gemma…) that are only open-weight, this model claims:

      Fully open-source release of model weights, training hyperparameters, datasets, and code

      That makes it different from other big-tech “open” models. Though other “fully open” models do exist, like GPT-Neo and more.

    • foremanguy@lemmy.ml (OP) · 8 hours ago

      Don’t know if this test is a good representation of the two AIs, but in this case it seems pretty promising; the only thing missing is a higher-parameter model.

  • Zarxrax@lemmy.world · 16 hours ago

    And we are still waiting on the day when these models can actually be run on AMD GPUs without jumping through hoops.

    • grue@lemmy.world · 13 hours ago

      In other words, waiting for the day when antitrust law is properly applied against Nvidia’s monopolization of CUDA.

    • foremanguy@lemmy.ml (OP) · 8 hours ago

      That is an improvement. If the model is trained properly with ROCm, it should be easier to run on AMD GPUs.

  • BitsAndBites@lemmy.world · 13 hours ago

    Nice. Where do I find the memory requirements? I have an older 6 GB GPU, so I’ve been able to play around with some models in the past.
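A rough back-of-the-envelope for the memory question (a sketch, not official numbers: it assumes ~3B parameters and counts only the weights, ignoring activations and KV cache, which add more on top):

```python
# Estimate the VRAM needed just to hold a model's weights at common precisions.
# This ignores activations and KV cache, so treat the results as lower bounds.

def weight_vram_gb(n_params: float, bytes_per_param: float) -> float:
    """Gigabytes needed to store the weights alone."""
    return n_params * bytes_per_param / 1e9

N = 3e9  # ~3 billion parameters (an Instella-class model)

for name, bpp in [("fp16/bf16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"{name:>9}: ~{weight_vram_gb(N, bpp):.1f} GB")
# fp16/bf16: ~6.0 GB, 8-bit: ~3.0 GB, 4-bit: ~1.5 GB
```

So on a 6 GB card, fp16 weights alone would already fill the VRAM; an 8-bit or 4-bit quantized version should fit with room left over for the runtime overhead.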

  • Canadian_Cabinet@lemmy.ca · 16 hours ago

    I know it’s not the point of the article, but man, that AI-generated image looks bad. Like, who approved that?