• kadu@lemmy.world · 19 upvotes · 7 hours ago

    LLMs don’t have any awareness of their internal state, so there’s no way for them to recognize something as a gap in their knowledge.

    • Doorknob@lemmy.world · 14 upvotes · edited · 7 hours ago

      Took me ages to understand this. I’d thought, “If an AI doesn’t know something, why not just say so?”

      The answer is: that wouldn’t make sense, because an LLM doesn’t know ANYTHING.

    • figjam@midwest.social · 1 upvote, 3 downvotes · 7 hours ago

      Wouldn’t it make sense for an AI to provide a confidence level, though?

      I’ve got 3 million bits of info on this topic, but only 4 of them lead to this solution. Confidence level = 1.5%.

      • kadu@lemmy.world · 13 upvotes · 7 hours ago

        It doesn’t have “3 million bits of info” on a specific topic, and even if it did, it couldn’t directly measure that. It’s worth reading a bit about how LLMs work under the hood; it’s somewhat dense if you’re new to the concepts, but you come out knowing a lot more about what to expect when using them, what the limitations actually are, and how to use them better if you decide to go that route.
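
        The closest thing an LLM exposes to a “confidence level” is the probability it assigns to each possible next token, which is about likely wording, not about how much verified information it holds on a topic. A minimal sketch of what that looks like (assuming Hugging Face transformers, PyTorch, and gpt2 as a stand-in model, all chosen here just for illustration):

        ```python
        # Sketch: the only built-in "confidence" is the next-token probability
        # distribution, not a measure of topic-level knowledge.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        prompt = "The capital of Australia is"
        inputs = tokenizer(prompt, return_tensors="pt")

        with torch.no_grad():
            logits = model(**inputs).logits   # shape: (1, seq_len, vocab_size)

        # Probabilities over the vocabulary for the token that comes next
        next_token_probs = torch.softmax(logits[0, -1], dim=-1)
        top = torch.topk(next_token_probs, k=5)
        for prob, token_id in zip(top.values, top.indices):
            print(f"{tokenizer.decode(int(token_id)):>12}  {prob.item():.3f}")

        # A high probability means "this token usually follows that phrasing",
        # not "the model has checked facts about this topic".
        ```

        So you can read off how strongly the model prefers one continuation over another, but there is no counter of “facts known about X” anywhere in there to turn into the kind of confidence score you’re describing.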