• vacuumflower@lemmy.sdf.org
    1 day ago

    Thus, a user receives an answer that has already undergone a filtering of sorts.

    Wouldn’t this be an expected trait of a system that predicts the next most likely token from a lossy compression of specific datasets, plus other lossy optimizations?
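
    A toy sketch of what “predicting the next most likely token” means here (hypothetical scores, not any real model): with greedy decoding, whatever the training data or its filtering down-weighted simply never gets emitted.

    ```python
    import math

    # Toy illustration (not a real LM): a "model" that scores candidate
    # next tokens, softmax-normalizes the scores, and greedily picks
    # the single most likely one.
    def softmax(scores):
        m = max(scores.values())
        exps = {t: math.exp(s - m) for t, s in scores.items()}
        z = sum(exps.values())
        return {t: e / z for t, e in exps.items()}

    def next_token(scores):
        probs = softmax(scores)
        # Greedy decoding: always emit the argmax token, so any answer
        # the dataset (or its curation) down-weighted is filtered out.
        return max(probs, key=probs.get)

    # Hypothetical scores for three candidate continuations:
    scores = {"safe_answer": 2.0, "edgy_answer": 0.5, "refusal": 1.0}
    print(next_token(scores))  # -> safe_answer
    ```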

    • Eq0@literature.cafe
      1 day ago

      Depends. For an expert, that is self-evident (even if it might not be clear which biases have been incorporated). But that is not how it has been marketed: ChatGPT and similar tools are perceived as answering “the truth” at all times, and that skews users’ understanding of the answers. Studying how deeply the answers are affected by the developers’ biases is the focus of their research, and a worthwhile undertaking to avoid overlooking something important.