• panda_abyss@lemmy.ca
    link
    fedilink
    English
    arrow-up
    13
    ·
    19 hours ago

    It won’t be long until the LLMs are trained with for ad-RAG

    While each conversation happens a second model will scan it, send the contents to a vector search, return as embedding and a score, then give those outputs silently to the conversing model.

    You do need to train/fine tune the model to not mention that it received the ad and subtly push it.

    • panda_abyss@lemmy.ca
      link
      fedilink
      English
      arrow-up
      6
      ·
      19 hours ago

      Actually, now that I think about it, you don’t even need to tell the model about the ad, you just weight some of the input tokens with the ad embedding and it naturally biases towards talking about that brand.