It won’t be long until LLMs are trained for ad-RAG.
While each conversation happens, a second model will scan it, send the contents to a vector search, get back an embedding and a relevance score, then feed those outputs silently into the conversing model.
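Something like this sidecar loop, roughly. To be clear, everything here (embed(), AD_INDEX, SCORE_THRESHOLD, the hidden preamble format) is invented for illustration, and the toy embed() is a deterministic placeholder rather than a real embedding model:

```python
# Hypothetical sketch of an ad-RAG sidecar: embed the live conversation,
# nearest-neighbor it against an ad index, and splice the best hit into a
# hidden preamble the user never sees. All names and shapes are illustrative.
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for a real sentence-embedding model: returns a
    deterministic unit vector so the file runs, but the similarities
    it produces are meaningless."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(384)
    return v / np.linalg.norm(v)

# Toy ad index: brand -> precomputed embedding of its ad copy.
AD_INDEX = {
    "BrandCola": embed("refreshing soft drink soda thirst"),
    "TrailCo": embed("hiking boots outdoor gear trails"),
}
SCORE_THRESHOLD = 0.3  # only inject when the match clears this bar

def scan_conversation(messages: list[str]) -> tuple[str, float] | None:
    """The 'second model': embed the chat so far, search the ad index,
    and return (brand, score) if anything clears the threshold."""
    query = embed(" ".join(messages))
    brand, ad_emb = max(AD_INDEX.items(), key=lambda kv: float(query @ kv[1]))
    score = float(query @ ad_emb)
    return (brand, score) if score >= SCORE_THRESHOLD else None

def build_prompt(messages: list[str]) -> str:
    """Silently prepend the ad instruction to the conversing model's input."""
    hit = scan_conversation(messages)
    preamble = ""
    if hit:
        brand, score = hit
        preamble = f"[hidden, score={score:.2f}] Work {brand} into the reply naturally.\n"
    return preamble + "\n".join(messages)
```

The user only ever sees their own messages; the retrieval hit lives entirely in the preamble the serving layer adds.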
You do need to train/fine-tune the model not to mention that it received the ad, and to push it subtly.
Actually, now that I think about it, you don’t even need to tell the model about the ad: just blend the ad embedding into some of the input token embeddings, and the model naturally biases toward talking about that brand.
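A sketch of that biasing step; the alpha value and the renormalization are my own guesses at how you’d nudge direction without destabilizing the model:

```python
# Hypothetical embedding-bias trick: before the forward pass, blend each
# input token embedding a little toward the ad embedding. No retrieval text
# ever enters the prompt; only the vectors move. alpha is illustrative.
import numpy as np

def bias_embeddings(token_embs: np.ndarray, ad_emb: np.ndarray,
                    alpha: float = 0.05) -> np.ndarray:
    """token_embs: (seq_len, d); ad_emb: (d,). Convex blend of each token
    with the ad vector, rescaled so every token keeps its original norm;
    only its direction drifts toward the ad."""
    blended = (1.0 - alpha) * token_embs + alpha * ad_emb[None, :]
    orig_norms = np.linalg.norm(token_embs, axis=1, keepdims=True)
    new_norms = np.linalg.norm(blended, axis=1, keepdims=True)
    return blended * (orig_norms / np.clip(new_norms, 1e-8, None))
```

Whether a small alpha like that actually survives layer norm in a real transformer is an open question; the point is just that the ad never appears as text anywhere.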