

During setup, Atlas pushes very aggressively for you to turn on “memories” (where it tracks and stores everything you do and uses it to train an AI model about you).
I wonder: do memories really train a model about the user? Or are they just shoved into the context window strategically, perhaps selected by a small, performant model running in the background based on relevance to the current context window?
Training millions of mini models, one per person, would be really interesting, and I don’t think I’ve noticed anything saying that is happening yet, even though it seems like a logical idea.
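If it’s the simpler retrieval story, the mechanics could be as basic as embedding-based relevance lookup: score stored memories against the current context, take the top few, and prepend them to the prompt. Here’s a minimal sketch of that idea in Python (purely hypothetical: the `embed()` helper is a toy stand-in for a small embedding model, and the memory strings are made up, not anything OpenAI has documented):

```python
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    # Toy deterministic-ish "embedding" so the sketch runs without a real model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

# Hypothetical stored memories, each with a precomputed embedding.
memories = [
    "User prefers concise answers.",
    "User is researching browser privacy settings.",
    "User's favorite editor is vim.",
]
memory_vecs = np.stack([embed(m) for m in memories])

def select_memories(current_context: str, k: int = 2) -> list[str]:
    """Pick the k memories most similar to the current context window."""
    query = embed(current_context)
    scores = memory_vecs @ query          # cosine similarity (vectors are unit-norm)
    top = np.argsort(scores)[::-1][:k]
    return [memories[i] for i in top]

# Selected memories get prepended to the prompt; no per-user training involved.
prompt_prefix = "\n".join(select_memories("How do I stop this browser tracking me?"))
print(prompt_prefix)
```

In a setup like that, nothing ever updates model weights; the “memory” is just retrieval plus prompt assembly.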



Precision, nuance, and up-to-the-moment contextual understanding are all missing from the “intelligence.”