

The other thing that he doesn’t understand (and most “AI” advocates don’t either) is that LLMs have nothing to do with facts or information. They’re just probabilistic models that pick the next word(s) based on context.
That’s a massive oversimplification. It’s like saying humans don’t remember things, we just have neurons that fire based on context.
LLMs do actually “know” things. They work based on tokens and weights, which are the nodes and edges of a high-dimensional graph. The LLM traverses this graph as it processes inputs and generates new tokens.
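To make that concrete, here’s a toy sketch of what “weights turning context into the next token” looks like. Everything in it (the five-word vocabulary, the random embedding, the single projection) is invented for illustration; a real LLM has billions of learned weights and stacks of transformer layers, but the shape of the computation is the same: context in, probability distribution over the next token out.

```python
import numpy as np

# Toy next-token predictor. All numbers are random stand-ins for
# illustration; a real LLM learns these weights from training data.
vocab = ["Paris", "Rome", "tower", "the", "in"]
rng = np.random.default_rng(0)

d_model = 8
embed = rng.normal(size=(len(vocab), d_model))   # token embeddings (the "nodes")
w_out = rng.normal(size=(d_model, len(vocab)))   # output projection (the "edges")

def next_token_probs(context_ids):
    # Combine the context embeddings into a hidden state, then score
    # every vocabulary token and softmax the scores into probabilities.
    hidden = embed[context_ids].mean(axis=0)
    logits = hidden @ w_out
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

context = [vocab.index("tower"), vocab.index("in")]
for tok, p in zip(vocab, next_token_probs(context)):
    print(f"{tok}: {p:.3f}")
```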
You can do brain surgery on an LLM and change what it knows; we have a very good understanding of how this works. You can change a single link and the model will believe the Eiffel Tower is in Rome, and it’ll describe how you have a great view of the Colosseum from the top.
The problem is that it’s very complex, and researchers are currently developing new math to let us do this in a useful way.
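If you want the source for the Eiffel Tower example, it’s the ROME paper (Meng et al., “Locating and Editing Factual Associations in GPT”). Here’s the core trick, stripped of everything that makes it actually work at scale: treat a fact as a key->value mapping stored in one MLP weight matrix, and overwrite it with a rank-one update. The vectors below are random stand-ins, not real model activations (real ROME also adds a covariance constraint so neighboring facts survive the edit).

```python
import numpy as np

# Sketch of rank-one model editing: a fact lives in an MLP weight
# matrix W as a key->value mapping, and a rank-one update rewrites it.
# All vectors here are random stand-ins, not real model activations.
rng = np.random.default_rng(1)
d = 16
W = rng.normal(size=(d, d))   # one MLP weight matrix inside the model

k = rng.normal(size=d)        # key: "The Eiffel Tower is located in ..."
v_new = rng.normal(size=d)    # value that decodes to "Rome" instead of "Paris"

# After the update, W_edited @ k == v_new, while directions orthogonal
# to k are left almost untouched; one "link" changed, one fact changed.
W_edited = W + np.outer(v_new - W @ k, k) / (k @ k)

print(np.allclose(W_edited @ k, v_new))   # True: the fact now reads out "Rome"
```

The hard part (the “new math” being developed) is finding k and v_new from real model activations, and making many such edits without them stepping on each other.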
No, that’s the rationalization. The administration wants to use Palantir to collect data on American citizens, including giving ICE mobile camera installations to do face tracking. They want to do a 1984, and they don’t want any speed bumps from the states.
Also, AI companies gave Trump money, and there’s talk of restricting their data centers’ access to power and water to prioritize the people living nearby.