

To illustrate what I mean more clearly, look at the top comments/replies for the NASA Artemis posts, as an example.
…It’s basically all conspiracy theorists and government skeptics.
Twitter is focusing the Artemis posts on them because that’s what they want to see, and what’s most engaging for them.
In the EFF’s case, I’m not just talking about Musk’s influence. The algorithm will only show the EFF to users who would be highly engaged by it. E.g., angry skeptics who wouldn’t be swayed by the EFF anyway, or fans who already agree with the EFF. It’s literally not going to show the EFF to people who need to see it, as Twitter’s metrics would show it as unengaging.
This is the “false image” I keep trying to dispel. Twitter is less and less the “even spread” of exposure people think it is (and that it sort of used to be), and more and more a hyper-focused bubble of what you want to hear, and only what you want to hear. All the changes Musk is making amplify that. Maybe that’s fine for some orgs, but there’s no point in the EFF staying in that kind of environment, regardless of ethics.



On a technical level, that makes zero sense.
AI “agents” are basically just fancy prompts wrapped in a tool-calling harness. They are infinitely replicable at essentially zero marginal cost, with no intrinsic value; the real cost sits in the generic CPU host and the API calls to GPU servers, databases, or whatever else, all of which are centralized anyway.
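To make that concrete, the whole “agent” architecture is just a loop: the model picks a tool, the harness runs it, the result goes back into the context, repeat. Everything below is a mocked-up sketch (the model call is faked, and all names are hypothetical), but the shape matches what real frameworks do:

```python
# Minimal sketch of an "agent": a prompt plus a tool-calling harness.
# There is nothing proprietary here; the value lives in the (centralized)
# model behind the API call, not in this loop.

TOOLS = {
    "add": lambda a, b: a + b,      # the "tools" are ordinary functions
    "upper": lambda s: s.upper(),
}

def fake_model(context):
    """Stand-in for an LLM API call: returns a tool request or a final answer."""
    if "result" not in context:
        return {"tool": "add", "args": (2, 3)}
    return {"answer": f"The sum is {context['result']}"}

def run_agent(max_steps=5):
    context = {}
    for _ in range(max_steps):
        decision = fake_model(context)
        if "answer" in decision:           # model says it's done
            return decision["answer"]
        tool = TOOLS[decision["tool"]]     # harness dispatches the tool call
        context["result"] = tool(*decision["args"])  # result feeds back in
    return "gave up"

print(run_agent())  # → The sum is 5
```

Swap `fake_model` for any hosted model API and you have the same thing every “agent” product ships, which is the point: the harness is trivially replicable.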
Wanna hear a dirty secret?
“AI” cost is going to zero.
Model capabilities aren’t scaling anymore, but inference efficiency is exploding, driven by resource-constrained labs and a steady stream of published breakthroughs. The endgame of the current bubble is mediocre-but-useful tools anyone can host themselves, dirt cheap. Maybe a bit more reliable and refined than what we have now, but about as “intelligent.”
And guess what?
Microsoft can’t profit off that. None of the Tech Bros can.
Point being, this exec is either delusional or jawboning, so the world doesn’t realize that “AI” is a dumb utility/aid, and that they can’t make any profit off it.