

I configured some parts of it via LLM, so please don’t crucify me for that.
Slap in a spare GPU, and self-host one!
The 30B-class models are unbelievably good now for their size. They’re roughly where Claude was a year ago, if not less. And (with the right backend) they aren’t expensive to host.


I would bet my shoes Facebook or someone lobbied for this.
It’s easy to blame Mormons, but I think that bloc was more of a mark.