

> Go go China!

Bops the tankie.
Like, I have a Chinese LLM loaded right this second and follow them closely, but holy moly. Curb your enthusiasm.
Anyway, OpenAI has plenty of compute to train a Sora 2 if they want, but apparently they don’t. My guess is some combination of:
- They couldn't figure out a more efficient architecture, like you speculated. I buy that. OpenAI's development is way more conservative than you'd think, and video generation is inherently compute-intensive, especially if Sora 1 is the baseline.
- …Maybe they looked at the metrics, saw Sora mostly being used for spam, scams, or worse, and pulled the plug for liability reasons?
- They're focusing on short-term profitability, as other commenters mentioned.


Yeah, 100%.