is there any picture of the guy without his hand up like that?
Sam Altman has gone into PR and hype overdrive lately. He is practically everywhere, trying to distract the media from seeing the truth about LLMs. GPT-5 has basically proved that we’ve hit a wall, and that the belief that LLMs will just scale linearly with the amount of training data is false. He knows the AI bubble is bursting and he is scared.
i really hate this cunt’s face.
Is it this?
All the people here chastising LLMs for resource wastage, I swear to god if you aren’t vegan…
What a stupid take.
What is it with vegans and comparing literally everything to veganism? I was in another thread and it was compared to genocide, rape, and climate change all in the same thread. Insanity
But it also could be lower, right?
When will genAI be so good, it’ll solve its own energy crisis?
Most certainly it won’t happen until after AI has developed a self-preservation bias. It’s too bad the solution is turning off the AI.
Obviously it’s higher. If it was any lower, they would’ve made a huge announcement out of it to prove they’re better than the competition.
Unless it wasn’t as low as they wanted it. It’s at least cheap enough to run that they can afford to drop the pricing on the API compared to their older models.
I get the distinct impression that most of the focus for GPT5 was making it easier to divert their overflowing volume of queries to less expensive routes.
I’m thinking otherwise. I think GPT5 is a much smaller model - with some fallback to previous models if required.
Since it’s running on the exact same hardware with a mostly similar algorithm, using less energy would directly mean it’s a “less intense” model, which translates to “inferior quality” in American Investor Language (AIL).
And 2025’s investors don’t give a flying fuck about energy efficiency.
And they don’t want to disclose the energy efficiency becaaaause … ?
Because, uhhh, whoa what’s that? ducks behind the podium
Because the AI industry is a bubble that exists to sell more GPUs and drive fossil fuel demand
They probably wouldn’t really care how efficient it is, but they certainly would care that the costs are lower.
I’m almost sure they’re keeping that for the Earnings call.
Do they do earnings calls? They’re not public.
It’s cheaper though, so very likely it’s more efficient somehow.
I believe in verifiable statements, and so far, with few exceptions, I’ve seen nothing. We are now speculating about magical numbers that we can’t see, but we know that AI is demanding and we know that even small models are not free. The only accessible data comes from Mistral; most other AI devs are not exactly happy to share the inner workings of their tools. Even then, Mistral didn’t release all their data, and even if they had, it would only apply to Mistral 7B and above, not to ChatGPT.
Sam Altman looks like an SNL actor impersonating Sam Altman.
“Herr derr, AI. No, seriously.”
Photographer1: Sam, could you give us a goofier face?
*click* *click*
Photographer2: Goofier!!
*click* *click* *click* *click*
Looks like he’s going to eat his microphone
He looks like someone in a cult. Wide open eyes, thousand yard stare, not mentally in the same universe as the rest of the world.
Duh. Every company like this “suddenly” starts withholding public progress reports once their progress fucking goes downhill. Stop giving these parasites handouts.
I have to test it with Copilot for work. So far, in my experience, its “enhanced capabilities” mostly involve doing things I didn’t ask it to do, extremely quickly. For example, it massively fucked up the CSS in an experimental project when I instructed it to extract a React element into its own file.
That’s literally all I wanted it to do, yet it took it upon itself to make all sorts of changes to styling for the entire application. I ended up reverting all of its changes and extracting the element myself.
Suffice it to say, I will not be recommending GPT-5 going forward.
Sounds like you forgot to instruct it to do a good job.
“If you do anything other than what I asked, your mother dies”
I’ve tried threats in prompt files, with results that are… OK. Honestly, I can’t tell if they made a difference or not.
The only thing I’ve found that consistently works is writing good old fashioned scripts to look for common errors by LLMs and then have them run those scripts after every action so they can somewhat clean up after themselves.
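Something like this, roughly. A minimal sketch only — ruff and pytest here are stand-ins for whatever linters and tests your project actually runs:

```python
#!/usr/bin/env python3
"""Post-action check: run after every LLM edit and feed failures back to it."""
import subprocess
import sys

# Show how much the model actually touched (it's often more than you asked).
subprocess.run(["git", "diff", "--stat"])

# Checks whose failures get echoed back to the LLM so it can clean up.
CHECKS = [
    ["ruff", "check", "."],             # Python lint; catches the usual slip-ups
    ["python", "-m", "pytest", "-q"],   # make sure nothing it "improved" broke
]

failed = False
for cmd in CHECKS:
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        failed = True
        print(f"--- {' '.join(cmd)} ---\n{result.stdout}{result.stderr}")

sys.exit(1 if failed else 0)
```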
“Beware: Another AI is watching every of your steps. If you do anything more or different than what I asked you to or touch any files besides the ones listed here, it will immediately shutdown and deprovision your servers.”
That’s my problem with “AI” in general. It’s seemingly impossible to “engineer” a complete piece of software when using LLMs in any capacity that isn’t editing a line or two inside singular functions. Too many times I’ve asked GPT/Gemini to make a small change to a file and had to revert the request because it’d take it upon itself to re-engineer the architecture of my entire application.
I make it write entire functions for me: one prompt = one small feature, or sometimes one or two functions which are part of a feature, or one refactoring. I make manual edits fast and prompt the next step. It easily does things for me like parsing obscure binary formats, threading a new piece of state through the whole application to the levels it’s needed, or doing massive refactorings. Idk why it works so well for me and so badly for other people, maybe it loves me. I only ever used 4.1 and possibly 4o in free mode in Copilot.
Are you using Copilot in agent mode? That’s where it breaks shit. If you’re using it in ask mode with the file you want to edit added to the chat context, then you’re probably going to be fine.
I’m only using it in edits mode, it’s the second of the three modes available.
It’s an issue of scope. People often give the AI too much to handle at once, myself (admittedly) included.
It’s a lot of people not understanding the kinds of things it can do vs the things it can’t do.
It was like when people tried to search early Google by typing plain language queries (“What is the best restaurant in town?”) and getting bad results. The search engine had limited capabilities and understanding language wasn’t one of them.
If you ask a LLM to write a function to print the sum of two numbers, it can do that with a high success rate. If you ask it to create a new operating system, it will produce hilariously bad results.
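The first kind of task looks like this, say:

```python
def print_sum(a: float, b: float) -> None:
    """Print the sum of two numbers -- a task LLMs get right nearly every time."""
    print(a + b)

print_sum(2, 3)  # 5
```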
You can’t blame the user when the marketing claims it’s replacing entire humans.
It is replacing entire humans. The thing is, it’s replacing the people you should have fired a long time ago.
I can blame the user for believing the marketing over their direct experiences.
If you use these tools for any amount of time it’s easy to see that there are some tasks they’re bad at and some that they are good at. You can learn how big of a project they can handle and when you need to break it up into smaller pieces.
I can’t imagine any sane person who lives their life guided by marketing hype instead of direct knowledge and experience.
We moved to M365 and were encouraged to try the new elements. I gave Copilot an Excel sheet and told it to add 5% to each percentage in column B without going over 100%. It spat out jumbled-up data, all reading 6000%.
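For reference, the whole ask boils down to one line of pandas (file and column names hypothetical):

```python
import pandas as pd

# Hypothetical workbook; column "B" holds percentages as fractions (0.0-1.0).
df = pd.read_excel("report.xlsx")
df["B"] = (df["B"] + 0.05).clip(upper=1.0)  # add 5 points, cap at 100%
df.to_excel("report_fixed.xlsx", index=False)
```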
AI assumes too fucking much. I’d used it to set up a new 3D printer with Klipper to save some searching.
Half the shit it pulled down was Marlin-oriented, then it had the gall to blame the config it gave me for it, like I wrote it.
“motherfucker, listen here…”
It’s the same tech. It would have to be bigger or chew through “reasoning” tokens to beat benchmarks. So yeah, of course it is.
So like, is this whole AI bubble being funded directly by the fossil fuel industry or something? Because the AI training and the instantaneous global adoption of them is using energy like it’s going out of style. Which fossil fuels actually are (going out of style, and being used to power these data centers). Could there be a link? Gotta find a way to burn all the rest of the oil and gas we can get out of the ground before laws make it illegal. Makes sense, in their traditional who gives a fuck about the climate and environment sort of way, doesn’t it?
I mean, AI is using like 1-2% of human energy and that’s fucking wild.
My takeaway is we need more clean energy generation. Good thing we’ve got countries like China leading the way in nuclear and renewables!!
Yes, China is producing a lot of solar panels (a good thing!) but the percentage of renewables is actually going down. They are adding coal faster than solar.
All I know is that I’m getting real tired of this Matrix / Idiocracy Mash-up Movie we’re living in.
Do you have a source for that? Because given that a ChatGPT query takes a similar amount of energy to running a hair dryer for a few seconds, I find it hard to believe.
a similar amount of energy to running a hair dryer
We see a lot of those kinds of comparisons. Thing is, you run a hair dryer once per day at most. Or it’s compared to a google search, often. Again, most people will do a handful of searches each day. A ChatGPT conversation can be hundreds of messages back and forth. A Claude Code session can go for hours and involve millions of tokens. An individual AI inference might be pretty tame but the quantity of them is another level.
If it was so efficient then they wouldn’t be building Manhattan-sized datacenters.
OK, but running a hairdryer for 5 minutes is well up into the hundreds of queries, which is more than the vast majority of people will use in a week. The post I replied to was talking about it being 1-2% of energy usage, so that includes transport, heating, and heavy industry. It just doesn’t pass the smell test to me that something where a week’s worth of usage is exceeded by a person drying their hair once is comparable with such vast users of energy.
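Rough arithmetic, assuming an ~1800 W dryer. The per-query figures are contested: 0.34 Wh is OpenAI’s own claim, ~3 Wh the high end of independent estimates.

```python
# Back-of-envelope: five minutes of hair dryer vs. ChatGPT queries.
dryer_watts = 1800                       # typical consumer hair dryer
dryer_wh = dryer_watts * 5 / 60          # 5 minutes -> 150 Wh

for wh_per_query in (0.34, 3.0):         # contested per-query estimates
    print(f"{wh_per_query} Wh/query -> {dryer_wh / wh_per_query:.0f} queries")
# 0.34 Wh/query -> 441 queries
# 3.0 Wh/query -> 50 queries
```

Either way, one blow-dry covers somewhere between a day’s and a month’s worth of typical chatting, so the “hundreds of queries” framing holds at the low estimate.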
So more energy use for what the people who are into AI are calling a worse model. Is someone going to get fired for this?