Altman and a few others, maybe. But this is a broad collection of people. Like, the computer science professors on the signatory list there aren’t running AI companies. And this isn’t saying that it’s imminent.
EDIT: I’ll also add that while I am skeptical about a ban on development, which is what they are proposing, I do agree with the “superintelligence does represent a plausible existential threat to humanity” message. OpenAI doesn’t need to be a year or two away from building it for that to be true.
In my eyes, it would be better to accelerate work on AGI safety than to try to slow down AGI development. I think the Friendly AI problem is a hard one. It may not be solvable. But I am not convinced that it is definitely unsolvable. The simple fact is that today we have a lot of unknowns. Worse, a lot of unknown unknowns, to steal a phrase from Rumsfeld. We don’t have a great consensus on what the technical problems to solve are, or on what the fundamental limitations, if any, might be. We do know that we can probably develop superintelligence, but we don’t know whether doing so would lead to a technological singularity. There are some real arguments that it might not, and the singularity is one of the major “very hard to control, spirals out of control” scenarios.
And while AGI promises massive disruption and risk, it also has enormous potential. The harnessing of fire permitted humanity to destroy at almost unimaginable levels. Its use posed real dangers that killed many, many people. Just this year, some guy with a lighter wiped out $25 billion in property here in California. Yet it also empowered and enriched us to an incredible degree. If we had said “forget this fire stuff, it’s too dangerous”, I would not be writing this comment today.
You realize that even if these individuals aren’t personally working at AI companies, most if not all of them have dumped all kinds of money into investing in those companies, right? That’s part of why the stocks for those companies are so obscenely high: people keep pouring money into them because of the current insane returns on investment.
I have no doubt Wozniak, for example, has dumped money into AI despite not being involved with it on a personal level.
So yes, they are literally invested in promoting the idea that AGI is just around the corner to hype their own investment cash cows.
looks dubious
What do these people have to profit from getting what they’re asking for? They’re advocating for pulling the plug on that cow.