• Snot Flickerman@lemmy.blahaj.zone · edited · 17 hours ago

    I feel like these people aren’t even really worried about superintelligence as much as hyping their stock portfolio that’s deeply invested in this charlatan ass AI shit.

    There’s some useful AI out there, sure, but superintelligence is not around the corner, and pretending it is acts as just another way to hype the stock price of the companies who claim it is.

    • danzania@infosec.pub · 15 hours ago

      On the contrary, many of the most notable signatories walked away from large paychecks in order to raise the alarm. I’d suggest looking into the history of individuals like Bengio, Hinton, etc. There are individuals hyping the bubble like Altman and Zuckerberg, but they did NOT sign this, casting further doubt on your claim.

      • XLE@piefed.social · 11 hours ago

        Geoffrey Hinton, retired Google employee and paid AI conference speaker, has nothing bad to say about Google or AI relationship therapy.

    • WanderingThoughts@europe.pub · 13 hours ago

      I’m more afraid of the AI-propped stock market collapsing and sending us into a decade of financial ruin for the majority of people. Yeah, they’ll do bailouts, but those won’t go to the bottom 80%. Most people will welcome an AGI for president at this stage.

    • tal@lemmy.today · edited · 15 hours ago

      looks dubious

      Altman and a few others, maybe. But this is a broad collection of people. Like, the computer science professors on the signatory list there aren’t running AI companies. And this isn’t saying that it’s imminent.

      EDIT: I’ll also add that while I am skeptical about a ban on development, which is what they are proposing, I do agree with the “superintelligence does represent a plausible existential threat to humanity” message. It doesn’t need OpenAI to be a year or two away from implementing it for that to be true.

      In my eyes, it would be better to accelerate work on AGI safety rather than try to slow down AGI development. I think that the Friendly AI problem is a hard one. It may not be solvable. But I am not convinced that it is definitely unsolvable. The simple fact is that today, we have a lot of unknowns. Worse, a lot of unknown unknowns, to steal a phrase from Rumsfeld. We don’t have a great consensus on what the technical problems to solve are, or what any fundamental limitations are. We do know that we can probably develop superintelligence, but we don’t know whether developing superintelligence will lead to a technological singularity; there are some real arguments that it might not. And the singularity is one of the major “very hard to control, spirals out of control” scenarios.

      And while AGI promises massive disruption and risk, it also has enormous potential. The harnessing of fire permitted humanity to destroy at almost unimaginable levels. Its use posed real dangers that killed many, many people. Just this year, some guy with a lighter wiped out $25 billion in property here in California. Yet it also empowered and enriched us to an incredible degree. If we had said “forget this fire stuff, it’s too dangerous”, I would not be writing this comment today.

      • Snot Flickerman@lemmy.blahaj.zone · 15 hours ago

        > Altman and a few others, maybe. But this is a broad collection of people. Like, the computer science professors on the signatory list there aren’t running AI companies. And this isn’t saying that it’s imminent.

        You realize that even if these individuals aren’t personally working at AI companies, most if not all of them have dumped all kinds of money into investing in those companies, right? That’s part of why the stocks of those companies are so obscenely high: people keep pouring money into them because of the current insane returns on investment.

        I have no doubt Wozniak, for example, has dumped money into AI despite not being involved with it on a personal level.

        So yes, they are literally invested in promoting the idea that AGI is just around the corner to hype their own investment cash cows.

        • Perspectivist@feddit.uk · 15 hours ago

          What do these people have to profit from getting what they’re asking for? They’re advocating for pulling the plug on that cow.

    • Rhaedas@fedia.io · 16 hours ago

      I doubt the few who are calling for a slowdown or an all-out ban on further AI work are trying to profit from any success it has. The funny thing is, we won’t know whether we’ve hit even just AGI until we’re past it, and in theory AGI will quickly go to ASI, simply because that’s the next step once the point is reached. So anyone saying AGI is here or almost here is just speculating, as is anyone who says it’s not near or will never happen.

      The only thing possibly worse than reaching the AGI/ASI point unprepared might be not getting there at all, but creating tools that simulate a lot of its features and all of its dangers, and ignorantly using them without any caution. Oh look, we’re there already, and doing a terrible job at being cautious, as we usually are with new tech.

      • Perspectivist@feddit.uk · 15 hours ago

        In my view, a true AGI would immediately be superintelligent, because even if it wasn’t any smarter than us, it could still process information orders of magnitude faster. A scientist who has a minute to answer a question will always be outperformed by an equally smart scientist who has a year.

        • Rhaedas@fedia.io · 14 hours ago

          That’s a reasonable definition. It also pushes things closer to what we think we can do now, since by the same logic a slower AGI is equal to a person, and a cluster of them working on a single issue is better than one. The G (general) is the key part that changes things, no matter the speed, and we’re not there. LLMs are general in many ways, but they lack the I to spark anything from it; they just simulate it by doing exactly what you describe: finding the best matches from their training data much faster, and sometimes appearing to have reasoned out a response.

          ASI is a definition only of scale. We as humans can’t have any idea what an ASI would be like, other than far superior to a human for whatever reasons. If it’s only speed, that’s enough. It certainly could become more than just faster, though, and that combined with speed… naysayers had better hope they’re right about the impossibilities, but how can they know for sure about something we wouldn’t be able to grasp if it existed?

    • monogram@feddit.nl · 16 hours ago

      Yes, this. AGI is a deflection tool against talking about income inequality.