• danhab99@programming.dev · ↑34 · 14 hours ago

    I feel like literally everybody knew it was a bubble when it started expanding and everyone just kept pumping into it.

    How many tech bubbles do we have to go through before we learn our lesson?

    • belit_deg@lemmy.world · ↑3 · 2 hours ago

      I get that people who sell AI services want to promote them. That part is obvious.

      What I don’t get is how gullible the rest of society at large is. Take the Norwegian digitalization minister, who says that 80% of the public sector shall use AI. Whatever that means.

      Or building a gigantic fuckoff OpenAI data centre instead of new industry: https://openai.com/nb-NO/index/introducing-stargate-norway/

      Jared Diamond had a great take on this in “Collapse”: there are countless examples of societies making awful decisions because the decision-makers are insulated from the consequences. On the contrary, they reap short-term gains.

    • Tollana1234567@lemmy.today · ↑1 · 2 hours ago

      The CEOs, C-suites, and some people trying to get into the CS field are the ones who believe in it. I know a person who already has a degree and still thinks it’s wise to pursue a grad degree in a field directly involving, or adjacent to, AI.

    • sibachian@lemmy.ml · ↑33 · 13 hours ago

      what lesson? it’s a ponzi scheme and whoever is the last holding the bag is the only one losing.

      • 123@programming.dev · ↑5 · 8 hours ago

        Plus everyone else who pays taxes, since they will have to keep paying for unemployment insurance, food stamps, rent assistance, etc. (not the CEOs and execs that caused it, that’s for sure).

  • yarr@feddit.nl · ↑24 ↓2 · 18 hours ago

    Everyone knows a bubble is a firm foundation to build upon. Now that Trump is back in office and all our American factories are busy cranking out domestic products I can finally be excited about the future again!

    I predict that in a year this bubble will be at least twice as big!

  • Tattorack@lemmy.world · ↑60 · 1 day ago

    SSSSIIIIIIIGGGGGGHHHHHHHHHHH…

    Looks like I’ll have to prepare for yet another once-in-a-lifetime economic collapse.

  • Xulai@mander.xyz · ↑107 ↓1 · 1 day ago

    As someone who works with integrating AI- it’s failing badly.

    At best, it’s good for transcription, at least until it hallucinates and adds things to your medical record that don’t exist. Which it does. And when providers don’t check for errors (which few do regularly), congrats: you now have a medical record of whatever it hallucinated today.

    And they are no better than answering machines for customer service. Sure, they can answer basic questions, but so can the automated phone systems.

    They can’t consistently do anything more complex without making errors- and most people are frankly too dumb or lazy to properly verify outputs. And that’s why this bubble is so huge.

    It is going to pop, messily.

    • rhombus@sh.itjust.works · ↑20 · 17 hours ago

      And they are no better than answering machines for customer service. Sure, they can answer basic questions, but so can the automated phone systems.

      This is what drives me nuts the most about it. We had so many incredibly efficient, purpose-built tools using the same technologies (machine learning and neural networks), and we threw them away in favor of wildly inefficient, general-purpose LLMs that can’t do a single thing right. All because marketing hype convinced billionaires they won’t need to pay people anymore.

    • hansolo@lemmy.today · ↑22 · 22 hours ago

      This 1 million%.

      The fact that coding is a big corner of the use cases means that the tech sector is essentially high on their own supply.

      Summarizing and aggregating data alone isn’t a substitute for the smoke and mirrors of confidence that is a consulting firm. It just lets the ones that can lean on branding charge more hours for the same output, and adds “integrating AI” as another bucket of vomit to fling.

    • OctopusNemeses@lemmy.world · ↑16 · 21 hours ago

      I tried having it identify an unknown integrated circuit. It hallucinated a chip. It kept giving me non-existent datasheets and 404 links to digikey/mouser/etc.

    • Laser@feddit.org · ↑44 · 1 day ago

      and most people are frankly too dumb or lazy to properly verify outputs.

      This is my main argument: I need to check the output for correctness anyway, so I might as well do the work myself in the first place.

      • GhostTheToast@lemmy.world · ↑1 · 4 hours ago

        Honestly I mostly use it as a jumping off point for my code or to help me sound more coherent when writing emails.

      • mrvictory1@lemmy.world · ↑0 ↓1 · 12 hours ago

        This is exactly why I love DuckDuckGo’s AI results built into search. It appears when it is relevant (and yes, you can nuke it from orbit so it never ever appears), and it always gives citations (2 websites) so I can go check whether it is right. Sometimes it works wonders when regular search results are not relevant. Sometimes it fails hard. I can distinguish one from the other because I can always check the sources.

    • vacuumflower@lemmy.sdf.org · ↑1 · 21 hours ago

      Well, by this description it’s still usable for problems too complex to just brute-force with Monte Carlo, provided the results can be verified. It may even be efficient. But that niche seems narrow.

      BTW, even ethical automated combat drones. I know one word there seems out of place, but if we have an “AI” for target/trajectory/action suggestion, and something more complex/expensive for verification, ultimately with a human in charge, then it’s possible to increase the efficiency of combat machines without increasing the chances of civilian casualties and friendly fire (when somebody is at least trying to avoid those).

      • pinball_wizard@lemmy.zip · ↑1 · 4 hours ago

        it’s possible to both increase efficiency of combat machines and not increase the chances of civilian casualties and friendly fire (when somebody is at least trying to not have those).

        But how does this work help next quarter’s profits?

        • vacuumflower@lemmy.sdf.org · ↑1 · 21 minutes ago

          If each unplanned death that wasn’t the result of an operator’s mistake led to confiscation of one month’s profit (not margin), then I’d think it would help very much.

    • Dr. Moose@lemmy.world · ↑6 ↓7 · 19 hours ago

      As someone who actually develops AI tools (I just use existing models): it’s absolutely NOT failing.

      Lemmy is ironically incredibly tech illiterate.

      It can be working and good and still be a bubble, you know that, right? A lot of AI is overvalued, but to say it’s “failing badly” is absurd and really helps absolutely no one.

      • pinball_wizard@lemmy.zip · ↑2 · 4 hours ago

        Lemmy is ironically incredibly tech illiterate

        I disagree with all these self-hosting, Linux-running, passionate open-source advocates, so they must be technologically illiterate.

        • Dr. Moose@lemmy.world · ↑1 ↓1 · 4 hours ago

          According to whom? No one’s running their own instance here. I’m a software dev with over 20 years of FOSS experience, and IMO Lemmy’s user base is a somewhat illiterate bunch of contrarians when it comes to popular tech discussions.

          We’re clearly not going to agree here without objective data, so unless you’re willing to provide that, have a good day, bye.

      • Dogiedog64@lemmy.world · ↑5 · 15 hours ago

        Yup. If you have money you can AFFORD TO BURN, go ahead and short to your heart’s content. Otherwise, stay clear and hedge your bets.

    • whyrat@lemmy.world · ↑11 · 17 hours ago

      The question is when, not if. But guessing the “when” wrong is expensive. I believe the famous idiom is: the market can stay irrational longer than you can stay solvent.

      Best of luck!

  • belit_deg@lemmy.world · ↑50 · 1 day ago

    If I were China, I would be thrilled to hear that the West is building data centres for LLMs, sucking power from the grid, and spending all its attention and money on AI, rather than building better universities and industry. Just sit back and enjoy while I get ahead in those areas.

    • disco@lemdro.id · ↑28 ↓1 · 23 hours ago

      They’ve been ahead for the past 2 decades. Government is robbing us blind because it only serves multinational corporations or foreign governments. It does not serve the people.

      • vacuumflower@lemmy.sdf.org · ↑13 ↓1 · 21 hours ago

        They have a demographic pit in front of them, which they themselves created with the one-child policy.

        Also, the CCP doesn’t exactly serve the people either. It’s a hierarchy of (possibly benevolent) bureaucrats.

        • disco@lemdro.id · ↑3 ↓1 · 14 hours ago

          I never said they were ahead on social issues. They aren’t and have never been. Their infrastructure shits on ours. Hell look at their healthcare system.

  • Vinstaal0@feddit.nl · ↑10 · 22 hours ago

    It’s not only the tech bubble doing that.

    The pyramid scheme of the US housing sector will cause more financial issues as well, and so will the whole credit card system.

  • Dr. Moose@lemmy.world · ↑16 ↓13 · 19 hours ago

    Willing to take a real-money bet that the bubble is not going to pop, despite Lemmy’s obsession here. The value is absolutely inflated, but it’s definitely real value, and LLMs are not going to disappear unless we create a better AI technology.

    In general, we’re way past the point of tech bubbles popping. Software markets move incredibly fast and are incredibly resilient to this. There literally hasn’t been a software bubble popping since the dotcom boom. Prove me wrong.

    Even if you see problems with LLMs and AI in general, this hopeful doomerism is really not helping anyone. Now, instead of spending effort on improving things, people have become angry, passive, delusional accelerationists without any self-awareness.

    • GamingChairModel@lemmy.world · ↑4 · 12 hours ago

      The value a thing creates is only part of whether the investment into it is worth it.

      It’s entirely possible that all of the money that is going into the AI bubble will create value that will ultimately benefit someone else, and that those who initially invested in it will have nothing to show for it.

      In the late ’90s, U.S. regulatory reform around telecom prepared everyone for an explosion of investment in hard infrastructure assets around telecommunications: cell phones were starting to become a thing, and consumer internet held a ton of promise. So telecom companies started digging trenches and laying fiber, at enormous expense to themselves. Most ended up in bankruptcy, and the actual assets eventually became owned by those who later bought them for pennies on the dollar in bankruptcy auctions.

      Some companies owned fiber routes that they didn’t even bother using, and in the early 2000s there was a shitload of dark fiber scattered throughout the United States. Eventually the bandwidth needs of near-universal broadband gave that old fiber some use. But the companies that built it had already collapsed.

      If today’s AI companies can’t actually turn a profit, they’re going to be forced to sell off their expensive data at some point. Maybe someone else can make money with it. But the life cycle of this tech is much shorter than the telecom infrastructure I was describing earlier, so a stale LLM might very well become worthless within years. Or it’s only a stepping stone towards a distilled model that costs a fraction to run.

      So as an investment case, I’m not seeing a compelling case for investing in AI today. Even if you agree that it will provide value, it doesn’t make sense to invest $10 to get $1 of value.

      • Tollana1234567@lemmy.today · ↑1 · 2 hours ago

        Didn’t Microsoft already admit their AI isn’t profitable? I suspect that’s why they have been laying off in waves. They are hoping government contracts will stem the bleeding or hold them off, and they found the sucker who will just do it: Trump. I wonder if Palantir is suffering too; surely their AI isn’t as useful to the military as they claim.

    • SwingingTheLamp@midwest.social · ↑18 ↓1 · 18 hours ago

      I get the thinking here, but past bubbles (dot com, housing) were also based on things that have real value, and the bubble still popped. A bubble, definitionally, is when something is priced far above its value, and the “pop” is when prices quickly fall. It’s the fall that hurts; the asset/technology doesn’t lose its underlying value.

    • Frezik@lemmy.blahaj.zone · ↑6 ↓2 · 14 hours ago

      LLMs can absolutely disappear as a mass market technology. They will always exist in some sense as long as there are computers to run them and people who care to try, but the way our economy has incorporated them is completely unsustainable. No business model has emerged that can support them, and at this point, I’m willing to say that there is no such business model without orders of magnitude gains in efficiency that may not ever happen with LLMs.

    • WhirlpoolBrewer@lemmings.world · ↑10 · 18 hours ago

      In a capitalist society, what is good or best is irrelevant. All that matters is whether it makes money, and AI makes no money. The $200 and $300/month plans put in rate limits because at those prices they’re losing too much money. Let’s say the break-even cost for a single request is somewhere between $1 and $5, depending on the request, just for the electricity, while people can barely afford food, housing, and transportation as it is. What is the business model for these LLMs going to be? A person could get a coffee today, or send a single request to an LLM? Now consider that they’ll need newer GPUs next year. And the year after that. And after that. And the data centers will need maintenance. They’re paying literally millions of dollars to individual programmers.

      Maybe there is a niche market for mega corporations like Google who can afford to spend thousands of dollars a day on LLMs, but most companies won’t be able to afford these tools. Then there is the problem where if the company can afford these tools, do they even need them?

      The only business model that makes sense to me is the one BMW uses for their car seat warmers. BMW requires you to pay a monthly subscription to use the seat warmers in their cars. LLM makers could charge a monthly subscription to run a micro model on your own device. That free assistant in your Google phone would then be paywalled. That way businesses don’t need to carry the cost of the electricity, but the LLM is going to be fairly low-functioning compared to what we get for free today. But the business model could work. As long as people don’t install a free version.

      I don’t buy the idea that “LLMs are good so they are going to be a success”. Not as long as investors want to make money on their investments.
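To make the arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Python. The $1–$5 per-request cost range and the $200/month plan price are the comment's own assumptions, not measured figures:

```python
# Back-of-the-envelope: how many requests a flat-rate plan can absorb
# before the provider loses money, using the comment's assumed cost range.
cost_per_request_low = 1.0    # assumed break-even cost per request, USD
cost_per_request_high = 5.0
monthly_plan_price = 200.0    # e.g. a "$200/month" pro tier

# Requests per month before the plan runs at a loss:
max_requests_cheap = monthly_plan_price / cost_per_request_low    # 200.0
max_requests_costly = monthly_plan_price / cost_per_request_high  # 40.0

print(f"A ${monthly_plan_price:.0f}/mo plan breaks even at roughly "
      f"{max_requests_costly:.0f}-{max_requests_cheap:.0f} requests/month")
```

At the high end of that cost range, that is only a handful of requests per day, which is consistent with the rate limits the comment describes.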

      • bridgeenjoyer@sh.itjust.works · ↑3 · 12 hours ago

        I imagine a dystopia where the main internet has been destroyed and watered down so you can only access it through a giant corpo LLM (ISPs will become LLMSPs). So you choose between watching an AI-generated movie for entertainment or a coffee. Because they will destroy the internet any way they can.

        Also, they’ll charge more for prompts related to things you like. It’s all there for the plundering, and consumers want it.

      • lacaio da inquisição@mander.xyz · ↑2 · 17 hours ago

        I believe that if something has enough value, people are willing to pay for it. And by people here I mean primarily executives. The problem is that AI doesn’t have enough value to sustain the hype.

      • Dr. Moose@lemmy.world · ↑1 ↓10 · 14 hours ago

        people can barely afford food, housing, and transportation as it is.

        Citation needed. The doomerism in this thread is so cringe.

    • Encrypt-Keeper@lemmy.world · ↑12 · 19 hours ago

      I mean, we haven’t figured out how to make AI profitable yet, and though it’s a cool technology with real-world use cases, nobody has proven that the juice is worth the squeeze. There’s an unimaginable amount of money tied up in this technology on the hope that one day someone finds a way to make it profitable, and though AI as a technology “improves”, it is not getting any closer to providing more value than it costs to run.

      If I roleplayed as somebody who desperately wanted AI to succeed, my first question would be “What is the plan to have AI make money?” And so far nobody, not even the technology’s biggest sycophants have an answer.

        • Encrypt-Keeper@lemmy.world · ↑10 · 16 hours ago

          AI as a technology is so far not profitable for anybody. The hardware AI runs on is profitable, as might be some startups that are heavily leveraging AI, but actually operating AI is so far not profitable, and because increasingly smaller improvements in AI use exponentially more power, there’s no real path visible to any of us today that suggests anyone has found a path to profitability. Aside from some kind of miracle out of left field that no one today has even conceived, the long-term outlook isn’t great.

          If AI as a technology busts, so does the insane profits behind the hardware it runs on. And without that left field technological breakthrough, the only option to pursue to avoid AI going completely bust is to raise prices astronomically, which would bust any companies currently dependent on all the AI currently being provided to them for basically next to nothing.

          The entire industry is operating at a loss, but is being propped up by the currently abstract idea that AI will some day make money. This isn’t the “AI Hater” viewpoint, it’s just the spot AI is currently in. If you think AI is here to stay, you’re placing a bet on a promise that nobody as of today can actually make.

            • Encrypt-Keeper@lemmy.world · ↑4 · 13 hours ago

              Delusion? OK, let’s get it straight from the horse’s mouth then. I asked ChatGPT whether OpenAI is profitable, and to explain its financial outlook. What you see below, emphasis and emojis included, was generated by ChatGPT:

              —ChatGPT—

              OpenAI is not currently profitable. Despite its rapid growth, the company continues to operate at a substantial loss.

              📊 Financial Snapshot

              • Annual recurring revenue (ARR) was reported at approximately $12 billion as of July 2025, implying around $1 billion per month in revenue.

              • Projected total revenue for 2025 is $12.7 billion, up from roughly $3.7 billion in 2024.

              • However, OpenAI’s cash burn has increased, with projected operational losses around $8 billion in 2025 alone.

              —end ChatGPT—

              The most favorable projections are that OpenAI will not be cash positive (that means making a single dollar in profit) until it reaches $129 billion in revenue. That means OpenAI has to make more than 10X its annual revenue to finally be profitable. And its current strategy to make more money is to expand its infrastructure to take on more customers and run more powerful systems.

              The problem is, the models require substantially more power to make moderate gains in accuracy and capability. And every new AI datacenter means more land cost, engineers, water, and electricity. Compounding the issue is that the more electricity they use, the more it costs. NJ has paved the way for a number of huge new AI datacenters in the past few years, and the cost of electricity in the state has skyrocketed. People have seen their monthly electric bills rise by 50-150% in the last couple of months alone. That’s forcing people out of their homes, and it also eats substantially into revenue growth for data centers.

              It’s quite literally a race for AI companies to reach profitability before hitting the natural limits of the resources they require to expand. And I haven’t heard a peep about how they expect to do so.
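A quick sanity check of the "10X" figure above, using the numbers quoted in the thread (a sketch only; the underlying projections are press-reported estimates, not audited financials):

```python
# Sanity-check the revenue multiple implied by the figures quoted above.
projected_revenue_2025 = 12.7e9   # USD, projected 2025 revenue
projected_loss_2025 = 8e9         # USD, projected 2025 operating loss
breakeven_revenue = 129e9         # USD, revenue reportedly needed to be cash positive

multiple = breakeven_revenue / projected_revenue_2025
print(f"Revenue must grow ~{multiple:.1f}x from 2025 levels to break even")  # ~10.2x
```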

              • Dr. Moose@lemmy.world · ↑1 ↓4 · 6 hours ago

                You use one company that is spearheading the entire industry as your example that no AI company is profitable. Either you are arguing in extremely bad faith or you’re incredibly stupid, I’m sorry.

                • Encrypt-Keeper@lemmy.world · ↑2 ↓1 · 4 hours ago

                  Of course I used the company that is the market leader in AI as an example that AI companies are not profitable you donut, that’s how that works.

                  They’re not the only AI company that’s not profitable, like I said none of them are. You can take your pick if you don’t like OpenAI as an example.

        • Frezik@lemmy.blahaj.zone · ↑3 · 13 hours ago

          Who is it profitable for right now? The only ones I see are the ones selling shovels in a gold rush, like Nvidia.

          • Dr. Moose@lemmy.world · ↑1 ↓2 · 6 hours ago

            Every AI software company? There’s so much ignorance in this thread it’s almost impossible to respond to. LLM queries are already super cheap and very much profitable.

    • chobeat@lemmy.ml (OP) · ↑6 · 19 hours ago

      there’s an argument that this is just the targeted ads bubble that keeps inflating using different technologies. That’s where the money is coming from. It’s a game of smoke and mirrors, but this time it seems like they are betting big on a single technology for a longer time, which is different from what we have seen in the past 10 years.

    • shalafi@lemmy.world · ↑1 ↓1 · 13 hours ago

      Sort of agreed. I disagree with the people around here acting like AI will crash and burn, never to be seen again. It’s here to stay.

      I do think this is a bubble and will pop hard. Too many players in the game, most are going to lose, but the survivors will be rich and powerful beyond imagining.

  • brucethemoose@lemmy.world · ↑18 · 1 day ago

    Open models are going to kick the stool out. Hopefully.

    GLM 4.5 is already #2 on LM Arena, above Grok and ChatGPT, and runnable on homelab rigs, yet it has just 32B active parameters (which is mad). Extrapolate that a bit, and it’s just a race to the zero-cost bottom. None of this is sustainable.

    • dubyakay@lemmy.ca · ↑6 · 1 day ago

      I did not understand half of what you’ve written. But what do I need to get this running on my home PC?

      • brucethemoose@lemmy.world · ↑5 · 18 hours ago

        I am referencing this: https://z.ai/blog/glm-4.5

        The full GLM? Basically a 3090 or 4090 and a budget EPYC CPU. Or maybe 2 GPUs on a threadripper system.

        GLM Air? Now this would work on a 16GB+ VRAM desktop, just slap in 96GB+ (maybe 64GB?) of fast RAM. Or the recent Framework desktop, or any mini PC/laptop with the 128GB Ryzen 395 config, or a 128GB+ Mac.

        You’d download the weights, quantize yourself if needed, and run them in ik_llama.cpp (which should get support imminently).

        https://github.com/ikawrakow/ik_llama.cpp/

        But these are…not lightweight models. If you don’t want a homelab, there are better ones that will fit on more typical hardware configs.
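For reference, the download → quantize → run workflow described above looks roughly like this with llama.cpp-style tooling. This is a sketch: the Hugging Face repo and file names are illustrative placeholders, and exact binary names and flags differ between llama.cpp and the ik_llama.cpp fork:

```shell
# Fetch GGUF weights from Hugging Face (repo/file names are examples only)
huggingface-cli download someuser/GLM-4.5-Air-GGUF --local-dir ./glm-air

# Optionally re-quantize to a smaller type to fit your RAM/VRAM budget
llama-quantize ./glm-air/glm-4.5-air-f16.gguf \
               ./glm-air/glm-4.5-air-q4_k_m.gguf Q4_K_M

# Run, offloading as many layers as fit on the GPU (-ngl)
llama-cli -m ./glm-air/glm-4.5-air-q4_k_m.gguf -ngl 99 -p "Hello"
```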

        • brucethemoose@lemmy.world · ↑4 · 20 hours ago

          It’s going to be slow as molasses on ollama. It needs a better runtime, and GLM 4.5 probably isn’t supported at this moment anyway.

            • WorldsDumbestMan@lemmy.today · ↑1 · 18 hours ago

              Qwen3 8B, sorry, idiot spelling. I use it to talk through problems when I have no internet or have maxed out on Claude. I can rarely trust it with anything reasoning-related; it’s faster and easier to do most things myself.

              • brucethemoose@lemmy.world · ↑3 · 18 hours ago

                Yeah, 7B models are just not quite there.

                There are tons of places to get free access to bigger models. I’d suggest Jamba, Kimi, Deepseek Chat, and Google AI Studio, and the new GLM chat app: https://chat.z.ai/

                And depending on your hardware, you can probably run better MoEs at the speed of 8Bs. Qwen3 30B is so much smarter it’s not even funny, and it’s faster on CPU.

  • Doomsider@lemmy.world · ↑13 · 1 day ago

    Ooowee, they are setting up the US for a major bust, aren’t they? I guess all the wealthy people will just have to buy up everything when it becomes dirt cheap. Sucks to have to own everything, I guess.

  • sbv@sh.itjust.works · ↑11 ↓1 · 1 day ago

    Recognizing from history the possibilities of where this all might lead, the prospect of any serious economic downturn being met with a widespread push of mass automation—paired with a regime overwhelmingly friendly to the tech and business class, and executing a campaign of oppression and prosecution of precarious manual and skilled laborers—well, it should make us all sit up and pay attention.

    • Doomsider@lemmy.world · ↑4 · 1 day ago

      Your kids will enjoy their new Zombie Twitter AI teacher with fabulous lesson plans like, “Was the Holocaust real or just a hoax?”