Over the past few weeks, several US banks have pulled back from lending to Oracle for the expansion of its AI data centres, according to a report.

  • rumba@lemmy.zip
    link
    fedilink
    English
    arrow-up
    72
    ·
    5 days ago

    If the banks don’t see the value in it, it’s only a matter of time

  • SocialMediaRefugee@lemmy.world
    link
    fedilink
    English
    arrow-up
    30
    ·
    4 days ago

    They fired people for AI, now they fire them without AI. Please tell me how they plan on sustaining an economy where only the 1% has discretionary income?

    • andallthat@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      4 days ago

      But Oracle was building those data centers for OpenAI. OpenAI is going to be used by the Pentagon. Bailing Oracle out is now a matter of National Security!! If this has to come off of the taxes paid by the people they just laid off, that’s unfortunate but… have I mentioned National Security?

  • Hanrahan@slrpnk.net
    link
    fedilink
    English
    arrow-up
    35
    ·
    5 days ago

    I was listening to a finance YT vid last night and the dude said if it wasn’t for the enormous AI spend, the US would be deep in a technical recession now.

    obviously the fault of immigrants and those on food stamps though /s

  • Mwa@thelemmy.club
    link
    fedilink
    English
    arrow-up
    23
    ·
    4 days ago

    This may be one of the early signs of a burst (besides the economy falling due to that one war, I think?).

    • rmrf@lemmy.ml
      link
      fedilink
      English
      arrow-up
      6
      ·
      4 days ago

      I saw this and said “holy shit” aloud. If this sketchy source is legit, this is probably pretty big. The stock market has been wobbly the last few days.

      • Mwa@thelemmy.club
        link
        fedilink
        English
        arrow-up
        1
        ·
        4 days ago

        True.
        The stock market (for tech companies) had also been falling before the war.

    • 7101334@lemmy.world
      link
      fedilink
      English
      arrow-up
      26
      ·
      edit-2
      4 days ago

      It’s gonna suck for the working class WAAAAAAAAAAAAAAAAY more than the people who will lose their fortunes as a result of the bubble popping

      sorry

      it always does

      Michael Saylor, one of the biggest owners of one of the other “doesn’t actually do anything” bubbles - Bitcoin - is a great example. He made a fortune during the dot-com bubble.

      With that said, if I have to eat hard tack and canned beans and use leftover charcoal from the park BBQ grills instead of toothpaste in order to never have another AI bullshit feature shoehorned into my existence, it might be worth it

  • Kazumara@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    70
    ·
    5 days ago

    I think this is two stories being mixed together here?

    A bit over a month old, the lending issues: https://www.cio.com/article/4125103/oracle-may-slash-up-to-30000-jobs-to-fund-ai-data-center-expansion-as-us-banks-retreat.html

    Now, confirmation from Bloomberg of the cuts that have been suspected since last year: https://www.reuters.com/business/oracle-plans-thousands-job-cuts-data-center-costs-rise-bloomberg-news-reports-2026-03-05/

    I’m not saying there’s no link between the events, but the posted article does a weird rehash mixed with news, and isn’t even dated. I don’t like that, so I’m sharing the individual news pieces as separate links for others’ benefit.

  • InvalidName2@lemmy.zip
    link
    fedilink
    English
    arrow-up
    52
    ·
    5 days ago

    Sucks to be in tech right now. I’m sure there are still pockets of good employers with happy, confident worker bees, but those are few and far between as best I can tell.

    Pretty much everybody I know and speak with regularly who is working in the tech industry or a tech role in general is feeling the strain.

    Layoffs. Remaining employees have to pick up the additional workload of people who were laid off. Threats of future layoffs. Hiring freezes. Bonuses slashed or cut entirely. Little or no raises, not even cost of living increases. Demotions, in some cases. Expected to use LLMs to do things that LLMs have no business doing because management is clueless on the topic and expects everybody who is “good with computer” to be an AI expert. And the list goes on.

    And then as already mentioned elsewhere, there are almost no true entry-level positions opening up, so new grads are really struggling to get established in the industry. It’s particularly sad because this is so short-sighted and the negative impacts have the potential to be quite severe.

    • ChickenLadyLovesLife@lemmy.world
      link
      fedilink
      English
      arrow-up
      23
      ·
      5 days ago

      I was laid off in 2019 by a large west coast tech giant as part of a mass layoff. We had the option of trying to find a new internal job but every job posting involved AI (seven years ago!) and nobody that I knew even got a reply from any application. Now I’m a school bus driver and 100X happier even though I make like 1/5 of what I used to make. The plot twist is that AI is probably going to replace school bus drivers sooner or later, flattened children be damned.

      • rumba@lemmy.zip
        link
        fedilink
        English
        arrow-up
        10
        ·
        5 days ago

        They could, but you’ll likely be the last one to go. Those kids will likely kill each other without supervision, and the third time they have to drive little Jimmy to school because the bus AI didn’t wait 30 seconds, they’ll be so far up the administration’s ass they’ll know what they had for breakfast.

      • Tollana1234567@lemmy.today
        link
        fedilink
        English
        arrow-up
        1
        ·
        4 days ago

        My older bro was laid off as part of the first wave in ’23, heard their company got bought out, and he hasn’t found any job yet; he might be doing “investment or stocks”, though.

    • jkercher@programming.dev
      link
      fedilink
      English
      arrow-up
      17
      ·
      5 days ago

      Easy win for companies that didn’t buy into the hype. I’m the only dedicated software dev at my company, so there was no middle manager to foolishly think a chat bot could do my job. We are a small company that can compete with big players, and those big players appear to be floundering. Now, we are expanding.

    • Tollana1234567@lemmy.today
      link
      fedilink
      English
      arrow-up
      3
      ·
      4 days ago

      We’re already seeing the effects on fresh graduates from college, and on those who are still in. I wonder if more reports of universities having low enrollments are going to become too big to ignore.

    • HubertManne@piefed.social
      link
      fedilink
      English
      arrow-up
      4
      ·
      5 days ago

      This is worse than 2008. I remember back then I was let go and the other guy was not, and we sort of debated which of us would be worse off. This is way worse, though; I would say at least twice as bad at this point. Funny thing was, no one realized the trouble we were in in 2008; it was really like 2010 by the time it was felt. In hindsight, they are going to be talking about the collapse of 2025.

      • 7101334@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        4 days ago

        I’m not an economist, so I don’t know shit about fuck (though most economists don’t either tbf), but some people are comparing this to the railway bubble. Shit’s (potentially) so bad that they don’t even have a comparison from within our lifetimes to point to.

  • GamingChairModel@lemmy.world
    link
    fedilink
    English
    arrow-up
    11
    ·
    4 days ago

    This particular source seems sketchy, but the broader context supports the core of this story.

    There was a report in January from TD Cowen that Oracle needed to free up cash as banks tightened up lending for data center deals, and that certain projects were on hold and in jeopardy of being canceled. That same report projected that Oracle might lay off 20,000 to 30,000 workers.

    Then, just this last Friday, Bloomberg reported that Oracle and OpenAI canceled their plans to expand their flagship data center in Texas as part of their $500 billion “Stargate” initiative. Here’s the Reuters article describing it at a high level, because the original report is paywalled.

    So everyone is looking back at that January report and seeing the recent data center news as confirmation that Oracle wants to free up cash by laying off staff.

          • Jankatarch@lemmy.world
            link
            fedilink
            English
            arrow-up
            5
            ·
            edit-2
            5 days ago

            The people that were laid off.

            Investors are already suing Oracle right now, btw; of course the legal system wouldn’t let them down.

            • maplesaga@lemmy.world
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 days ago

              Ah I see, that would be an interesting law. Maybe it will happen one day, and we will take it for granted like we do 8 hour work days.

      • ℍ𝕂-𝟞𝟝@sopuli.xyz
        link
        fedilink
        English
        arrow-up
        3
        ·
        4 days ago

        It’s not unheard of, in certain cases in certain more civilised states it does happen.

        The state should be able to sue as layoffs put strain on the social system.

      • biofaust@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        4 days ago

        This is a very good point. I’ve never had the discussion of whether real AI, not transformer-based chatbots, would be a boon or a bane for human workers. I mean, we should already have data about it.

        • sp3ctr4l@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          1 day ago

          By ‘real AI’, I presume you mean AGI, a digital intelligence that is actually superior to human intelligence, ie, is more intelligent than the smartest human and has all our collective knowledge and is able to comprehend it and evaluate it more consistently than any of us… and also is thus capable of improving itself and becoming more and more superintelligent.

          That is still scifi, that is not real.

          What we currently call ‘AI’ is basically an extremely expensive, lackluster pantomime of that, that fools fools into thinking it is the other thing… mostly because it is sycophantic and very confident, ie, it uses well known ‘hacks’ in human psychology, where confidence, breadth of knowledge, usage of technical terms… you know, con man techniques … are confused for actual competence.

          If we had a real AGI, it would be capable of both hacking into all the military information systems of the world and tricking humans into nuking each other… and it would also be capable of making actual novel improvements in software, hardware, engineering, physics, social engineering, etc, and could decide to be a kind of benevolent dictator of the entire economy, which it would command and control.

          We have no capacity to model the morality that would emerge in an actual superintelligence, because we definitionally would not be able to keep up with attempting to understand how it thinks.

          That’s where the whole “is AI the potential best thing ever or would it become SkyNet” problem comes from.

          … But we are not there yet.


          We are at… basically, a very fancy autocomplete algorithm that can analyze huge datasets reasonably well, compared to an average human, but also makes all kinds of mistakes, hallucinates “facts” in order to generate more coherent things to say, and these hallucinations routinely trick non-subject-matter-expert humans into just going along with it; again, like a con artist, like a fast-talking “influencer” pitching you a course or giving you some kind of “advice”.

          And currently, what is going on is that we are pouring, I think at this point, trillions of dollars into “AI”, under the premise that it is AGI, that it will be capable of generating massive returns on investment and productivity increases…

          … but the actual results are turning out to be, all averaged for everywhere it has been implemented… somewhere between a net productivity loss, to meagre productivity gains.

          What that means is that the AI Mania is the biggest bubble, the most severe malinvestment of economic resources in the history of humanity.

          When that pops, we basically formally transition into cyberpunk dystopia, technofeudalism.


          AI is a tool, a device, a machine. Thus, it depends on how you use it, what you use it for.

          Right now, we have a whole lot of companies saying they are laying off workers because we don’t need them anymore… this is broadly a lie.

          People are being laid off because the economy, the real economy, is already contracting, basically due to the collapse of the US as the undisputed world hegemon.

          AI, as a broad socioeconomic force… is mostly a smokescreen, the ultimate promise of bread and circuses, that masks a gigantic wealth transfer and restructuring of economic and political power.

          AI as a tool can be used for good, in specific use cases.

          But it broadly isn’t, because people are fooled by the conversation machine into thinking it can do things that there is no evidence it can do, because people do not understand its limitations and flaws, and then they plug it into their immensely shitty business processes, and just assume it will not break things when it tries to use them.

          AI, as it currently exists, is essentially a false or trickster God of Capitalism.

          • biofaust@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 day ago

            No, by real AI I meant what we already had before, which is now dismissed as “machine learning algorithms”.

            I know it is not the correct technical definition, but AI is mostly a marketing term anyway.

  • Not_mikey@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    80
    ·
    edit-2
    6 days ago

    The bubble popping seems inevitable at this point. Before, the giants were funding this through their core business plus loans backed by their core business. Now they’ve stretched their credit so far that no one’s giving them loans anymore, and instead of cutting back on the building spree they’re making cuts to their core business.

    They’re betting that their customers are so locked in that they won’t leave despite degradation in service. How deep Oracle’s, AWS’s, and Google’s hooks are in people remains to be seen; people seem to tolerate a lot of enshittification, but there’s gotta be a tipping point. Once they reach that and the core business crashes, all the rest of the dominoes will fall.

    • Tollana1234567@lemmy.today
      link
      fedilink
      English
      arrow-up
      31
      ·
      5 days ago

      That is why they are trying to peddle this so heavily to governments in the EU and USA: they know those governments will take on AI at face value instead of testing its efficacy.

      • M0oP0o@mander.xyz
        link
        fedilink
        English
        arrow-up
        20
        ·
        5 days ago

        Great timing, then, just as the States becomes a global pariah, making everyone else on earth reevaluate any business done with American-based firms. Nations are worried about massive instability and war; no one has the appetite to gamble big on unproven tech dreams.

    • Munkisquisher@lemmy.nz
      link
      fedilink
      English
      arrow-up
      22
      ·
      5 days ago

      Once these companies have to start charging what it really costs to maintain and run these huge models, the number of use cases will shrivel.
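A back-of-the-envelope sketch of that point, with entirely made-up numbers: a feature that is profitable at a subsidized token price can flip to a loss at the true cost.

```python
# Illustrative unit economics for an AI feature. All figures are
# hypothetical; the point is only the sign flip, not the magnitudes.

def margin_per_request(revenue_usd, tokens, price_per_million_usd):
    """Profit (or loss) per request at a given price per million tokens."""
    return revenue_usd - tokens * price_per_million_usd / 1_000_000

# A feature earning $0.01 per request that burns 5,000 tokens:
subsidized = margin_per_request(0.01, 5_000, 1.0)   # at a subsidized $1 / M tokens
true_cost = margin_per_request(0.01, 5_000, 10.0)   # at a hypothetical $10 / M tokens

print(f"subsidized: {subsidized:+.3f} USD, true cost: {true_cost:+.3f} USD")
```

The same request that clears $0.005 at the subsidized price loses $0.04 at the assumed true cost, which is the "use cases shrivel" mechanism in miniature.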

      • vacuumflower@lemmy.sdf.org
        link
        fedilink
        English
        arrow-up
        6
        ·
        5 days ago

        Models are becoming more optimized. I’ve recently tried LFM2.5, the small version, and it’s ridiculously close in usefulness to Qwen3.5, for example. Or RNJ-1.

        To maintain, meaning keeping the datasets current - well, that’s sort of expensive, but they were assembling those as a side effect of their main businesses anyway.

        So this is not what’ll kill them. Their size will. These are very big companies, with lots of internal corruption and inefficiency pulling them down. As for the new AI companies, the ones centered around specific products are, I think, going to survive; some will die, but I’d expect LiquidAI or Anthropic or such to still be around some time after the crash.

        The crash might coincide with a bubble burst, but notice how this family of technologies really is delivering results. Instead of a bunch of specialized applications, people are asking LLMs and often getting good-enough answers. LLM agents can retrieve data from web services, perform operations, and assist in using tools.

        You shouldn’t look at the big ones in the cloud, but rather at what value local LLMs give you for the energy spent. Right now it’s not that good, but honestly approaching good. I don’t feel like they’ve stopped getting better. Human time is still more expensive. The tools are there and being improved, and humans are slowly gaining experience in using them, which makes them more efficient at various tasks.

        For all kinds of reference and knowledge tools, it’s what Google was for search.

        And there’s one just amazing thing about these models - they are self-contained, even if some can use tools to access external sources. Our corporate overlords have spent 20 years building a dependent, networked world, only to break it by popularizing a technology that almost neuters it. They probably thought they were reaping the crops of the web for themselves; instead, they taught everyone that you don’t have to eat at the diner, you can take the food home.

        • CileTheSane@lemmy.ca
          link
          fedilink
          English
          arrow-up
          25
          ·
          5 days ago

          Only people who know very little about a field feel like AI “is good enough” for that field. Experts in a field will universally say that AI is shit in their field.

          LLMs are the extreme example of “the dumb man’s idea of a smart man.” It sounds like it knows what it’s talking about so people ignorant on the subject don’t know it’s full of shit.

          • jj4211@lemmy.world
            link
            fedilink
            English
            arrow-up
            8
            ·
            5 days ago

            I agree with you, and I consider it similar to the “Hollywood effect”: ask any expert to review typical depictions of their expertise in film and TV, and they will mostly groan at the inaccuracies that most people won’t catch.

            Problem is that if you compare the works that do it ‘right’ to the ones that do it ‘wrong’, there’s no correlation between doing it right and being more popular, the horribly wrong depictions get plenty of ratings regardless.

            Now one might reasonably argue ‘sure, but that’s purely fiction anyway, if it had real consequences, that would actually matter’, except it constantly happens in real world situations.

            My work colleague picked up his car from some mechanic chain after having it “fixed” and took us to lunch. There was this awful squeal as he started the car, and I asked why it was making that noise right after getting fixed. He said, “Oh, the staff told me that cars just sound like that after a repair until the parts break in” - and that bullshit worked to get him to pay and walk out the door. I asked if I could take a quick look under his hood, and there was a flashlight wedged against a belt. He just laughed it off, said “hey, free flashlight, thanks for figuring that out”, and a few months later he mentioned going back to the exact same place for something else.

            A few days ago I went to a hardware store, and their site said they had the item, but under location it said “see associate”. The first associate checked his device and didn’t understand what the deal was, so he said “Oh, go over there and ask John, he knows all this stuff”.

            Ok, so I walk over to John, who takes one glance and confidently says “oh yeah, that stuff is in a cage in the back row, locked up; just go up to the cage and press the button to get someone to get it”. I think “ok, good, a guy who really knows his stuff, and the other staff recognize him for it”. I roll up to the cage, look in, and realize “uh oh, this is not the type of stuff I’m looking for; he made a pretty amateur mistake”, but I push the button anyway.

            I show my phone to the guy who comes up and say that “John” said it would be here but I couldn’t see it, and at the mention of “John” the guy clearly rolled his eyes; it was abundantly clear that John’s “expertise” was a repeated annoyance for him. The actual answer is that they kept that stuff in back, and the employees are all supposed to see the notation in their devices telling them this, but none of them seem to figure it out, and John just keeps sending people to his department instead.

            This has also come up in use of AI. I offered that my group could crank out a quick tool to handle something that could be a problem, and one of the people said “in this new era, we don’t need you for this quick tool, I just asked Claude and it made me this application”. So I tested it and reported that (a) it didn’t actually work: it produced stuff that looked right, but the actual tool wouldn’t accept it because it didn’t use the right syntax, and (b) even if it did work, it faked authentication and had a huge vulnerability. He just laughed it off and said “guess LLMs sometimes aren’t perfect yet”; no consequences for what could have been a disastrous tool, no serious change in stance on using LLMs, and I am pretty sure the audience found the report that it didn’t work to be annoyingly buzzkill and were rooting for the LLM to do all the work instead. People who need your expertise are desperate to not need it anymore, are willing to believe anything that enables that, and will accept a lot of badness just to not be dependent on you.

            AI produces what is seen as a plausible narrative, and a plausible narrative can win even when the facts are against it. To be very charitable, a quick, “usually correct” answer is indeed frequently good enough for a lot of purposes, and an LLM’s speed at generating output can’t be beat.

          • Croquette@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            7
            ·
            5 days ago

            The problem is that there are a lot of these people who think LLMs are good enough, and many of them are in decision-making positions, so we’re getting raked no matter what.

          • iegod@lemmy.zip
            link
            fedilink
            English
            arrow-up
            2
            ·
            5 days ago

            A lot of fields don’t require doctorate levels of expertise to render effective business services. I’ve seen first-hand companies replace thousands of employees and shutter divisions because their AI counterpart had been doing the job equally well, quantitatively, and faster. Perfect is the enemy of good enough, in most cases, as they say.

            Lemmy is filled to the brim with llm haters but you’re not only a minority, you’re probably also closing doors on the future trajectory of tech in business.

            • CileTheSane@lemmy.ca
              link
              fedilink
              English
              arrow-up
              5
              ·
              5 days ago

              Lemmy is filled to the brim with llm haters but you’re not only a minority, you’re probably also closing doors on the future trajectory of tech in business.

              “Think of the shareholder value of firing all these people!”

              Also, I call bullshit. I’ve seen many cases of companies replacing their staff with AI, then a month later desperately trying to hire staff again, because the AI is good at “looking like” it can do the job but once in use turns out to be complete shit.

              • iegod@lemmy.zip
                link
                fedilink
                English
                arrow-up
                2
                ·
                5 days ago

                “Think of the shareholder value of firing all these people!”

                This is of course problematic, but not directly the fault of the technology itself. The entire system is problematic, but that’s a digression from the effectiveness of the tech doing the job.

                And the instances I’m talking about were running the AI stack and employee teams in parallel for nearly a year. The replacement wasn’t a “yeah, let’s try this… whoops, that didn’t work”. It was a tried and tested approach, and the employees were made redundant (in the capability sense; the firing followed afterwards).

                • CileTheSane@lemmy.ca
                  link
                  fedilink
                  English
                  arrow-up
                  4
                  ·
                  5 days ago

                  And I give it less than a year before the “oh shit, we really should have humans overseeing this” moment hits.

            • Hanrahan@slrpnk.net
              link
              fedilink
              English
              arrow-up
              2
              ·
              5 days ago

              Perhaps, but one example: Commonwealth Bank (the largest Australian bank, and in the top 10 worldwide AFAIK) said they were dismissing thousands of staff because of AI; it turned out they were just offshoring. The latter is seen positively, apparently; the former not so much.

            • CileTheSane@lemmy.ca
              link
              fedilink
              English
              arrow-up
              4
              ·
              edit-2
              5 days ago

              I agree anyone using an LLM is a bad craftsman, because they’re using a hammer to drive in a screw.

              • vacuumflower@lemmy.sdf.org
                link
                fedilink
                English
                arrow-up
                2
                ·
                5 days ago

                All LLM use is using a tool for the wrong task then, in your opinion? So in the composite object of “LLM” what is the tool and what is the task?

                • CileTheSane@lemmy.ca
                  link
                  fedilink
                  English
                  arrow-up
                  3
                  ·
                  5 days ago

                  So in the composite object of “LLM” what is the tool and what is the task?

                  The tool is a “Large Language Model” and the task is “model language and mimic human speech.”

                  The task is not “Provide accurate information” or “write code” or “provide legal advice” or “Diagnose these symptoms” or “provide customer service” or “manage a database”.

        • moto@programming.dev
          link
          fedilink
          English
          arrow-up
          3
          ·
          5 days ago

          I like local LLMs as much as the next person, but the issue is that they don’t scale the way companies need them to.

          As a personal assistant? Sure, I agree. They’re useful at times. But as soon as you need multiple to run simultaneously you’re gonna hit resource issues.

          What Oracle and others were banking on is that you’d have engineers and others running a lot of agents in parallel, composing different things together, or one input that multiple server-side agents take and execute numerous tasks on. That’s something you can’t run on an individual machine right now, and with the way they currently work, I don’t envision you will anytime soon.
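The resource ceiling here can be sketched with rough arithmetic (all numbers invented for illustration): even if agents share one copy of the model weights, each concurrent agent needs its own KV cache, so a single machine tops out quickly.

```python
# Rough capacity estimate for concurrent local agents. Numbers are
# illustrative; real usage also depends on context length, quantization,
# and the runtime's memory management.

def max_concurrent_agents(weights_gb, kv_cache_gb_per_agent,
                          total_gb, reserved_gb=2.0):
    """Agents that fit if weights are shared but each agent gets its own KV cache."""
    available = total_gb - reserved_gb - weights_gb
    return max(0, int(available // kv_cache_gb_per_agent))

# A ~7B model quantized to ~4 GB, ~1.5 GB of cache per agent, on a 16 GB GPU:
print(max_concurrent_agents(4.0, 1.5, 16.0))  # 6
```

Six-ish agents on a hefty consumer card is fine for a personal assistant, but nowhere near the hundreds of parallel agents the serverside pitch assumes.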

          • vacuumflower@lemmy.sdf.org
            link
            fedilink
            English
            arrow-up
            1
            ·
            5 days ago

            There are lightweight models as good as some heavier ones. It’s a bit like Intel’s advertised tick-tock process: heavy, memory-hungry models are the “tick”, but there’s a “tock” - say, the light version of the “lfm2.5-thinking” model in the ollama repository seems almost as good as qwen3.5 to me, except it’s very lightweight and lightning-fast by comparison.

            These things are being optimized. It’s just that in the market-capture phase nobody bothered.

            That they are not being used correctly - yeah, absolutely. My idea of their proper use is some graph-based system, with each node processed by a chosen LLM (or just a piece of logic) with a select set of tools, actions, and choices available at each node. A bit like ComfyUI, but something saner than a zoom-based web UI - more like the macOS Automator application.
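That graph idea can be sketched in a few lines: each node pairs a handler (an LLM or tool call in practice, plain stubs here) with a routing function that picks the next node. The node names and the fake classify/summarize handlers are invented for illustration.

```python
# Minimal node-graph pipeline: each node = (handler, router). In a real
# system the handlers would be LLM or tool calls; here they are stubs.

def classify(state):
    text = state["text"]
    state["kind"] = "question" if text.rstrip().endswith("?") else "statement"
    return state

def summarize(state):
    state["summary"] = state["text"][:40]  # stand-in for an LLM summary call
    return state

NODES = {
    "classify": (classify, lambda s: "summarize"),
    "summarize": (summarize, lambda s: None),  # terminal node
}

def run_graph(start, state):
    node = start
    while node is not None:
        handler, route = NODES[node]
        state = handler(state)
        node = route(state)
    return state

result = run_graph("classify", {"text": "Are local models good enough yet?"})
print(result["kind"], "->", result["summary"])
```

Swapping a handler for a model call, or making a router branch on the handler’s output, gives the per-node control over tools and choices described above.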

        • anomnom@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          3
          ·
          5 days ago

          Even if local models are good, the big companies are making local computing more expensive than cloud tokens by colluding with RAM and storage makers to restrict supply.