• surph_ninja@lemmy.world · 13 points · 2 hours ago

    You assume humans do the opposite? We literally institutionalize humans who don’t follow set patterns.

  • Grizzlyboy@lemmy.zip · 3 points · 2 hours ago

    What a dumb title. I proved it by asking a series of questions. It’s not AI, stop calling it AI, it’s a dumb af language model. Can you get a ton of help from it, as a tool? Yes! Can it reason? NO! It never could and for the foreseeable future, it will not.

    It’s phenomenal at patterns, much much better than us meat peeps. That’s why they’re accurate as hell when it comes to analyzing medical scans.

    • finitebanjo@lemmy.world · 12 points · 3 hours ago

      That’s not really a valid argument for why, but yes, the models that use training data to assemble statistical models are all bullshitting. TBH idk how people can convince themselves otherwise.

      • Encrypt-Keeper@lemmy.world · 7 points · 2 hours ago

        TBH idk how people can convince themselves otherwise.

        They don’t convince themselves. They’re convinced by the multi-billion-dollar corporations pouring unholy amounts of money into not only the development of AI, but its marketing. Marketing designed not only to convince them that AI is something it’s not, but also that anyone who says otherwise (like you) is just a luddite who is going to be “left behind”.

      • turmacar@lemmy.world · 8 points · 3 hours ago

        I think because it’s language.

        There’s a famous quote from Charles Babbage, from when he presented his difference engine (a gear-based calculator): someone asked “if you put in the wrong figures, will the correct ones be output?”, and Babbage couldn’t understand how someone could so thoroughly misunderstand that the machine is just a machine.

        People are people; the main thing that’s changed since the cuneiform copper customer complaint is our materials science and networking ability. Most people just assume that the things they interact with every day work the way they appear to on the surface.

        And nothing other than a person can do math problems or talk back to you. So people assume that means intelligence.

        • finitebanjo@lemmy.world · 7 points · 2 hours ago

          I often feel like I’m surrounded by idiots, but even I can’t begin to imagine what it must have felt like to be Charles Babbage explaining computers to people in 1840.

  • Jhex@lemmy.world · 39 points · 5 hours ago

    this is so Apple, claiming to invent or discover something “first” 3 years later than the rest of the market

  • technocrit@lemmy.dbzer0.com · 27 points · 5 hours ago

    Why would they “prove” something that’s completely obvious?

    The burden of proof is on the grifters who have overwhelmingly been making false claims and distorting language for decades.

    • TheRealKuni@midwest.social · 17 points · 3 hours ago

      Why would they “prove” something that’s completely obvious?

      I don’t want to be critical, but I think if you step back a bit and look and what you’re saying, you’re asking why we would bother to experiment and prove what we think we know.

      That’s a perfectly normal and reasonable scientific pursuit. Yes, in a rational society the burden of proof would be on the grifters, but that’s never how it actually works. It’s always the doctors disproving the cure-all, not the snake oil salesmen failing to prove their own product.

      There is value in this research, even if it fits what you already believe on the subject. I would think you would be thrilled to have your hypothesis confirmed.

    • yeahiknow3@lemmings.world · 17 points · 5 hours ago

      They’re just using the terminology that’s widespread in the field. In a sense, the paper’s purpose is to prove that this terminology is unsuitable.

      • technocrit@lemmy.dbzer0.com · 6 points · 5 hours ago

        I understand that people in this “field” regularly use pseudo-scientific language (I actually deleted that part of my comment).

        But the terminology has never been suitable so it shouldn’t be used in the first place. It pre-supposes the hypothesis that they’re supposedly “disproving”. They’re feeding into the grift because that’s what the field is. That’s how they all get paid the big bucks.

  • Nanook@lemm.ee · 153 points · 8 hours ago

    lol is this news? I mean we call it AI, but it’s just an LLM and variants; it doesn’t think.

      • kadup@lemmy.world · 26 points · 6 hours ago

        Apple is significantly behind and arrived late to the whole AI hype, so of course it’s in their absolute best interest to keep showing how LLMs aren’t special or amazingly revolutionary.

        They’re not wrong, but the motivation is also pretty clear.

        • Optional@lemmy.world · 8 points · 4 hours ago

          “Late to the hype” is actually a good thing. Gen AI is a scam wrapped in idiocy wrapped in a joke. That Apple is slow to ape the idiocy of microsoft is just fine.

        • MCasq_qsaCJ_234@lemmy.zip · 7 points · 5 hours ago

          They need to convince investors that this delay wasn’t due to incompetence. The problem is that this will only be somewhat effective as long as there isn’t an innovation that makes AI more effective.

          If that happens, Apple shareholders will, at best, ask the company to increase investment in that area or, at worst, to restructure the company, which could also mean a change in CEO.

        • dubyakay@lemmy.ca · 6 points · 6 hours ago

          Maybe they are so far behind because they jumped on the same train but then failed to achieve what they wanted based on the claims. And then they started digging around.

    • JohnEdwa@sopuli.xyz · 14 points · 7 hours ago

      "It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’." -Pamela McCorduck´.
      It’s called the AI Effect.

      As Larry Tesler puts it, “AI is whatever hasn’t been done yet.”

      • vala@lemmy.world · 3 points · 2 hours ago

        Yesterday I asked an LLM “how much energy is stored in a grand piano?” It responded by saying there is no energy stored in a grand piano because it doesn’t have a battery.

        Any reasoning human would have understood that question to be referring to the tension in the strings.

        Another example is asking “does lime cause kidney stones?”. It didn’t assume I meant lime the mineral and went with lime the citrus fruit instead.

        Once again a reasoning human would assume the question is about the mineral.

        Ask these questions again in a slightly different way and you might get a correct answer, but it won’t be because the LLM was thinking.

        • postmateDumbass@lemmy.world · 1 point · 20 minutes ago

          Honestly, I thought about the chemical energy in the materials constructing the piano and what energy burning it would release.

        • antonim@lemmy.dbzer0.com · 4 points · 2 hours ago

          But 90% of “reasoning humans” would answer just the same. Your questions are based on some non-trivial knowledge of physics, chemistry and medicine that most people do not possess.

      • technocrit@lemmy.dbzer0.com · 10 points · 5 hours ago

        I’m going to write a program to play tic-tac-toe. If y’all don’t think it’s “AI”, then you’re just haters. Nothing will ever be good enough for y’all. You want scientific evidence of intelligence?!?! I can’t even define intelligence so take that! \s

        Seriously tho. This person is arguing that a checkers program is “AI”. It kinda demonstrates the loooong history of this grift.

        • JohnEdwa@sopuli.xyz · 7 points · 5 hours ago

          It is. And has always been. “Artificial Intelligence” doesn’t mean a feeling, thinking robot person (that would fall under AGI or artificial consciousness); it’s a vast field of research in computer science with many, many things under it.

          • Endmaker@ani.social · 6 points · 5 hours ago

            ITT: people who obviously did not study computer science or AI at at least an undergraduate level.

            Y’all are too patient. I can’t be bothered to spend the time to give people free lessons.

            • antonim@lemmy.dbzer0.com · 1 point · 53 minutes ago

              Wow, I would deeply apologise on the behalf of all of us uneducated proles having opinions on stuff that we’re bombarded with daily through the media.

        • LandedGentry@lemmy.zip · 3 points · 5 hours ago

          Yeah that’s exactly what I took from the above comment as well.

          I have a pretty simple bar: until we’re debating the ethics of turning it off or otherwise giving it rights, it isn’t intelligent. No it’s not scientific, but it’s a hell of a lot more consistent than what all the AI evangelists espouse. And frankly if we’re talking about the ethics of how to treat something we consider intelligent, we have to go beyond pure scientific benchmarks anyway. It becomes a philosophy/ethics discussion.

          Like crypto it has become a pseudo religion. Challenges to dogma and orthodoxy are shouted down, the non-believers are not welcome to critique it.

      • kadup@lemmy.world · 12 points · 6 hours ago

        That entire paragraph is much better at supporting the precise opposite argument. Computers can beat Kasparov at chess, but they’re clearly not thinking when making a move - even if we use the most open biological definitions for thinking.

        • Grimy@lemmy.world · 9 points · 6 hours ago

          No, it shows how certain people misunderstand the meaning of the word.

          You have called NPCs in video games “AI” for a decade, yet you were never implying they were somehow intelligent. The whole argument is strangely inconsistent.

          • technocrit@lemmy.dbzer0.com · 3 points · 5 hours ago

            Who is “you”?

            Just because some dummies supposedly think that NPCs are “AI”, that doesn’t make it so. I don’t consider checkers to be a litmus test for “intelligence”.

            • Grimy@lemmy.world · 4 points · 5 hours ago

              “You” applies to anyone who doesn’t understand what AI means. It’s an umbrella term for a lot of things.

              NPCs ARE AI. AI doesn’t mean “human-level intelligence” and never did. Read the wiki if you need help understanding.

    • Melvin_Ferd@lemmy.world · 11 points · 8 hours ago

      This is why I say these articles are so similar to how right wing media covers issues about immigrants.

      There’s some weird media push to convince the left to hate AI. Think of all the headlines for these issues. There are so many similarities. They’re taking jobs. They are a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this where they take something known and twist it to make it sound nefarious, to keep the story alive and avoid decay of interest.

      Then when they pass laws, we’re all primed to accept them removing whatever it is that advantages them and disadvantages us.

      • technocrit@lemmy.dbzer0.com · 5 points · 5 hours ago

        This is why I say these articles are so similar to how right wing media covers issues about immigrants.

        Maybe the actual problem is people who equate computer programs with people.

        Then when they pass laws, we’re all primed to accept them removing whatever it is that advantages them and disadvantages us.

        You mean laws like this? jfc.

        https://www.inc.com/sam-blum/trumps-budget-would-ban-states-from-regulating-ai-for-10-years-why-that-could-be-a-problem-for-everyday-americans/91198975

        • Melvin_Ferd@lemmy.world · 1 point · 4 hours ago

          Literally what I’m talking about. They have been pushing anti-AI propaganda to alienate the left from embracing it while the right embraces it. You have such a blind spot to this that you can’t even see you’re making my argument for me.

          • antonim@lemmy.dbzer0.com · 1 point · 56 minutes ago

            That depends on your assumption that the left would have anything relevant to gain by embracing AI (whatever that’s actually supposed to mean).

            • Melvin_Ferd@lemmy.world · 1 point · 3 minutes ago

              What isn’t there to gain?

              Its power lies in ingesting language and producing infinite variations. We can feed it talking points, ask it to refine our ideas, test their logic, and even request counterarguments to pressure-test our stance. It helps us build stronger, more resilient narratives.

              We can use it to make memes. Generate images. Expose logical fallacies. Link to credible research. It can detect misinformation in real-time and act as a force multiplier for anyone trying to raise awareness or push back on disinfo.

              Most importantly, it gives a voice to people with strong ideas who might not have the skills or confidence to share them. Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

              Sure, it has flaws. But rejecting it outright while the right embraces it? That’s beyond shortsighted; it’s self-sabotage. And unfortunately, after the last decade, that kind of misstep is par for the course.

      • hansolo@lemmy.today · 8 points · 7 hours ago

        Because it’s a fear-mongering angle that still sells. AI has been a vehicle for scifi for so long that trying to convince Boomers that it won’t kill us all is the hard part.

        I’m a moderate user of LLMs for code and a skeptic of their abilities, but 5 years from now, when we are leveraging ML models for groundbreaking science and haven’t been nuked by SkyNet, all of this will look quaint and silly.

  • brsrklf@jlai.lu · 36 points · 7 hours ago

    You know, despite not really believing LLM “intelligence” works anywhere like real intelligence, I kind of thought maybe being good at recognizing patterns was a way to emulate it to a point…

    But that study seems to prove they’re still not even good at that. At first I was wondering how hard the puzzles must have been, and then there’s a bit about LLMs finishing 100-move towers of Hanoï (on which they were trained) and failing 4-move river crossings. Logically, those problems are very similar… Also, failing to apply a step-by-step solution they were given.

    • auraithx@lemmy.dbzer0.com · 28 points · 7 hours ago

      This paper doesn’t prove that LLMs aren’t good at pattern recognition, it demonstrates the limits of what pattern recognition alone can achieve, especially for compositional, symbolic reasoning.

    • technocrit@lemmy.dbzer0.com · 9 points · 5 hours ago

      Computers are awesome at “recognizing patterns” as long as the pattern is a statistical average of some possibly worthless data set. And it really helps if the computer is set up ahead of time to recognize predetermined patterns.

  • sev@nullterra.org · 40 points · 8 hours ago

    Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment-to-moment, means that it would be a physical impossibility for any LLM to achieve any real “reasoning” processes.
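    For readers who haven’t seen one, here is a minimal sketch of the kind of word-level Markov chain being compared to (the toy training sentence is purely illustrative); the “bigger and bigger token sets” mentioned above correspond to keying the table on longer contexts:

```python
import random
from collections import defaultdict

# Minimal bigram Markov chain: the "model" is just a table of what followed
# each word in the training text, and generation is a memoryless walk over it.

def train_bigram_chain(text):
    table = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)          # record every observed successor
    return table

def generate(table, start, n=10):
    word, out = start, [start]
    for _ in range(n):
        if word not in table:            # dead end: no observed successor
            break
        word = random.choice(table[word])
        out.append(word)
    return " ".join(out)

chain = train_bigram_chain("the cat sat on the mat and the dog sat on the rug")
print(generate(chain, "the"))
```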

    • auraithx@lemmy.dbzer0.com · 12 points · 7 hours ago

      Unlike Markov models, modern LLMs use transformers that attend to full contexts, enabling them to simulate structured, multi-step reasoning (albeit imperfectly). While they don’t initiate reasoning like humans, they can generate and refine internal chains of thought when prompted, and emerging frameworks (like ReAct or Toolformer) allow them to update working memory via external tools. Reasoning is limited, but not physically impossible, it’s evolving beyond simple pattern-matching toward more dynamic and compositional processing.
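      As a concrete illustration of the attention mechanism being described, here is a minimal numpy sketch with toy dimensions and no learned weights (not any specific model’s implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a context-dependent weighted mix of all value rows."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token-to-token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the whole context
    return weights @ V

# Toy example: 3 tokens with 4-dimensional embeddings, self-attention (Q = K = V).
x = np.random.rand(3, 4)
out = scaled_dot_product_attention(x, x, x)
print(out.shape)   # (3, 4): each token's new representation is conditioned on all tokens
```

      Unlike a fixed-order transition table, the weights here are recomputed for every input, which is the “dynamic reweighting” referred to above.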

      • vrighter@discuss.tchncs.de · 1 point · 1 hour ago

        Previous input goes in. A completely static, prebuilt model processes it and comes up with a probability distribution.

        There is no “unlike Markov chains”. They are Markov chains. Ones with a long context (a Markov chain also makes use of all the context provided to it, so I don’t know what you’re on about there). LLMs are just a (very) lossy compression scheme for the state transition table. Computed once, applied blindly to any context fed in.

        • auraithx@lemmy.dbzer0.com · 2 points · 1 hour ago

          LLMs are not Markov chains, even extended ones. A Markov model, by definition, relies on a fixed-order history and treats transitions as independent of deeper structure. LLMs use transformer attention mechanisms that dynamically weigh relationships between all tokens in the input—not just recent ones. This enables global context modeling, hierarchical structure, and even emergent behaviors like in-context learning. Markov models can’t reweight context dynamically or condition on abstract token relationships.

          The idea that LLMs are “computed once” and then applied blindly ignores the fact that LLMs adapt their behavior based on input. They don’t change weights during inference, true—but they do adapt responses through soft prompting, chain-of-thought reasoning, or even emulated state machines via tokens alone. That’s a powerful form of contextual plasticity, not blind table lookup.

          Calling them “lossy compressors of state transition tables” misses the fact that the “table” they’re compressing is not fixed—it’s context-sensitive and computed in real time using self-attention over high-dimensional embeddings. That’s not how Markov chains work, even with large windows.

          • vrighter@discuss.tchncs.de · 1 point · 1 hour ago

            Their input is the context window. Markov chains also use their whole context window. LLMs are a novel implementation that can work with much longer contexts, but as soon as something slides out of the window, it’s forgotten, just like in any other Markov chain. They don’t adapt. You add their token to the context, slide the oldest one out, and then you have a different context, on which you run the same thing again. A normal Markov chain will also give you a different output if you give it a different context. Their biggest weakness is that they don’t and can’t adapt. You are confusing the encoding of the context with the model itself.

            Just to see how static the model is, try setting temperature to 0 and giving it the same context, i.e. only try to predict one token with the exact same context each time. As soon as you try to predict a 2nd token, you’ve just changed the input and run the thing again. It’s not adapting; you asked it something different, so it came up with a different answer.
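            A minimal sketch of the temperature-0 experiment described here, using a hypothetical toy_model lookup table as a stand-in for the frozen network (the table and tokens are invented purely for illustration):

```python
def toy_model(context):
    # Stand-in for a frozen LLM: a fixed mapping from (recent) context to
    # next-token probabilities. A real model computes this with a neural net,
    # but its weights are just as static at inference time.
    table = {
        ("the", "cat"): {"sat": 0.7, "ran": 0.3},
        ("cat", "sat"): {"on": 0.9, "up": 0.1},
        ("sat", "on"): {"the": 0.8, "a": 0.2},
    }
    return table.get(tuple(context[-2:]), {"<eos>": 1.0})

def greedy_decode(model, context, n_tokens):
    # Temperature 0 = argmax. Each step applies the same static function to a
    # slightly different context; nothing in the model itself changes.
    out = list(context)
    for _ in range(n_tokens):
        probs = model(out)                     # same frozen model every step
        out.append(max(probs, key=probs.get))  # deterministic pick
    return out

print(greedy_decode(toy_model, ["the", "cat"], 3))
# ['the', 'cat', 'sat', 'on', 'the'] -- identical every run; only the context changes between steps
```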

            • auraithx@lemmy.dbzer0.com · 2 points · 1 hour ago

              While both Markov models and LLMs forget information outside their window, that’s where the similarity ends. A Markov model relies on fixed transition probabilities and treats the past as a chain of discrete states. An LLM evaluates every token in relation to every other using learned, high-dimensional attention patterns that shift dynamically based on meaning, position, and structure.

              Changing one word in the input can shift the model’s output dramatically by altering how attention layers interpret relationships across the entire sequence. It’s a fundamentally richer computation that captures syntax, semantics, and even task intent, which a Markov chain cannot model regardless of how much context it sees.

        • auraithx@lemmy.dbzer0.com · 10 points · 7 hours ago

          The paper doesn’t say LLMs can’t reason, it shows that their reasoning abilities are limited and collapse under increasing complexity or novel structure.

          • technocrit@lemmy.dbzer0.com · 3 points · 5 hours ago

            The paper doesn’t say LLMs can’t reason

            Authors gotta get paid. This article is full of pseudo-scientific jargon.

          • snooggums@lemmy.world · 4 points · 6 hours ago

            I agree with the author.

            If these models were truly “reasoning,” they should get better with more compute and clearer instructions.

            The fact that they only work up to a certain point despite increased resources is proof that they are just pattern matching, not reasoning.

            • auraithx@lemmy.dbzer0.com · 7 points · 6 hours ago

              Performance eventually collapses due to architectural constraints; this mirrors cognitive overload in humans: reasoning isn’t just about adding compute, it requires mechanisms like abstraction, recursion, and memory. The models’ collapse doesn’t prove “only pattern matching”; it highlights that today’s models simulate reasoning in narrow bands but lack the structure to scale it reliably. That is a limitation of implementation, not a disproof of emergent reasoning.

                • auraithx@lemmy.dbzer0.com · 3 points · 5 hours ago

                  Brother, you better hope it does, because even if emissions dropped to 0 tonight, the planet wouldn’t stop warming and it wouldn’t stop what’s coming for us.

      • Riskable@programming.dev · 2 points · 5 hours ago

        I’m not convinced that humans don’t reason in a similar fashion. When I’m asked to produce pointless bullshit at work my brain puts in a similar level of reasoning to an LLM.

        Think about “normal” programming: An experienced developer (that’s self-trained on dozens of enterprise code bases) doesn’t have to think much at all about 90% of what they’re coding. It’s all bog standard bullshit so they end up copying and pasting from previous work, Stack Overflow, etc because it’s nothing special.

        The remaining 10% is “the hard stuff”. They have to read documentation, search the Internet, and then—after all that effort to avoid having to think—they sigh and actually start thinking in order to program the thing they need.

        LLMs go through similar motions behind the scenes! Probably because they were created by software developers, but they still fail at that last 10%: the stuff that requires actual thinking.

        Eventually someone is going to figure out how to auto-generate LoRAs based on test cases combined with trial and error that then get used by the AI model to improve itself and that is when people are going to be like, “Oh shit! Maybe AGI really is imminent!” But again, they’ll be wrong.

        AGI won’t happen until AI models get good at retraining themselves with something better than basic reinforcement learning. In order for that to happen you need the working memory of the model to be nearly as big as the hardware that was used to train it. That, and loads and loads of spare matrix math processors ready to go for handling that retraining.

    • kescusay@lemmy.world · 13 points · 7 hours ago

      I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy “dataset” that a proper neural network incorporates and reasons with, and the LLM could be kept real-time updated (sort of) with MCP servers that incorporate anything new it learns.

      But I don’t think we’re anywhere near there yet.

      • Riskable@programming.dev · 5 points · 5 hours ago

        The only reason we’re not there yet is memory limitations.

        Eventually some company will come out with AI hardware that lets you link up a petabyte of ultra fast memory to chips that contain a million parallel matrix math processors. Then we’ll have an entirely new problem: AI that trains itself incorrectly too quickly.

        Just you watch: The next big breakthrough in AI tech will come around 2032-2035 (when the hardware is available) and everyone will be bitching that “chain reasoning” (or whatever the term turns out to be) isn’t as smart as everyone thinks it is.

  • mfed1122@discuss.tchncs.de · 19 points · 7 hours ago

    This sort of thing has been published a lot for a while now, but why is it assumed that this isn’t what human reasoning consists of? Isn’t all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me all these studies that prove they’re “just” memorizing patterns don’t prove anything other than that, unless coupled with research on the human brain to prove we do something different.

    • technocrit@lemmy.dbzer0.com · 6 points · 5 hours ago

      why is it assumed that this isn’t what human reasoning consists of?

      Because science doesn’t work like that. Nobody should assume wild hypotheses without any evidence whatsoever.

      Isn’t all our reasoning ultimately a form of pattern memorization? I sure feel like it is.

      You should get a job in “AI”. smh.

      • mfed1122@discuss.tchncs.de · 4 points · 5 hours ago

        Sorry, I can see why my original post was confusing, but I think you’ve misunderstood me. I’m not claiming that I know the way humans reason. In fact you and I are in total agreement that it is unscientific to assume hypotheses without evidence. This is exactly what I am saying is the mistake in the statement “AI doesn’t actually reason, it just follows patterns”. That is unscientific if we don’t know whether “actually reasoning” consists of following patterns, or something else. As far as I know, the jury is out on the fundamental nature of how human reasoning works. It’s my personal, subjective feeling that human reasoning works by following patterns. But I’m not saying “AI does actually reason like humans because it follows patterns like we do”. Again, I see how what I said could have come off that way. What I mean more precisely is:

        It’s not clear whether AI’s pattern-following techniques are the same as human reasoning, because we aren’t clear on how human reasoning works. My intuition tells me that humans doing pattern following seems equally as valid of an initial guess as humans not doing pattern following, so shouldn’t we have studies to back up the direction we lean in one way or the other?

        I think you and I are in agreement, we’re upholding the same principle but in different directions.

    • LesserAbe@lemmy.world · 10 points · 7 hours ago

      Agreed. We don’t seem to have a very cohesive idea of what human consciousness is or how it works.

        • LesserAbe@lemmy.world · 1 point · 38 minutes ago

          I think you’re misunderstanding the argument. I haven’t seen people here saying that the study was incorrect so far as it goes, or that AI is equal to human intelligence. But it does seem like it has a kind of intelligence. “Glorified auto complete” doesn’t seem sufficient, because it has a completely different quality from any past tool. Supposing yes, on a technical level the software pieces together probability based on overtraining. Can we say with any precision how the human mind stores information and how it creates intelligence? Maybe we’re stumbling down the right path but need further innovations.

    • Endmaker@ani.social · 9 points · 7 hours ago

      You’ve hit the nail on the head.

      Personally, I wish there were more progress in our understanding of human intelligence.

      • technocrit@lemmy.dbzer0.com · 4 points · 5 hours ago

        Their argument is that we don’t understand human intelligence so we should call computers intelligent.

        That’s not hitting any nail on the head.

    • count_dongulus@lemmy.world · 4 points · 6 hours ago

      Humans apply judgment, because they have emotion. LLMs do not possess emotion. Mimicking emotion without ever actually having the capability of experiencing it is sociopathy. An LLM would at best apply patterns like a sociopath.

      • mfed1122@discuss.tchncs.de · 5 points · 6 hours ago

        But for something like solving a Towers of Hanoi puzzle, which is what this study is about, we’re not looking for emotional judgements - we’re trying to evaluate the logical reasoning capabilities. A sociopath would be equally capable of solving logic puzzles compared to a non-sociopath. In fact, simple computer programs do a great job of solving these puzzles, and they certainly have nothing like emotions. So I’m not sure that emotions have much relevance to the topic of AI or human reasoning and problem solving, at least not this particular aspect of it.

        As for analogizing LLMs to sociopaths, I think that’s a bit odd too. The reason why we (stereotypically) find sociopathy concerning is that a person has their own desires which, in combination with a disinterest in others’ feelings, incentivizes them to be deceitful or harmful in some scenarios. But LLMs are largely designed specifically as servile, having no will or desires of their own. If people find it concerning that LLMs imitate emotions, then I think we’re giving them far too much credit as sentient autonomous beings - and this is coming from someone who thinks they think in the same way we do! They think like we do, IMO, but they lack a lot of the other subsystems that are necessary for an entity to function in a way that can be considered as autonomous/having free will/desires of its own choosing, etc.
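        For reference, the Towers of Hanoi puzzle used in the study is exactly the kind of problem a few lines of conventional code solve optimally; a minimal sketch:

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Classic recursive Tower of Hanoi solver: returns the optimal move list."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)   # clear the way for the largest disk
        moves.append((source, target))               # move the largest disk
        hanoi(n - 1, spare, target, source, moves)   # restack the smaller disks on top
    return moves

print(len(hanoi(7)))   # 127 -- always 2**n - 1 moves
```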

  • sp3ctr4l@lemmy.dbzer0.com · 19 points · 7 hours ago

    This has been known for years, this is the default assumption of how these models work.

    You would have to prove that some kind of actual reasoning capacity has arisen as… some kind of emergent complexity phenomenon… not the other way around.

    Corpos have just marketed/gaslit us/themselves so hard that they apparently forgot this.

    • Riskable@programming.dev · 5 points · 5 hours ago

      Define, “reasoning”. For decades software developers have been writing code with conditionals. That’s “reasoning.”

      LLMs are “reasoning”… They’re just not doing human-like reasoning.
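      In that spirit, a trivial sketch of decades-old rule-based “reasoning” via conditionals (the rules and thresholds are invented for illustration):

```python
def loan_decision(credit_score, annual_income, debt):
    # Explicit hand-written rules: the oldest form of machine "reasoning".
    if credit_score < 600:
        return "deny"
    if debt / annual_income > 0.4:
        return "refer to underwriter"
    return "approve"

print(loan_decision(720, 50_000, 10_000))   # approve
```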

      • sp3ctr4l@lemmy.dbzer0.com · 3 points · 4 hours ago

        Howabout uh…

        The ability to take a previously given set of knowledge, experiences and concepts, and combine or synthesize them in a consistent, non-contradictory manner, to generate hitherto unrealized knowledge or concepts, and then also be able to verify that the new knowledge and concepts are actually new and actually valid, or at least be able to propose how one could test whether or not they are valid.

        Arguably this is or involves meta-cognition, but that is what I would say… is the difference between what we typically think of as ‘machine reasoning’, and ‘human reasoning’.

        Now I will grant you that a large number of humans essentially cannot do this: they suck at introspecting and maintaining logical consistency, they are just told ‘this is how things work’, and they never question that until decades later, when their lives force them to address, or dismiss, their own internally inconsistent beliefs.

        But I would also say that this means they are bad at ‘human reasoning’.

        Basically, my definition of ‘human reasoning’ is perhaps more accurately described as ‘critical thinking’.