It’s a fair point, but your assessment is missing one crucial piece of context: my last conversation with CowBee. It was really quite painful, and I’m just not in the mood for another round.
terrific
just terrific
- 0 Posts
- 21 Comments
I’m sorry if I’m dismissive, but I gotta tell you, the last time we talked it felt an awful lot like being lectured. You didn’t really engage with anything I said but rather regurgitated endless theories and facts.
And you are a self-proclaimed Marxist-Leninist, is that not true? Subscribing to a particular narrative is IMO exactly what “dogmatic” means. I’m not saying it’s wrong, it’s truer than most dogmas. But still a dogma.
Oh it’s you again. Last time we talked you lectured me about imperialism. I’m not really interested in a lecture today, or any day. We can have a conversation if you want, but I’m not going to subscribe to your dogma.
Americans think that the US is the centre of the universe 🙄
So Palantir sells a data management tool and deployment support. That shouldn’t really surprise anyone who knows the first thing about data science.
The interesting thing about Palantir isn’t what they sell but how they sell it and who buys it. They clearly market their unremarkable software as an autocrat’s wet dream.
And police and military departments across Europe and the US buy their shit, which says more about those police and military departments than about the software.
Anti-communism is a fancy name for fascism.
As someone who was forced to start using Windows again after ten years of exclusively running Linux: why is it like this? Everything is so crappy and slow!
terrific@lemmy.ml to Technology@lemmy.world • What will the AI revolution mean for the global south? (English · 25 points · 24 days ago)

What AI revolution? All I get is fancy spellcheck and crappy image generation.
It’s hyperbole.
terrific@lemmy.ml to Technology@lemmy.world • Human-level AI is not inevitable. We have the power to change course (English · 2 points · 1 month ago)

I think that’s a very generous use of the word “superintelligent”. They aren’t anything like what I associate with that word anyhow.
I also don’t really think they are knowledge retrieval engines. I use them extensively in my daily work, for example to write emails and generate ideas. But when it comes to facts they are flaky at best. It’s more of a free association game than knowledge retrieval IMO.
terrific@lemmy.ml to Technology@lemmy.world • Human-level AI is not inevitable. We have the power to change course (English · 3 points · 1 month ago)

That’s true in a somewhat abstract way, but I just don’t see any evidence for the claim that it is just around the corner. I don’t see what currently existing technology could facilitate it. Faster-than-light travel could also theoretically be just around the corner, but it would surprise me if it was, because we just don’t have the technology.
On the other hand, the people who push the claim that AGI is just around the corner usually have huge vested interests.
terrific@lemmy.ml to Technology@lemmy.world • Human-level AI is not inevitable. We have the power to change course (English · 1 point · 1 month ago)

I’m not sure I can give a satisfying answer. There are a lot of moving parts here, and a big issue is definitions, which you also touch upon with your reference to Searle.
I agree with the sentiment that there must be some objective measure of reasoning ability. To me, reasoning is more than following logical rules. It’s also about interpreting the intent of the task. The reasoning models are very sensitive to initial conditions and tend to drift when the question is not super precise or if they don’t have sufficient context.
The AI models are, in a sense, very fragile to their input. Organic intelligence, on the other hand, is resilient and heuristic. I don’t have any specific idea for the test, but it should test the ability to solve a very ill-posed problem.
terrific@lemmy.ml to Technology@lemmy.world • Human-level AI is not inevitable. We have the power to change course (English · 1 point · 1 month ago)

I’m not saying that we can’t ever build a machine that can think. You can do some remarkable things with math. I personally don’t think our brains have gradient descent baked in, and I don’t think neural networks are a lot like brains at all.
The stochastic parrot is a useful vehicle for criticism, and I think there is some truth to it. I also think LLMs display some super impressive emergent features, but they are still really far from AGI.
terrific@lemmy.ml to Technology@lemmy.world • Human-level AI is not inevitable. We have the power to change course (English · 3 points · 1 month ago)

I definitely think that’s remarkable. But I don’t think scoring high on an external measure like a test is enough to prove the ability to reason. For reasoning, the process matters, IMO.
Reasoning models work by Chain-of-Thought, which has been shown to provide false reassurances about their actual process: https://arxiv.org/abs/2305.04388
Maybe passing some math test is enough evidence for you, but I think it matters what’s inside the box. To me it only proves that tests are a poor measure of the ability to reason.
terrific@lemmy.ml to Technology@lemmy.world • Human-level AI is not inevitable. We have the power to change course (English · 8 points · 1 month ago)

Do you have any expertise on the issue?
I hold a PhD in probabilistic machine learning and advise businesses on how to use AI effectively for a living, so yes.
IMHO, there is simply nothing indicating that it’s close. Sure, LLMs can do some incredibly clever-sounding word extrapolation, but the current “reasoning models” still don’t actually reason. They are just LLMs with some extra steps.
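To make the “extra steps” claim concrete, here is a toy sketch (entirely hypothetical code, with a stand-in `generate()` function in place of any real base LLM): a common “reasoning” scaffold is just the same next-token predictor wrapped in a step-by-step instruction and a majority vote over sampled answers.

```python
# Toy illustration: a "reasoning model" as a plain LLM plus scaffolding.
# `generate` is a hypothetical stand-in for a base LLM call.

def generate(prompt: str) -> str:
    # Stand-in for an LLM; here it just returns a canned completion.
    return "Step 1: ... Step 2: ... Final answer: 42"

def reason(question: str, samples: int = 3) -> str:
    """Chain-of-thought scaffold: prepend an instruction, sample several
    'thoughts', and keep the most common final answer (self-consistency)."""
    prompt = f"Think step by step, then answer.\nQ: {question}\nA:"
    answers = []
    for _ in range(samples):
        completion = generate(prompt)
        answers.append(completion.split("Final answer:")[-1].strip())
    # Majority vote over the sampled answers.
    return max(set(answers), key=answers.count)

print(reason("What is 6 * 7?"))
```

The model underneath is untouched; only the prompting and the aggregation around it change.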
There is lots of information out there on the topic, so I’m not going to write a long justification here. Gary Marcus has some good points if you want to learn more about what the skeptics say.
terrific@lemmy.ml to Technology@lemmy.world • Human-level AI is not inevitable. We have the power to change course (English · 52 points · 1 month ago)

We’re not even remotely close. The promise of AGI is part of the AI hype machine, and taking it seriously is playing into their hands.
Irrelevant at best, harmful at worst 🤷
terrific@lemmy.ml to Lemmy Shitpost@lemmy.world • If you paid attention in History Class you would understand the reference (67 points · 1 month ago)

Stonetoss is a Nazi.
terrific@lemmy.ml to Technology@lemmy.world • We need to stop pretending AI is intelligent (English · 5 points · 2 months ago)

Neural networks are about as much a model of a brain as a stick man is a model of human anatomy.
I don’t think anybody knows how we actually, really learn. I’m not a neuroscientist (I’m a computer scientist specialised in AI), but I don’t think the mechanism of learning is that well understood.
AI hype people will say that it’s “like a neural network”, but I really doubt that. There is no loss function in the real world, and certainly no way for the brain to perform gradient descent.
terrific@lemmy.ml to Technology@lemmy.world • We need to stop pretending AI is intelligent (English · 8 points · 2 months ago)

I know it’s part of the AI jargon, but using the word “learning” to describe the slow adaptation of massive arrays of single-precision numbers to some loss function is a very generous interpretation of that word, IMO.
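For the curious, that mechanical sense of “learning” can be sketched in a few lines (a toy example with made-up numbers and an arbitrary target, not anyone’s real training code): an array of single-precision floats is nudged downhill on a loss function, step after step, and nothing more.

```python
import numpy as np

# Toy "model": one array of single-precision weights (made-up starting values).
rng = np.random.default_rng(0)
w = rng.normal(size=4).astype(np.float32)

# Arbitrary target vector the weights should "learn" to match.
target = np.array([1.0, -2.0, 0.5, 3.0], dtype=np.float32)

def loss(w):
    # Mean squared error between weights and target.
    return float(np.mean((w - target) ** 2))

lr = 0.1  # learning rate
for step in range(500):
    grad = 2.0 * (w - target) / w.size  # gradient of the MSE
    w = w - lr * grad                   # the entire "learning" step

print(loss(w) < 1e-6)
```

Scale the array up from 4 numbers to hundreds of billions and this loop is, mechanically, most of what training amounts to.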
terrific@lemmy.ml to Technology@lemmy.world • We need to stop pretending AI is intelligent (English · 31 points · 2 months ago)

I’m a computer scientist who has a child, and I don’t think AI is sentient at all. Even before learning a language, children have their own personality and willpower, which is something I don’t see in AI.
I left a well-paid job in the AI industry because the mental gymnastics required to maintain the illusion were too exhausting. I think most people in the industry are aware, at some level, that they have to participate in maintaining the hype to secure their own jobs.
The core of your claim is basically that “people who don’t think AI is sentient don’t really understand sentience”. I think that’s both reductionist and, frankly, a bit arrogant.
Using phrasing such as “necessarily implies” is exactly what makes me call your conversation style “lecturing”.
Is it normal to talk like this in your circles? In my culture it’s a surefire way to antagonize anyone who doesn’t already agree with you.