I doubt they’ll ever come to Europe. They don’t meet even the most basic crash safety standards. These things are designed to annihilate pedestrians, not to reduce harm.
It needs to be way way better than ‘better than average’ if it’s ever going to be accepted by regulators and the public. Without better sensors I don’t believe it will ever make it. Waymo had the right idea here if you ask me.
If anyone was somehow still thinking RoboTaxi is ever going to be a thing: no, it’s not, for reasons like this.
Today’s news:
https://www.turkiyetoday.com/turkiye/turkiye-bans-discord-amid-concerns-over-platform-safety-62896/
For now, Discord users in Türkiye face limited access to the platform, though it remains unclear whether a full ban will be implemented in the coming days.
Still working for me on hotel wifi.
Edit: it won’t launch on my laptop now. Stuck trying to update. Still works on my phone.
I’m sure I’ll never need all 4 of the SCART cables I have, but then maybe I will so I’ll hang onto them just in case.
They’ve committed to supporting AM5 (the LGA socket launched in 2022) through at least 2027.
“We envision other types of more complex guardrails should exist in the future, especially for agentic use cases, e.g., the modern Internet is loaded with safeguards that range from web browsers that detect unsafe websites to ML-based spam classifiers for phishing attempts,” the research paper says.
The thing is, folks know how the safeguards for the ‘modern internet’ actually work, and they’re generally straightforward code. Whereas LLMs are kind of the opposite: a mathematical model that spews out answers. Product managers who think they can be corralled into behaving in a specific, incorruptible way will, I suspect, be disappointed.
There are no M1 devices with less than 8GB of RAM.
The A16 Bionic has a Neural Engine capable of 17 TOPS but only 6GB of RAM.
The M1 had a Neural Engine capable of just 11 TOPS but all M1 chips have at least 8GB of RAM.
So the model could run on an A16 Bionic if it had 8GB of RAM, since it has roughly 55% more TOPS than the M1, but it only has 6GB. Apple have clearly decided that a model small enough to fit just wouldn’t give good enough results.
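The comparison above is just arithmetic on the figures quoted in this thread; a quick sanity check, assuming the 11 and 17 TOPS numbers are accurate:

```python
# Neural Engine throughput figures quoted above (trillions of ops/second)
m1_tops = 11
a16_tops = 17

# Percentage advantage of the A16's Neural Engine over the M1's
pct_more = (a16_tops - m1_tops) / m1_tops * 100
print(f"A16 has {pct_more:.1f}% more TOPS than M1")  # → 54.5% more
```

So compute isn’t the bottleneck here; the 6GB vs 8GB of RAM is.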
Maybe as research progresses they’ll find a way to make it work with a model with fewer parameters but I’m not going to hold my breath.
Yeah, I thought it was an NPU TOPS issue keeping it off the 17 non-Pro. However, since it runs on an M1, I think it’s more to do with needing 8GB of RAM to fit the model.
He called the software integration between the two companies “an unacceptable security violation,” and said Apple has “no clue what’s actually going on.”
I’d be very surprised if corporates weren’t able to simply disable it via MDM on their workers’ phones. Not sure it’s Apple who has ‘no clue’ here.
If they keep burning $100k/week on their Vercel bill, they might not be around that long anyway!
Yup. Investors have convinced themselves that this time AI development is going to grow exponentially. The breathless fantasies they’ve concocted for themselves require it. They’re going to be disappointed.