This is a general tech community, mostly centered around news and end-user technology discussions, so it’s very unlikely you’ll get an answer here. Might want to try asking on Reddit or some dedicated Datto/ConnectWise forum.
Considering that predicting the next word from context is the one thing LLMs are really good at, I just don’t understand why none of these developments have found their way into predictive keyboards.
The problem is that LLMs require a considerable amount of computing power to run, unlike the simple Markov chain predictions that keyboards use. You could use a cloud-based service like ChatGPT or something, but most people wouldn’t want their keyboard sending every keystroke to a remote server… and even for those who didn’t know or care, the round-trip latency wouldn’t be good enough for real-time predictions.
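For context, the “Markov chain” approach is basically just a frequency table of which word tends to follow which. A toy sketch of the idea (hypothetical code, not anything a keyboard vendor actually ships):

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count which word follows which (a first-order Markov chain)."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest(counts: dict, word: str, k: int = 3) -> list:
    """Return the k words most often seen after `word`."""
    return [w for w, _ in counts[word.lower()].most_common(k)]

model = train("the cat sat on the mat and the cat ran under the bed")
print(suggest(model, "the"))  # -> ['cat', 'mat', 'bed']
```

A table lookup like that costs essentially nothing on-device, whereas even a “small” LLM needs billions of operations per predicted token, which is where the latency/battery/privacy trade-offs above come from.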
Now smartphone SoC makers like Qualcomm have started adding NPUs (neural processing units) to their latest chips (such as the Snapdragon 8 Gen 3, featured in the most recent flagship phones), but it’s going to take a while before devices with NPUs become commonplace, and a while longer for developers to start making/updating apps that can make use of them.
But yeah, the good news is that it’s coming; it’s only a matter of “when”. I suspect it won’t be long before the likes of SwiftKey start taking advantage of this.
The bypassnro command still works though. Installed 23H2 in a VM yesterday and it worked fine.
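For anyone who hasn’t used it, the gist (this is what worked for me on 23H2; double-check on your build):

```
REM At the Windows 11 OOBE "connect to a network" screen,
REM press Shift+F10 to open a command prompt, then run:
oobe\bypassnro
REM The machine reboots back into OOBE, which should now offer an
REM "I don't have internet" option so you can finish setup with a local account.
```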
If you were looking for answers to such questions 10 years ago, your best resource for finding a thorough, expert-informed response likely would have been one of the most interesting and longest-lasting corners of the internet: Quora.
I disagree; the best place for such answers used to be Reddit, and Stack Exchange for the techy stuff. Quora always felt like cancer for some reason, and I never really used it.
That’s an issue/limitation with the model itself. You can’t fix the model without making some fundamental changes to it, which would likely only happen with the next release. So until GPT-5 (or whatever it ends up being called) comes out, they can only implement workarounds/high-level fixes like this.
In the footnotes they mention GPT-3.5. Their argument for not testing GPT-4 was that it was paid, so most users would be using 3.5; that’s already factually out of date, since the new GPT-4o (which they don’t even mention) is now free. Finally, they didn’t mention GPT-4 Turbo either, which is even better at coding than GPT-4.