I haven’t used Opera since they switched from their own engine to Chromium. They are now owned by a Chinese company, so it probably has at least as much tracking built into it as Google Chrome now.
I miss old Opera before the buyout
That’s essentially Vivaldi now.
Apart from it being chromium based 😕
Have a look at Otter Browser. It aims to replicate the old interface. It uses QtWebEngine, as Presto was closed source. It has been in development for 10 years now, and it is open source.
QtWebEngine is Chromium :(
It’s Chromium all the way down.
Qt WebEngine uses code from the Chromium project. However, it does not contain all of Chrome/Chromium:
- Binary files are stripped out
- Auxiliary services that talk to Google platforms are stripped out

Source
While that’s one of the reasons I don’t want to use Chromium, it’s not actually the main reason; if it were, I’d just use Ungoogled Chromium. I just want more web engines, and I don’t want Google to monopolise the internet.
Why the hell do I need this in a web browser? Why isn’t it a stand-alone app?
If you think of LLMs as a thing to replace search bars then this kind of makes sense.
If you think of LLMs as a thing to replace search bars
I don’t.
Just more unnecessary browser bloat.
Like search bars.
The more search bars the faster your internet becomes!
This is true. I asked my LLM.
There are plenty of stand-alone LLM apps.
Interesting. But I’m curious about the performance.
A bigger LLM (Mixtral) already struggles to run on my mid-range gaming PC. Trying to run an LLM that isn’t terrible on a standard laptop wouldn’t be a good experience.
I have no idea how this is set up to work technically, but most of the heavy lifting is going to be on the GPU. I’m not sure it matters much whether it’s the browser pushing data to the GPU or some other package.
Most people probably don’t have a dedicated GPU, and an iGPU is probably not powerful enough to run an LLM at decent speed. Also, a decent model requires something like 20GB of RAM, which most people don’t have.
It doesn’t just require 20GB of RAM, it requires that in VRAM. Which is a much higher barrier to entry.
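The 20GB figure roughly checks out from the weights alone. A back-of-the-envelope sketch (assuming Mixtral 8x7B at roughly 47B total parameters; the function name and numbers here are illustrative, not from any library):

```python
def model_memory_gib(params_billions, bits_per_weight):
    """Rough memory needed just to hold the weights.

    Ignores KV cache and activations, which add several more GiB,
    so real-world requirements are higher than this estimate.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

# Mixtral 8x7B: ~47B total parameters
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_memory_gib(47, bits):.0f} GiB")
# 16-bit: ~88 GiB, 8-bit: ~44 GiB, 4-bit: ~22 GiB
```

So even aggressively quantized to 4 bits per weight, the weights alone are around 22 GiB, and that has to fit in VRAM (or be split with system RAM at a big speed cost).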
What’s LLM?
https://en.wikipedia.org/wiki/Large_language_model
A lot of the “AI” stuff that’s been in the news recently, chatbots and image generation and such, are based on LLMs.
That’s a cool feature for sure, but I don’t trust Opera.
Can’t they just stick to normal browser things like gaming integrations?