Make it two:
emerge firefox
(Gentoo users only)
Not yet. It can lead to that point, but this is just the kernel handling an “out of memory” situation. The kernel in the screenshot is configured to run its OOM reaper / OOM killer.
The OOM killer checks all running processes and looks for the one that causes the least disruption when killed. It does that by calculating a score based mostly on the amount of memory a process uses, plus some tunable adjustments. Ideally, a Linux desktop user would simply see their video game, browser or media player close.
This smart TV is in real trouble, though: it probably already killed its OSD, still didn’t have enough memory to spawn a login shell, and is now making short work of strange VLC instances that probably got left behind by a poorly written app store app :)
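You can actually watch this scoring on any Linux box: the kernel exposes each process’s current “badness” in /proc. A rough sketch, assuming a standard /proc layout:

```shell
# List processes by their OOM badness score (higher = killed first).
# The kernel keeps these up to date; /proc/<pid>/oom_score is read-only.
for dir in /proc/[0-9]*; do
  score=$(cat "$dir/oom_score" 2>/dev/null) || continue
  name=$(cat "$dir/comm" 2>/dev/null)
  printf '%6s  %s\n' "$score" "$name"
done | sort -rn | head
```

If you want to protect (or sacrifice) a specific process, you can write a bias between -1000 (never kill) and 1000 (kill first) to /proc/&lt;pid&gt;/oom_score_adj.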
I think you’re mistaken there.
Wine is a vanilla Linux executable that runs as the user who launched it, so the Windows program it runs also runs under that user. That’s possible because Wine doesn’t do anything system-wide (like intercepting calls): it already gives the process its own version of e.g. LoadLibrary() (the Windows API function to load a DLL) and can happily remap any loaded DLL to Wine’s reimplementation of that DLL as needed.
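You can watch that remapping happen, by the way: Wine’s loaddll debug channel logs every DLL it loads and whether it picked its own builtin reimplementation or a native Windows copy. A sketch (program.exe is just a placeholder name):

```shell
# Print one line per DLL load; "builtin" means Wine's reimplementation,
# "native" means an actual Windows DLL. program.exe is a placeholder.
WINEDEBUG=+loaddll wine program.exe
```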
Here are, for example, the processes created when I run Paint Shop Pro on my system (the leftmost column indicates the user each process is running as):
Also, some advice from WineHQ:
After reading, the gist of it seems to be:
In short, just another out-of-touch entrepreneur who sells snake-oil cures to people suffering in the current system, so that they may invite in the boot that stomps them down for good.
What would be missing from VS Code or VS Codium that an IDE needs?
I’m an ex-Visual Studio user, now writing all my code in VS Codium. I organize my project tree in VS Codium, I build from it and, like a Visual Studio user, I press F5 to debug, set breakpoints and inspect variables.
And that’s just the default install using the vanilla C/C++ extension it ships with, not some complicated setup that takes any time to get working.
I am a Gentoo user and most of that is already a reality on Gentoo systems. Get the stage3 tarball set up, slap your /etc/portage/make.conf and /var/lib/portage/world files in there and build.
Obviously, depending on whether it should be a blank system with the same apps installed or a clone of a previous system, configuration in /etc and one’s home directory may need to be copied, too.
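For illustration, those two files can be tiny; the flags and settings below are placeholder examples, not recommendations:

```shell
# /etc/portage/make.conf -- global build settings (example values only)
COMMON_FLAGS="-O2 -pipe"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
MAKEOPTS="-j8"
USE="X -systemd"
```

The world file is even simpler: one package atom per line (say, www-client/firefox), and emerge --update --deep @world pulls everything listed in it onto the fresh stage3.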
I love that example. Microsoft’s Copilot (based on GPT-4) immediately doesn’t disappoint:
It’s annoying that for many things, like basic programming tasks, it manages to generate reasonable output that is good enough to goad people into trusting it, yet it hallucinates very obviously wrong stuff or follows completely insane approaches on anything off the beaten path. Every other day, I have to spend an hour justifying to a coworker why I wrote code a certain way when the AI has given him another “great” suggestion, like opening a hidden window with a UI control to query a database instead of going through our ORM.
I assume that Twitter still has tons of managers and team leads who allowed this and bear their own part of the responsibility. However, Musk is known for his mercurial temper, someone who makes grand public announcements and then pushes his companies to release stuff that isn’t nearly ready for production. Often it’s “do or get fired”.
So… an unshackled AI generating official posts, no human hired to curate the front page, headlines controlled through up-voting by trolls and foreign influence campaigns, all running unchecked in the name of “free speech” – that’s very much on brand for a Musk-run business, I’d say.
I usually compile with --quiet-build=y; it doesn’t have to be configures and makefiles blasting into a shell window the whole time. On the rare occasions where a build fails, there’s still the log in /var/tmp/portage/...
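If you want that permanently, the option can live in make.conf; EMERGE_DEFAULT_OPTS is a standard Portage variable (the exact set of options is personal taste):

```shell
# /etc/portage/make.conf -- options applied to every emerge invocation
EMERGE_DEFAULT_OPTS="--quiet-build=y"
```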
A perfect demonstration of how Russian indoctrination works right here.
Original reporting: A major disinfo attack against Europe being prepared by Russia is uncovered through diligent investigation and published and reported on.
The response:
Emotional framing:
Nationalists, agricultural owner-operators, and farmers exposed to rising interest rates
“truckloads of exported Ukrainian agricultural salvage” vs. “fresh French produce”
we’re getting an earful about how all these local yokels are hoodwinked by anti-EU Russian Propaganda
Macron for selling out the agg sector to financial interests in Brussels
“If you’re not in favor of (insert supposed evil acts described in lurid way), then you’re a secret spy for Putin and a traitor.”
Result: the reader comes out the other end an angry person: outraged about the plight of farmers, outraged again at disinfo reports supposedly serving to silence them, outraged once more at a French politician selling them out to the EU, with the EU painted as a high-and-mighty villain, and automatic anger ready to trigger against anyone who presents a different viewpoint.
I agree that a lot of human behavior (on the micro as well as macro level) is just following learned patterns. On the other hand, I also think we’re far ahead - for now - in that we (can) have a meta context - a goal and an awareness of our own intent.
For example, when we solve a math problem, we don’t just let intuitive patterns run and blurt out numbers, we know that this is a rigid, deterministic discipline that needs to be followed. We observe and guide our own thought processes.
That requires at least a recurrent network and, at higher levels, some form of self-awareness. And any LLM is, when it runs (rather than being trained), completely static and feed-forward: it gets some 2,000 words (or 32,000+ as of GPT-4 Turbo) fed to its input synapses, each neuron layer gets to fire once, and then the final layer contains the likelihoods for each possible next word.
Is this a case of “here, LLM trained on millions of lines of text from cold war novels, fictional alien invasions, nuclear apocalypses and the like, please assume there is a tense diplomatic situation and write the next actions taken by either party” ?
But it’s good that the researchers made explicit what should be clear: these LLMs aren’t thinking/reasoning “AI” that is being consulted, they just serve up a remix of likely sentences that might reasonably follow the gist of the provided prior text (“context”). A corrupted hive mind of fiction authors and actions that served their ends of telling a story.
That being said, I could imagine /some/ use if an LLM were trained/retrained exclusively on verified information describing real actions and outcomes in 20th century military history. It could serve as a brainstorming aid, to point out possible actions or possible responses of the opponent which decision makers might not have thought of.
That has been Russia’s game for more than a decade now: stoke existing tensions. Brexit, political polarization in the USA and internal division in nearly all European countries.
Bringing the already uneasy situation between Israel/Palestine to a boiling point in order to distract from Russia’s war in Ukraine is not a big stretch.
I’m the weird one in the room. I’ve been using 7z for the last 10-15 years and now .tar.zst, after finding out that Zstandard achieves higher compression than 7-Zip, even with 7-Zip in “best” mode, LZMA version 1, huge dictionary sizes and whatnot: zstd --ultra -M99000 -22 files.tar -o files.tar.zst
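If you want to skip the intermediate .tar on disk entirely, tar and zstd pipe together nicely. A sketch with a scratch directory standing in for real data (-19 is a slow, high-compression level; --ultra -22 squeezes further at the cost of memory):

```shell
# Demo setup: a scratch directory with something to compress.
tmp=$(mktemp -d); mkdir "$tmp/files"; echo "hello" > "$tmp/files/a.txt"
cd "$tmp"

# Pack the directory straight into a zstd-compressed tarball,
# no intermediate .tar written to disk.
tar -cf - files/ | zstd -q -19 -o files.tar.zst

# Unpack it again somewhere else.
mkdir unpacked && cd unpacked
zstd -dc ../files.tar.zst | tar -xf -
```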