Yes…? All are except Microsoft, which is why most companies I work with aren’t looking that way.
I know several large companies looking to Microsoft, Xen, and Proxmox. Though the smart ones are more interested in the open source solutions to avoid future rug-pulls.
The 2009 era was also when Intel leveraged its position in the compiler market to cripple all non-Intel processors. Nearly every benchmarking tool used that compiler, which put an enormous handicap on AMD processors by locking them to either no SSE or, later, back to SSE2.
My friends all thought I was crazy for buying AMD, but accusations that the compiler heavily favored Intel had been circulating since at least 2005, and the FTC finally ordered Intel to stop in 2010… Though of course they have been caught cheating in several other ways since.
Everyone has this picture in their heads of AMD being the scrappy underdog and Intel being the professional choice, but Intel hasn’t really worn the crown since the release of the Athlon. The exception is the Bulldozer/Piledriver era, but who can blame AMD for trying something crazy after 10 years of frustration?
I host my own to avoid running into timeouts; it’s fairly easy.
MRSA infection following hospital admission for pneumonia. That shit is serious and way more prevalent than people think; it’s just that it usually kills people who are already terminally ill.
Unlikely to be an assassination. But not impossible. Either way, looks very bad.
The recommendation to shareholders from Boeing’s independent proxy advisor is to vote out several board members responsible for safety and QA. Crazy to see at a Fortune 100 company.
You found one video supporting your viewpoint. Kaspersky’s role in Russian intelligence has been an open secret since the mid-2010s. This is Facebook anti-vaxxer “research” methodology.
Amazingly, for someone so eager to give a lesson in linguistics, you managed to ignore literal definitions of the words in question and entirely skip relevant information in my (quite short) reply.
Both are widely used in that context. Language is like that.
Further, the textbook definition of “stability”:
the quality, state, or degree of being stable: such as
a: the strength to stand or endure : firmness
b: the property of a body that causes it when disturbed from a condition of equilibrium or steady motion to develop forces or moments that restore the original condition
c: resistance to chemical change or to physical disintegration
Pay particular attention to “b”.
The state of my system is “running”. Something changes. If the system doesn’t remain in the “running” state, the system is unstable BY TEXTBOOK DEFINITION.
I think the confusion comes from the meaning of stable. In software there are two relevant meanings:
1. Unchanging, or changing as little as possible.
2. Not crashing / not requiring intervention to keep running.
Debian, for example, focuses on #1, with the assumption that #2 will follow. And it generally does, until you have to update and the changes are truly massive and the upgrade is brittle, or you have to run software with newer requirements and your hacks to get it working are brittle.
Arch, for example, instead focuses on the second definition, by attempting to ensure that every change, while frequent, is small, with a handful of notable exceptions.
Honestly, both strategies work well. I’ve had Debian systems running for 15 years and Arch systems running for 12+ years (and that limit is really down to the hardware I run Arch on, rather than their update strategy).
It really depends on the user’s needs and maintenance frequency.
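The two philosophies translate into very different day-to-day update workflows. As an illustrative sketch only (the exact commands, cadence, and any pinning/hold configuration depend entirely on your setup, and nothing here is specific to the systems mentioned above):

```shell
# Debian's "stable = unchanging" model: tiny, conservative changes day to day,
# then one large (and riskier) jump every couple of years at release upgrade time.
sudo apt update && sudo apt upgrade   # routine: security fixes, no new major versions
sudo apt full-upgrade                 # release upgrade: a massive change set all at once

# Arch's "stable = keeps running" model: frequent but individually small changes,
# so no single update is a giant leap (partial upgrades are unsupported, hence -Syu).
sudo pacman -Syu                      # rolling: sync databases and update everything, often
```

The trade-off follows directly: Debian concentrates all the churn into rare, large events, while Arch spreads it out into many small ones that each touch less.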
Over the years of using Vim both professionally and personally, I’ve learned to just install LunarVim and only add a handful of packages/overrides. Otherwise I waste too much time tinkering instead of doing the things I actually need to.