

Ughhh, I could go on forever, but to keep it short:
- Tech-bro enshittification: https://old.reddit.com/r/LocalLLaMA/comments/1p0u8hd/ollamas_enshitification_has_begun_opensource_is/
- Hiding attribution to the actual open-source project it's built on: https://old.reddit.com/r/LocalLLaMA/comments/1jgh0kd/opinion_ollama_is_overhyped_and_its_unethical/
- Being a huge support drain on llama.cpp, with not a single cent, nor a notable contribution, given back.
- Constant bugs and broken models from "quick and dirty" model-support updates shipped just for hype.
- Breaking standard GGUFs.
- Deliberately misnaming models (like passing the DeepSeek Qwen distills off as "Deepseek") for hype.
- Horrible defaults (ancient default models, 4096 context, really bad/lazy quantizations).
- A bunch of spam, drama, and abuse on LinkedIn, Twitter, Reddit, and such.
Basically, the devs are Tech Bros. They're scammer-adjacent. I've been in local inference for years, and wouldn't touch ollama if you paid me to. I'd trust the Gemini API over them any day.
I'd recommend base llama.cpp, ik_llama.cpp, or kobold.cpp, but if you must use a "turnkey," popular UI, LM Studio is way better.
The problem is, if you want a performant local LLM, nothing about local inference is really turnkey. It's too hardware-sensitive, and the field moves too fast.
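To show what I mean by "not turnkey," here's a minimal sketch using the llama-cpp-python bindings (my pick purely for illustration, since it's a scriptable way to drive llama.cpp). The model path and every number in it are placeholders you'd have to tune to your own hardware:

```python
# Minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and you've downloaded a GGUF yourself.
from llama_cpp import Llama

llm = Llama(
    # Placeholder path: pick a quant that actually fits your VRAM.
    model_path="./models/your-model-Q5_K_M.gguf",
    n_ctx=8192,       # context window; a 4096 default silently truncates longer prompts
    n_gpu_layers=-1,  # -1 offloads every layer to the GPU; dial it down if you OOM
    n_threads=8,      # roughly your physical core count, for the CPU-side work
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why does context length matter?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

None of those knobs has a universal "right" value: they shift with your GPU, your RAM, the quant you picked, and the model itself, which is exactly why one-click defaults end up being ancient models at 4096 context.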

Not me.
I wanna be there to report every attribution-cropped post I can find, at least in the subs where it's applicable, repost it without the crop, and tell everyone to watch out for your posts.