

Is Lemmy getting bots?
I’m seeing more and more posts where OP posts some kind of engagement bait, yet hardly comments in any of the discussion. Yet I don’t know why that would be; karma isn’t used to gate posting like Reddit.
On Reddit, karma isn’t just more visible; a certain amount is an explicit requirement for posting in many communities. High-karma accounts are also less likely to get moderated or banned. You can see why spambots would want to amass it.
I have no idea what the incentive would be here.
Also, this is not that far from Instagram. And even with real women Insta always felt slimy to me.
And… weird. I don’t get the appeal.
This is why I love the idea of Cromite and other “antifingerprinting” efforts, not simply blocking but spoofing and plausibly randomizing as many metrics as they can.
I wish there was some way to distribute that to the masses. Like maybe a crazy hardware zero day, and it’s only used to stealth load anti fingerprinting on as many devices as it can.
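The spoofing idea can be sketched in a few lines. This is a toy model, not Cromite’s actual implementation: the metric names, value ranges, and structure are all invented for illustration. The point is just that a tracker’s fingerprint is a hash over reported metrics, so randomizing even one high-entropy metric per session destroys the hash’s stability.

```python
import hashlib
import random

def fingerprint(metrics: dict) -> str:
    """What a tracker computes: a stable hash over every metric the browser reports."""
    canonical = "|".join(f"{k}={metrics[k]}" for k in sorted(metrics))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

def spoof(metrics: dict, rng: random.Random) -> dict:
    """Per-session randomization of high-entropy metrics, within plausible ranges."""
    spoofed = dict(metrics)
    spoofed["canvas_seed"] = rng.randrange(1, 2**32)   # fresh canvas-noise seed each session
    spoofed["hw_threads"] = rng.choice([4, 6, 8, 12, 16])
    return spoofed

# hypothetical "real" metrics a browser would otherwise report unchanged
real = {"canvas_seed": 0, "hw_threads": 16, "screen_w": 1920}
```

Without spoofing, `fingerprint(real)` is identical on every visit, which is exactly what makes tracking work; with spoofing, each session hashes to something new.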
What I barely understand is why businesses keep fueling this ad inferno. If it’s mostly bots farming engagement, doesn’t that limit the effectiveness of advertising? Won’t that eventually show up in their returns on ad spend? Do they really want their business associated with scantily clad 14-year-old feeds, or are they all totally blind to that?
I get part of it… social media is the internet now, for most people. So if you want reach, where else are ya gonna go? Cable? Newspapers? Local news? They killed everything else. Google’s even killing YouTube sponsors now, auto-skipping sponsor segments in the app.


IDK what y’all are on about. KDE + Krohnkite uses very little RAM. There are a few background services you can disable if you don’t need them to make it even leaner.
It also just works, with so many integrations, all maintained for you.
My brief foray into standalone WMs like Sway was nonstop “oh, it doesn’t have a Wi-Fi manager? Oh, no screen sharing? Oh, no…” I ended up installing a bunch of stuff manually, configuring it all by hand, and tying it together with scripts and services that break with updates, only to find I’d done a not-so-great job because I haven’t spent literally thousands of man-hours on integration, and I ended up using a lot of extra disk space and RAM anyway!
Breathes.
So yeah. Big DEs are nice. And lean, mostly.
I don’t intend to be abrasive, but this post feels like… bait?
I know it’s not.
But still. OP posted few specifics: not what they actually do on their computer, not what their hardware is, not what problems they’ve hit, and they haven’t responded to any comments so far. But “what distro should I use?” is Lemmy catnip. It’s absolutely guaranteed to get a lot of engagement.
It’s also been asked many, many times. If OP is curious, there are literally thousands of comments to sift through on Lemmy alone.
If this was Reddit, I’d say it’s a bot account farming karma for authenticity. But that doesn’t make any sense, as there’s no engagement incentive like that here on Lemmy.
So yeah. Apologies for the bluntness; I mean nothing personal, but OP, there are many threads like this, and you’d get much more tailored suggestions with a little more specificity.


I mean… if they’re still on Windows 7, they’ll likely keep using Firefox anyway?


Chip designs take years, so if there’s a sudden glut of HBM, there’s no good way to put it to use outside of existing designs.
That being said, a lot of LPDDR5X is being produced for Nvidia servers and a few other systems. That would be useful, doubly so if it were packaged as LPCAMM.


It’s quite doable. If you have pretty much any not-prehistoric GPU, you can run quantized Z-Image Turbo. If it’s a not-ancient Nvidia GPU, you can run a 4-bit SVDQuant version extremely quickly.
There may even be versions runnable on Intel IGPs or beefy CPUs these days, though I have not personally investigated this.
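To put rough numbers on “not-prehistoric”: weight memory scales linearly with bits per weight, so quantization is what brings these models down to modest cards. A quick back-of-the-envelope sketch (the ~6B parameter count is an assumption for illustration, not a spec for any particular model):

```python
def weight_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate VRAM for the weights alone; activations and caches come on top."""
    return n_params * bits_per_weight / 8 / 1024**3

# For a hypothetical ~6B-parameter diffusion model:
#   16-bit weights -> ~11.2 GiB (needs a big GPU)
#   4-bit weights  -> ~2.8 GiB  (fits on very modest cards)
```

Same model, roughly a quarter of the memory, which is why 4-bit quants open the door to older and cheaper hardware.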
Or you can just use the Artbot through the AI Horde for a deepfake, with no corporate servers involved. It’s crowdsourced inference, basically the equivalent of the Fediverse.
It’s an ongoing scam for the ultra wealthy and Tech Bro influencer con artists.
That’s not an exaggeration. That’s what it is.
Hence, it will keep going as long as it keeps getting boosted across media.


Just having a digital archive of yourself is kind of cool. Future historians (if there are any) will love that, too.


Do you get scammers much?
I tried looking for camera lenses, logged out and with some scripting, and ran into some pretty clear scams immediately. Like no one is even policing the site.


Many do have automated checks, tests, rules for PR authors to follow, and so on.
If they don’t have it set up, and the project is big, TBH the maintainers should set it up.
The issue is that these submitters are (often) drive-by spammers. They aren’t honest, they don’t care about the project, they just want quick kudos for a GitHub PR on a major project.
Filtering a sea of scammers is a whole different ballgame than guiding earnest, interested contributors. Automated tooling isn’t set up for that because (outside the occasional attempt to sneak malware into code) it wasn’t really a thing.
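For a sense of what “filtering scammers” tooling could look like, distinct from ordinary CI: a triage heuristic over PR metadata rather than code. Every field name and threshold below is invented for illustration; a real project would tune these against its own PR history.

```python
def spam_score(pr: dict) -> int:
    """Crude drive-by-PR heuristic: higher = more suspicious. Thresholds are made up."""
    score = 0
    if pr["account_age_days"] < 30:
        score += 2  # brand-new accounts fit the common spam profile
    if pr["prior_merged_prs"] == 0:
        score += 2  # no merge history anywhere on the platform
    if pr["files_changed"] == 1 and pr["lines_changed"] <= 3:
        score += 1  # classic one-line "typo fix" kudos-farming
    if pr["touches_only_docs"]:
        score += 1
    return score

drive_by = {"account_age_days": 2, "prior_merged_prs": 0,
            "files_changed": 1, "lines_changed": 2, "touches_only_docs": True}
regular  = {"account_age_days": 900, "prior_merged_prs": 12,
            "files_changed": 4, "lines_changed": 120, "touches_only_docs": False}
```

A score like this would only flag PRs for human triage, not auto-reject them; earnest first-timers still deserve a human look.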
It’s way more latency than you’d think, but it depends on the genre.
Age of Wonders or Civ? Those work alright. Assetto Corsa or some brawler? Ehhh. Maybe playable, but it would hurt unless that’s all you knew.
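The genre split falls straight out of a latency budget. All the numbers below are illustrative assumptions (a decent home connection and typical encode/decode times), not measurements:

```python
def stream_latency_ms(rtt_ms, encode_ms, decode_ms, display_ms, fps):
    """Added end-to-end latency for game streaming: network round trip plus
    codec work plus display, plus half a frame of average capture wait."""
    return rtt_ms + encode_ms + decode_ms + display_ms + (1000 / fps) / 2

# Assumed: 30 ms RTT, 8 ms encode, 5 ms decode, 10 ms display, 60 fps
# -> roughly 61 ms on top of the game's own input lag: irrelevant for a
#    turn-based 4X, painful for sim racing or a fighting game.
```

Turn-based games absorb that extra ~60 ms invisibly; twitch genres, where competitive players chase every frame, do not.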


Godot is also weighing the possibility of moving the project to another platform where there might be less incentive for users to “farm” legitimacy as a software developer with AI-generated code contributions.
Ahhh, I see the issue now.
That’s the incentive to just skirt the rules of whatever their submission policy is.
That’s just poor distro support, kind of like CUDA in the past… ROCm should “just work” if it’s shipped right. But it’s not really a priority for maintainers.
Now, if you’re trying to run CUDA stuff with ROCm, that’s a whole different story. The vast majority of GPU software has extremely poor ROCm support compared to CUDA, and some of that is definitely AMD footgunning itself.
because it tends to include a previous version of the driver, which causes install/uninstall havoc
To be fair, this is a packaging/distro problem, as CUDA should always work with (and be kept in sync with) the newest graphics driver.
ROCm and OpenVINO (AMD and Intel) are even more of a pain, actually.


I don’t even know what they’re using the SSDs for.
Most businesses are too stupid to train their own models from scratch, and won’t use “foreign” ones so they won’t finetune them either.
On the inference side… SSDs aren’t used for much: just storing Docker images/dependencies and model weights for the initial load, and that’s it. Maybe some data for bulk processing, but that’s no different from existing software. The one niche may be KV cache swapping for reusing prompt prefixes, but this is limited and being obsoleted by new attention mechanisms.
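For the curious, that KV-cache-swapping niche amounts to prefix matching: if a new prompt starts with tokens you have already processed, you reload that prefix’s cached state instead of recomputing it. A toy sketch of just the lookup logic (real inference servers do this per token block across GPU/CPU/disk tiers; the cache contents here are stand-in strings):

```python
def longest_cached_prefix(cache: dict, tokens: tuple):
    """Find the longest already-computed prompt prefix so its KV state
    can be reloaded (e.g. from an SSD tier) instead of recomputed."""
    for n in range(len(tokens), 0, -1):
        if tokens[:n] in cache:
            return cache[tokens[:n]], n
    return None, 0

# cache maps token prefixes to (stand-in) KV states
cache = {("sys", "rules"): "kv-2", ("sys",): "kv-1"}
```

The longer the shared system prompt, the more work this saves, which is exactly why it is one of the few places bulk flash storage earns its keep in inference.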
So WTF do they even need SSDs and HDDs for? Honestly it feels like FOMO purchasing.
The research/tinkerer community overwhelmingly agrees. They were making fun of Tech Bros before chatbots blew up.