• 0 Posts
  • 79 Comments
Joined 2 years ago
Cake day: December 7th, 2023

  • Hard to think of one on the spot, but I do have an unintentional one - well, a mistake.

    When I was a kid, my mother had a digital camera that broke. It had a mechanical lens (or I suppose “lens housing”) that would extend when powering on, then retract when powering off. Somehow the lens got stuck between states, so the camera would refuse to fully boot up. A bit after that happened, she got a new digital camera.

    Me being the tinkerer I was, I asked if I could mess around with the old camera and was basically given it since it was useless (or so she thought). While messing with it, I accidentally dropped it - it somehow fell at just the perfect angle and “knocked” the lens back into place (without breaking anything). Camera worked perfectly fine after that!

    Unfortunately, while I was allowed to keep it, that never really “kick-started” a passion for photography in me. As far as I recall, I got bored of it pretty quickly.


  • Yes you are, they are advertising their platforms like you are free to comment anything and most people believe that.

    I hate to break it to you, but that’s your fault for making an assumption (and a bold one at that), or you’re just quite naive. Most places that you sign up for will either have you agree to a Terms of Service, or they’ll make you agree to the rules. I have even more bad news for you: advertisements usually try their best to show only the “good” of what is being advertised (much like how an advertisement for a toy doesn’t usually make it very clear that batteries are required to use it).

    Ask anyone if they think youtube will delete their comment even if they didn’t offend anyone and they will tell you no way!

    No, they might be angry that their comment was removed, but it’s a pretty common understanding that moderators will remove content at their discretion, even if people don’t necessarily agree with the decision.

    I’m not sure why I’m even engaging in this; usually when someone gets upset that their “free speech” (which they were never entitled to) is being violated, it’s pretty clear their intent is to spread hateful content.

    Perhaps that isn’t you, but nonetheless that is the group you’re putting yourself in (even if unintentionally) whenever you ride under that banner.

    It would also be worthwhile double-checking what “Freedom of Speech” actually is and what it covers. Assuming you are referring to the US’s First Amendment, it has absolutely nothing to do with anyone other than you and the government (and even then it has its bounds).

    As an example, let’s say you’re a writer for a newspaper. The government cannot take down an article that you write in which you criticize them (because that would fall under protected speech, unless you are making direct threats towards someone), but your boss could absolutely say “No way, we’re not publishing that” as they are not a government official.

    This doesn’t just apply to “Freedom of Speech” either. As another example, the right to assembly lets you publicly assemble and protest the government - but it wouldn’t allow you to start a protest on someone’s private property.






  • I always assumed it was more or less targeting the federation of issues/MRs.

    The git side of things is already distributed as you said, but if you decide to host your random project on your own GitLab instance you’ll miss out on people submitting issues/MRs because they won’t want to sign up for an account on your random instance (or sign in with another IdP).

    This is where a lot of the reliance on GitHub comes from, in my opinion.



  • Your son and daughter will continue to learn new things as they grow up; an LLM cannot learn new things on its own. Sure, it can repeat things back to you that are within the context window (and even then, a context window isn’t really inherent to an LLM - it’s just a window of prior information being fed back to the model with each request/response, or “turn” as I believe the term is), and what is in the context window can even influence its responses. But in order for an LLM to “learn” something, it needs to be retrained with that information included in the dataset.
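    The “turn” mechanism can be sketched in a few lines. This is a toy illustration with made-up names (`fake_llm`, `chat`), not any real API - the point is just that the model itself is stateless, and the “memory” is the client replaying prior turns with every request:

    ```python
    def fake_llm(messages: list[dict]) -> str:
        """Stand-in for a real model call; just reports how many turns it was sent."""
        return f"(model saw {len(messages)} prior turns)"

    conversation = []  # the "context window" lives entirely client-side

    def chat(user_text: str) -> str:
        conversation.append({"role": "user", "content": user_text})
        reply = fake_llm(conversation)  # the whole history is resent each turn
        conversation.append({"role": "assistant", "content": reply})
        return reply

    first = chat("My name is Ada.")    # model is sent 1 prior turn
    second = chat("What is my name?")  # model is sent 3 prior turns
    ```

    The second call only “remembers” the name because the first exchange was literally resent as part of the input - drop the replay and the “memory” is gone, with nothing learned by the model itself.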

    Whereas if your kids were to, say, touch a sharp object that caused them even slight discomfort, they would eventually learn to stop doing that, because after repetition they’ll know what the outcome is. You could argue that this looks similar to the training process of an LLM, but the difference is that an LLM cannot do this on its own (and I would not consider wiring up an LLM via MCP to a script that can trigger a retrain + reload to be it acting of its own volition). At least, not in our current day. If anything, I think this is more of a “smoking gun” than the argument that “LLMs are just guessing the next best letter/word in a given sequence”.

    Don’t get me wrong, I’m not someone who completely hates LLMs / “modern day AI” (though I do hate a lot of the ways it is used, and I agree with a lot of the moral problems behind it). I find the tech intriguing, but it’s a (“very fancy”) simulation. It is designed to imitate sentience and other human-like behavior. That, along with human nature’s tendency to anthropomorphize things around us (which is really the biggest part of this, IMO), is why it tends to be very convincing at times.

    That is my take on it, at least. I’m not a psychologist/psychiatrist or philosopher.





  • I can’t say that I’ve heard of them, no. I don’t have any need (or desire) to do any sort of identity verification within any of my own personal projects (and I haven’t been involved with anything of the sort at my workplace). Because of this, I unfortunately don’t have any insight or thoughts to offer on them.

    In the context of Fediverse administration (or any service that you run yourself), even with a service that “handles it for you” I still personally wouldn’t want to step into any of it.



  • As long as it is done properly and honestly, I have nothing against a “Pro” and a “Contra” article.

    Neither do I, personally. Though I am certainly less than inclined to enjoy an article where the author is oddly preachy/“holier-than-thou”, saying things such as you’re not a “real” programmer unless you sacrifice your health debugging segfaults at 3AM or have done the Handmade Hero challenge (certainly an interesting series to watch, but one that I have zero interest in replicating). Yet the author accuses Copilot of having a superiority complex. I can’t say for sure, but I would assume that if the article were in favor of AI rather than against it, there would definitely be comments calling out exactly this.

    The overarching tone of the article is such that if it were written as a direct comment toward a user instead, it would run afoul of Beehaw’s (and surely other instances’) rules, or at the least come really close to skirting the line - and I don’t mean the parts where the author is speaking of/to Copilot.


  • No, because it’s only a third-party app implementation - tags wouldn’t follow if I went from my phone to my desktop or any other device. It also just seems kinda… strange?

    Do you keep a journal of the people you meet in person? No judgement if you do, but if your reaction to that question was “Eww, no!” and yet you do tag users, I’d be very curious what the difference is for you.

    Anyways, problematic people either get blocked or banned (the egregious ones), which, by nature of being a first-party feature, is already synced.


  • According to another user in here, blocking on Mastodon actually works. So seems like it is possible to do in the Fediverse.

    I was not aware of this, but the way they implement it runs into the limitation I mentioned - the blocked user is prevented from seeing your posts only if you are on the same server:

    If you and the blocked user are on the same server, the blocked user will not be able to view your posts on your profile while logged in.

    I actually thought blocks were public already.

    They’re not. Well, the operator of your instance could go into the database and view them that way, the same way they can see your email address. But aside from someone with database access to your instance, blocks are not public. What is public is the list of defederated (“blocked”, so to speak) instances for an entire instance (viewable by going to /instances on any instance), which might be what you were thinking of?

    And personally I don’t see how it would be an issue if people that I haven’t blocked can see who I’ve blocked.

    How exactly would you enforce that, though? If your blocks were public, all a person you’ve blocked would need to do is open a private browsing window and look at your profile to see that they’ve been blocked.

    If we’re looking at blocks as a safety feature, I would think that having your blocks broadcast to every single instance would count as harmful and a breach of your privacy. This is why, although the instance you register with has to have the email address you signed up with, it doesn’t broadcast it to all other instances (same with the hashed value of your password) - because otherwise it would effectively be public.

    Perhaps I’ve just got the wrong stance, but considering that you can never block someone from viewing your content with an absolute guarantee (even if blocks were broadcast, you couldn’t prevent someone from simply logging out, or standing up their own instance and collecting the data anyway), I would not consider that tradeoff to be worthwhile. Not that my stance carries any weight, since I’m not a maintainer of Lemmy (or any of the Fediverse software), but I wouldn’t be surprised if this has at least come up for those developing it.


  • Aside from the rest of the discussion that has already occurred here, I’m not actually sure how this would work from a technical perspective.

    You and I are on two completely different instances. If I were to block you, how is your instance supposed to know this in order to stop you from reading my comment?

    The only way I could see that working is if the list of users you blocked were federated too, and effectively made public (like votes currently are) - which seems counterproductive to the problem at hand.

    Then what happens if you post in a community where someone you’ve blocked is a moderator? Or if you block the admin of another instance? If you can “cloak” yourself from being moderated by just blocking them, that seems like an exploit waiting to happen. As far as I’m aware, on Reddit blocking a user doesn’t hide your comments from them - but they can no longer reply to them, and I assume this is why that is the case. Unless that has very recently changed.

    The biggest difference with Lemmy (and all software within the Fediverse - for example, I’m pretty sure Mastodon works this way as well) is that there is no single authoritative server. Actions like this need to be handled on every instance, and that’s impossible to guarantee. A bad actor running an instance could just rip out the function that handles this, and then it’s moot. I mean, they wouldn’t even need to do that - they’d have the data anyway.

    You could enforce it when both users are on the same instance, I suppose, but that seems like it would only leave the blocking feature even more inconsistent.
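    A rough sketch of the enforcement problem (hypothetical classes, not Lemmy’s actual code): once a comment federates, every receiving instance holds its own copy, so whether a block is honored depends entirely on that instance’s software cooperating.

    ```python
    class Instance:
        """Toy model of a Fediverse server; names and structure are made up."""

        def __init__(self, name: str, honors_blocks: bool = True):
            self.name = name
            self.honors_blocks = honors_blocks
            self.stored_comments = []  # local copies of federated content
            self.known_blocks = set()  # (blocker, viewer) pairs, if ever shared

        def receive(self, comment: dict):
            self.stored_comments.append(comment)  # a copy now exists here

        def visible_to(self, viewer: str) -> list[dict]:
            if not self.honors_blocks:  # a bad actor simply skips the check
                return self.stored_comments
            return [c for c in self.stored_comments
                    if (c["author"], viewer) not in self.known_blocks]

    good = Instance("lemmy.example")
    rogue = Instance("rogue.example", honors_blocks=False)

    comment = {"author": "alice", "body": "hello"}
    for inst in (good, rogue):
        inst.receive(comment)  # federation pushes copies outward

    # Even if alice's block of bob were broadcast to every instance...
    good.known_blocks.add(("alice", "bob"))
    rogue.known_blocks.add(("alice", "bob"))
    ```

    Here `good.visible_to("bob")` comes back empty, but `rogue.visible_to("bob")` still returns the comment - the rogue server has the data regardless, which is why cross-instance blocks can only ever be best-effort.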


  • How is that the case? I’ve got pretty much zero experience with decompiling software, but I can’t say I’ve ever heard anyone who does say that before. I genuinely can’t imagine it’s easier to work with, say, decompiling a game to make changes to it rather than just having its source available.

    I suppose if the context is just running software, then of course it’s easier to run something that’s already a binary - but then I’m not sure where decompiling comes into relevance.


  • I don’t see how that’s going to work out well. That’s asking to end up with a mess that you’re just going to have to rewrite anyways.

    I do not even have a complete hatred for AI like a lot of folks do, but I don’t trust it that much (nor should anyone).

    You’d be better off with an actual deterministic transpiler for that (think TypeScript -> JS, but in the other direction, I suppose), not something with a ton of random variables like an AI.