• 0 Posts
  • 19 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • It wouldn’t be productive, but I’ve been thinking of texting some things to my mom. She hasn’t been hostile to me since I came out, but she also hasn’t been accepting, with the misgendering and deadnaming.

    I know she voted for him before, and I’m pretty sure she did this time. With the way things are going I will probably never see her again. I want her to know why.


  • Naia@lemmy.blahaj.zone to Microblog Memes@lemmy.world · S’koden. · edited 4 days ago

    I’m sorry, I’m too busy dealing with my rights being trampled and taken away by these monsters along with other life stressors to care about your moral grandstanding.

    Yes. Fuck them. Fuck anyone who supports them for “economic anxiety” or whatever BS rationalization they use to blind themselves, and fuck anyone who is still giving fascists the benefit of the doubt.

    There is no middle ground, no high road, when these people literally want to kill me and my friends. I am exhausted by the last week alone and have long been sick of people treating fascists with kid gloves.

    The idea that I’m as bad as they are for standing up for myself, while they inflict violence on me, is absurd. Every time we call fascists fascists, it gets dismissed as “hyperbole” or “hysterical”. Because acknowledging what they say they will do, have been doing, and are doing would mean their lives might be disrupted, that they would need to do something.

    They do. We all do. But most people aren’t. So, I’m focused on protecting me and mine. If you wanted a world where all we do is finger-wag at fascists, then something should have been done to stop what is happening right now…

    But now, I have no sympathy anymore and grow tired of liberals, and honestly not a small number of the left, saying we should be above that. That ship sailed a long time ago.

    If you are privileged enough to still feel like you are above that, then fucking do something so those of us who are the current targets don’t have to defend ourselves alone.


  • For neural nets, the method matters more. Data would be useful, but at the scale these things are trained on, the specific data matters little.

    They can be trained on anything, and a diverse enough data set would end up making it function more or less the same as a different but equally diverse set. Assuming publicly available data is in the set, there would also be overlap.

    The training data is also by necessity going to be orders of magnitude larger than the model itself. Sharing becomes impractical at a certain point before you even factor in other issues.
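    A back-of-the-envelope sketch of that size gap, using illustrative numbers that are assumptions rather than figures from any specific model card:

```python
# Rough, assumed numbers: a 70B-parameter model trained on ~15T tokens of text.
params = 70e9
bytes_per_param = 2                      # fp16/bf16 weights
model_tb = params * bytes_per_param / 1e12

tokens = 15e12
bytes_per_token = 4                      # rough rule of thumb for raw text per token
data_tb = tokens * bytes_per_token / 1e12

print(f"model ~ {model_tb:.2f} TB, training text ~ {data_tb:.0f} TB")
print(f"training data is roughly {data_tb / model_tb:.0f}x the model size")
```

    Even with these loose assumptions, the raw text corpus ends up hundreds of times larger than the weights it produces, which is why sharing the full training set is impractical.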


  • They might try, but if their goal was to destabilize Western dominance in LLMs, making it completely open source was the best way to do it.

    This isn’t like TikTok. They have a server that hosts it, but anyone can take their model and run it, and there are going to be a lot of US companies besides the big AI ones looking at it. Even the big AI ones will likely try to adapt it for the stuff they’ve spent too long brute-forcing to improve.

    The thing is, it’s less about the actual model and more about the method. It does not take anywhere close to as many resources to train models like DeepSeek compared to what companies in the US have been doing. It means there is no longer going to be just a small group hoarding the tech and charging absurd amounts for it.

    Running the model can be no more taxing than playing a modern video game, except the load is not constant.
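    A rough sketch of why local inference fits on gaming hardware (the model size and quantization level here are assumptions for illustration):

```python
# Approximate memory needed just to hold a model's quantized weights.
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A hypothetical 7B model: 4-bit quantized weights fit easily in the
# 8-12 GB of VRAM on a mid-range gaming GPU; fp16 does not.
print(weight_memory_gb(7, 4))    # 3.5 GB quantized
print(weight_memory_gb(7, 16))   # 14.0 GB at fp16
```

    And unlike a game holding a steady frame rate, the GPU only spikes while generating a response, so the load is bursty rather than constant.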

    The cat is out of the bag. They could theoretically ban the models released directly by the research team, but retrained variants are going to be hard to differentiate from scratch-trained models. And the original model is all over the place and has had people hacking away at it.

    Blocking access to their hosted service right now would just be petty, but I do expect that from the current administration…



  • DeepSeek is open source. People have looked it over and modified it a ton. If you are hosting it yourself, there is no indication of it being spyware or whatever.

    If you blindly use someone else’s server, you are willingly giving up your data to them. Facebook has sold user data to foreign companies, yet people like you have a hate boner for China.

    I have no love for either government, especially at the moment, when my rights are being threatened and trampled by my own. Between the two, China has no power to affect my life directly.

    Regardless, there probably was some state help behind the development of DeepSeek, but that isn’t relevant to the discussion of the tech and how Western companies have been so stuck in their ways, chasing short-sighted profits at best or grifting at worst.

    Either way, US companies have been misusing LLMs because they want to replace workers with them. That motivation isn’t going to inspire a ton of innovation.



  • There’s something to be said for the idea that Bitcoin and other crypto like it have no intrinsic value but can represent value we assign, and be used as a decentralized form of currency not controlled by any one entity. That’s not how it’s actually used, but there’s an argument for it.

    NFTs were a shitty cash grab because holding a token saying you “own” a thing, regardless of what it is, only matters if there is some kind of enforcement. It had nothing to do with property rights, and anyone could copy your crappy generated image as many times as they wanted. You can’t do that with Bitcoin.


  • I’ve been playing around with local LLMs lately, and even with its issues, DeepSeek certainly seems to just generally work better than other models I’ve tried. It’s similarly hit or miss when not given any context beyond the prompt, but with context it seems to both outperform larger models and organize information better. And watching the r1 model work is impressive.

    Honestly, regardless of what someone might think of China and various issues there, I think this is showing how much the approach to AI in the west has been hamstrung by people looking for a quick buck.

    In the US, it’s a bunch of assholes basically only wanting to replace workers with AI they don’t have to pay, regardless of the work involved. They are shoehorning LLMs into everything, even when it doesn’t make sense to. It’s all done strictly as a for-profit enterprise by exploiting user data, and they bootstrapped it by training on creative works they had no rights to.

    I can only imagine how much of a demoralizing effect that can have on the actual researchers and other people who are capable of developing this technology. It’s not being created to make anyone’s lives better, it’s being created specifically to line the pockets of obscenely wealthy people. Because of this, people passionate about the tech might decide not to go into the field and limit the ability to innovate.

    And then there’s the “want results now” mindset, where rather than take the time to find better ways to build and train these models, they just throw processing power at it. “Needs more CUDA” has been the attitude, and in the Western AI community you are basically laughed at if you can’t or don’t want to use Nvidia for anything neural-net related.

    Then you have DeepSeek, which seems to be developed by a group of passionate researchers who actually want to discover what is possible and find more efficient ways of doing things. That drive was compounded by sanctions preventing them from using CUDA hardware; restrictions on resources have always been a major cause of technical innovation. There may be a bit of “own the West” in there, sure, but that isn’t opposed to the research.

    LLMs are just another tool for people to use, and I don’t fault a hammer that is used incorrectly or to harm someone else. This tech isn’t going away, but there is certainly a bubble in the west as companies put blind trust in LLMs with no real oversight. There needs to be regulation on how these things are used for profit and what they are trained on from a privacy and ownership perspective.






  • Even using LLMs isn’t an issue; it’s just another tool. I’ve been messing around with local stuff, and while you certainly have to use it knowing its limitations, it can help for certain things, even if just helping parse data or rephrase things.

    The issue with neural nets is that while they theoretically can do “anything”, they can’t actually do everything.

    And it’s the same with a lot of tools like this: people not understanding the limitations or flaws, and corporations wanting to use it to replace workers.

    There’s also the tech bros who feel that creative works can be generated completely by AI because like AI they don’t understand art or storytelling.

    But we also have others who don’t understand what AI is and how broad it is, thinking it’s only LLMs and other neural nets that are just used to produce garbage.




  • Trump started the whole thing because he was unpopular on TikTok. Republicans jumped on board because young people were politically organizing on the platform, and they don’t like that.

    But up to that point, there was no real effort, even as much as they tried to claim “national security”.

    Then, when real information about Palestine was being spread there, the Democrats jumped on board, because they are the same as Republicans when it comes to foreign policy.

    That’s what started the actual push that gained momentum. They had no actual evidence for the stuff they claimed. Also, if the claim applied to TikTok, it applies to all the other social media platforms. But they don’t actually care about privacy. They only care that it was a platform they couldn’t control and that wasn’t catering to them.

    If they cared about privacy, they would have pushed general privacy legislation and/or regulation and oversight on all social media.

    The reversal was Biden realizing he does not have a good legacy, and with Trump, there was a lot more content this time around that was pro-Trump (and also TikTok gave him a million dollars). So now he gets to claim he “saved TikTok” when he was the start of the whole thing.



  • Subverting copy protection has always been a vague notion, because they sell you encrypted content, but they still have to sell you something with the decryption keys as well.

    Now, using the key to remove the encryption falls under “subverting”, but if you use the key to play the encrypted media directly, why does it matter what hardware it happens on?
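    The point about keys can be shown with a toy symmetric cipher (purely illustrative; nothing like the real AES scheme consoles use, and the key name is made up):

```python
# Toy XOR "cipher": if the vendor ships you both the encrypted content
# and the key, decryption works the same on any hardware.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"console-key"             # hypothetical key embedded in the device
plaintext = b"game content"
ciphertext = xor_crypt(plaintext, key)

# Any machine that holds the key can "play" the media:
assert xor_crypt(ciphertext, key) == plaintext
```

    The math doesn’t care whether the key is applied inside the official console or inside an emulator; the distinction is legal, not technical.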

    When it came to Switch emulation, you didn’t really circumvent the copy protection; you exported the keys from a Switch. The game images are basically dumped as-is.

    Yes, you could find the keys elsewhere, but if you dumped your own it wouldn’t really be considered subverting. Especially since, with the jig, you put the Switch into a state built into the Switch hardware. It’s not even an exploit like jailbreaks usually are. The recovery boot mode is an intended service feature.

    The only illegal thing would be getting copies of games and keys from other people.