• 0 Posts
  • 203 Comments
Joined 2 years ago
Cake day: July 13th, 2023




  • Those have always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying “I do not have sufficient training on this subject or reliable sources for it to give you a confident answer.” It will always give you its best guess, even if it is completely hallucinating much of the data. The only way to identify the hallucinations, unless it is saying something obviously absurd on its face, is to do independent research to verify them, at which point you may as well have just researched it yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn’t working, or to give me code examples for modules I’ve never used. It has been completely wrong no small number of times, but in my particular use case that is pretty easy to confirm very quickly: the code either works as expected or it doesn’t, and code is always tested before releasing it anyway.

    In research, it is great at helping you find a relevant source for your research across the internet or in a specific database. It is usually very good at summarizing a source for you to get a quick idea about it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, correctly formatting your bibliography (with actual sources you provide or at least verify), etc. But you have to remember that it doesn’t “know” anything at all. It isn’t sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information in with reliable information because there is no difference to it.

    Put simply, it is not a reliable source of information… ever. Make sure you understand that.


  • The “funny” thing is, that’s probably not even at Elon’s request. I doubt that he is self-aware enough to know that he is a narcissist who only wants Grok to be his parrot. He thinks he is always right and wants Grok to be “always right” like him, but he would have to acknowledge some deep-seated flaws in himself to consciously realize that all he wants is for Grok to be the wall his voice echoes off of, and everything I’ve seen about the man indicates that he is simply not capable of that kind of self-reflection. The X engineers who have been dealing with the constant meddling of this egotistical man-child, however, surely have his measure pretty thoroughly by now; they knew that what Elon ultimately wants is more Elon, and would cynically create a Robo-Elon doppelganger to shut him the fuck up about it.









  • My wife’s family owns a fireworks store. We demo fireworks every year, record them on my phone, and post them on the store’s YouTube channel. We have people who watch and comment on them all the time, as soon as we post them. We also have QR codes on our price tags linking to the videos so people can scan them and watch while shopping, and it is an extremely effective tool for sales. But we might be the exception.





  • Linux developers can’t name their products any better than they name their variables.

    “Programming done, time to publish, now it just needs a name…” *briefly pauses, then smashes face into keyboard* “There! … ehh, no, still missing something.” *clicks a random spot, types X* “Perfect! Send it!”


  • No, it’s a tool, created and used by people, and you’re not treating the tool like a person. Tools are obviously not subject to laws, can’t break laws, etc. Their usage is subject to laws. If you use a tool to intentionally, knowingly, or negligently do things that would be illegal for you to do without the tool, then that’s still illegal. The same goes for accepting money to give others the privilege of doing those illegal things with your tool, without any attempt at moderating said things that you know are happening. You can argue that maybe the law should be stricter with AI usage than with a human if you have a good legal justification for it, but there’s really no way to justify being less strict.


  • It’s pretty simple as I see it. You treat AI like a person. A person needs to go through legal channels to consume material, so piracy for AI training is as illegal as it would be for personal consumption. Consuming legally possessed copyrighted material for “inspiration” or “study” is also fine for a person, so it is fine for AI training as well. Commercializing derivative works that infringe on copyright is illegal for a person, so it should be illegal for an AI as well. All produced materials, even those inspired by another piece of media, are permissible if not monetized; otherwise they need to be suitably transformative. That line can be hard to draw even when AI is not involved, but that is the legal standard for people, so it should be for AI as well. If I browse through DeviantArt and learn to draw like my favorite artists from their publicly viewable works, then make a legally distinct cartoon mouse by hand in a style similar to someone else’s and sell prints of that work, that is legal. The same should be the case for AI.

    But! Scrutiny for AI should be much stricter, given its inherent lack of true transformative creativity. And any AI that has used pirated materials should be penalized, either by massive fines or by wiping its training and starting over with only legally licensed, purchased, or otherwise public-domain materials.