• 0 Posts
  • 162 Comments
Joined 1 year ago
Cake day: June 18th, 2023






  • AI learning isn’t the issue; it’s not something we will be able to put a lid on either way.

    So… there is no Artificial Intelligence. The AI cannot hurt you. It is just a (buggy) statistical language parsing system. It does not think, it does not plan, it does not have goals, it does not understand, and it doesn’t even really “learn” in a meaningful sense.

    Either it destroys or saves the world.

    If we’re talking about machine learning systems based on multi-dimensional statistical analyses, then it will do neither. Both extremes are sensationalism, and any argument that either outcome will come from the current boom of ML technology is utter nonsense designed to drive engagement.

    It doesn’t need to learn much to do so besides evolving actual self-agency and sovereign thought.

    Oh, is that all?

    No one on the planet has any idea how to replicate the functionality of consciousness. Sam Altman would very much like you to believe that his company is close to achieving this so that VCs will see the public interest and throw more money at him. Sam Altman is a snake oil salesman.

    What is a huge issue is the secretive, non-consensual mining of people’s identities and expressions.

    And then acting all normal about it.

    This is absolutely true, and the collection and aggregation of data on human behavior should be scaring the shit out of everyone. The potential for authoritarian abuse of such data collection and tracking is disturbing.



  • I always recommend buying enterprise-grade hardware for this type of thing, for two reasons:

    1. Consumer-grade hardware is just that - it’s not built for long-term, constant workloads (that is, server workloads), and it’s not built for redundancy. The Dell PowerEdge has hot-swappable drive bays, a hardware RAID controller, dual CPU sockets, 8 RAM slots, dual built-in NICs, the iDRAC interface, and redundant hot-swappable PSUs. It’s designed to be on all the time, reliably, and it can be remotely managed (see the sketch at the end of this comment).

    2. For a lot of people who are interested in this, a homelab is a path into a technology career. Working with enterprise hardware is better experience.

    Consumer CPUs won’t handle server tasks the way server CPUs do. If you want to run a server, you want hardware that’s built for server workloads - stability, reliability, redundancy.

    So I guess yes, it is like buying an old truck? Because you want to do work, not go fast.
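
    On the “remotely managed” point from reason 1: iDRAC speaks standard IPMI, so you can script health checks against it from any machine on the network. Here’s a minimal sketch, assuming ipmitool is installed and using placeholder address and credentials (swap in your own):

    ```python
    # Minimal sketch: poll a Dell iDRAC (or any IPMI BMC) for power state and sensor readings.
    # The host and credentials below are placeholders, not values to keep on real hardware.
    import subprocess

    IDRAC_HOST = "192.168.1.120"   # hypothetical iDRAC address on your LAN
    IDRAC_USER = "root"
    IDRAC_PASS = "changeme"

    def ipmi(*args: str) -> str:
        """Run one ipmitool command against the BMC over the network (lanplus) and return its output."""
        cmd = ["ipmitool", "-I", "lanplus", "-H", IDRAC_HOST,
               "-U", IDRAC_USER, "-P", IDRAC_PASS, *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        print(ipmi("chassis", "power", "status"))  # e.g. "Chassis Power is on"
        print(ipmi("sdr", "elist"))                # fan speeds, temperatures, PSU status
    ```

    That kind of out-of-band monitoring (and remote power control when something wedges) is exactly what you give up with consumer desktop boards.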


  • Hmm, I don’t have direct experience with ThinkServers, but what I see on eBay looks like standard ATX hardware… which is not really what you want in a server.

    The Dell motherboard has dual CPU sockets and 8 RAM slots. The PSUs are not the common ATX desktop format because there are two of them and they are hot-swappable. This is basically a rack server repackaged into a desktop tower case, not an ATX desktop with a server CPU socket.









  • NaibofTabr@infosec.pub to Technology@lemmy.world · *Permanently Deleted* · 23 days ago

    I see, so your argument is that because the training data is not stored in the model in its original form, it doesn’t count as a copy, and therefore it doesn’t constitute intellectual property theft. I had never really understood what the justification for this point of view was, so thanks for that, it’s a bit clearer now. It’s still wrong, but at least it makes some kind of sense.

    If the model “has no memory of training data images”, then what effect do the images have on the model? Why is the training data necessary, and what is its function?
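
    To make that question concrete, here is roughly what “learning from” an image means mechanically: each training example nudges the model’s weights, so the example itself is not stored, but its influence permanently is. This is a toy sketch with a made-up linear model and random data, not any real diffusion architecture:

    ```python
    import numpy as np

    # Toy illustration: one gradient step on a single "training image".
    # The pixels are never stored anywhere, but they permanently shift the weights.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=16)      # hypothetical model parameters
    image = rng.normal(size=16)        # hypothetical training image (flattened pixels)
    target = 1.0                       # hypothetical label/score for this image

    prediction = weights @ image       # forward pass
    error = prediction - target        # how wrong the model was on this example
    gradient = error * image           # d(loss)/d(weights) for squared-error loss
    weights -= 0.01 * gradient         # update: the image's statistics get baked in here

    # `image` can now be deleted; `weights` still reflects it. Scale this up by billions
    # of steps and you have the relationship between training data and the final model.
    ```

    Which is the whole point of the question: if the images had no effect, you wouldn’t need them. The effect is the product.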


  • the presentation and materials viewed by 404 Media include leadership saying AI Hub can be used for “clinical or clinical adjacent” tasks, as well as answering questions about hospital policies and billing, writing job descriptions and editing writing, and summarizing electronic medical record excerpts and inputting patients’ personally identifying and protected health information. The demonstration also showed potential capabilities that included “detect pancreas cancer,” and “parse HL7,” a health data standard used to share electronic health records.

    Because as everyone knows, LLMs do a great job of getting specific details correct and always produce factually accurate output. I’m sure this will have no long-term consequences and will benefit all the patients greatly.
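
    For contrast, “parse HL7” is exactly the kind of job that doesn’t need an LLM - HL7 v2 messages are just delimited text. A rough sketch of doing it deterministically in Python (the sample message is made up, and a real parser would also handle components, repetitions, and escape sequences):

    ```python
    # Rough sketch: deterministic HL7 v2 parsing with plain string splitting.
    # The sample message below is invented for illustration only.
    SAMPLE_HL7 = "\r".join([
        "MSH|^~\\&|SENDAPP|SENDFAC|RECVAPP|RECVFAC|202401011200||ADT^A01|MSG00001|P|2.5",
        "PID|1||123456^^^HOSP^MR||DOE^JANE||19800101|F",
    ])

    def parse_hl7(message: str) -> dict:
        """Split an HL7 v2 message into segments (by carriage return) and fields (by '|')."""
        segments: dict = {}
        for raw in filter(None, message.split("\r")):
            fields = raw.split("|")
            segments.setdefault(fields[0], []).append(fields)
        return segments

    msg = parse_hl7(SAMPLE_HL7)
    print(msg["PID"][0][5])   # "DOE^JANE" - patient name (PID-5)
    ```

    A parser like this returns the same answer every single time it runs, which is rather the property you want when the field in question is somebody’s diagnosis or identity.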


  • NaibofTabr@infosec.pub to Technology@lemmy.world · *Permanently Deleted* · 24 days ago

    We’re not talking about a “style”; we’re talking about producing finished work. The image generation models aren’t style guides - they output final images produced by ingesting other images as training data. The source material might be actual art (or not), but it is generally the product of a real person (because ML ingesting its own output is very much a garbage-in, garbage-out system) who is typically not compensated for their work. So again, these generative ML models are ripoff systems, and nothing more. And no, typing in a prompt doesn’t count as innovation or creativity.