• franzcoz@feddit.cl · 6 months ago

    This is pretty cool. I've been using these chats with Claude and ChatGPT on DDG for several weeks now; I guess the new aspect is that they've incorporated more models, like Mistral.

  • brbposting@sh.itjust.works · 6 months ago

    A couple of good points from the comments:

    Using LLMs to avoid the blank page problem:

    For AI, bring your own data:

    • demonsword@lemmy.world · 6 months ago

      The Ars Technica forums are alright; I usually take a look there whenever I read something on their site.

      • Lung@lemmy.world · 6 months ago

        These companies absolutely collect prompt data and user session behavior. Who knows what kind of analytics they can run on it at any point in the future, even if it's just assessing how happy users were with the answers based on their responses. But having it detached from your person is good, unless they can identify you based on metrics like time of day, speech patterns, etc.
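
        (As a toy illustration of the speech-patterns point, here's a sketch of naive stylometry: comparing character-trigram frequency profiles with cosine similarity. The example texts are made up, and real deanonymization would use far richer features.)

        ```python
        # Toy stylometry sketch: compare character-trigram frequency profiles
        # of two texts with cosine similarity. Real deanonymization would use
        # far richer features (timing, vocabulary, typos, etc.).
        from collections import Counter
        from math import sqrt

        def profile(text: str, n: int = 3) -> Counter:
            t = text.lower()
            return Counter(t[i:i + n] for i in range(len(t) - n + 1))

        def cosine(a: Counter, b: Counter) -> float:
            dot = sum(a[g] * b[g] for g in set(a) & set(b))
            norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
            return dot / norm

        known = "I reckon this ain't the best way, to be honest."
        unknown = "To be honest, I reckon that ain't how I'd do it."
        print(cosine(profile(known), profile(unknown)))  # higher -> more similar style
        ```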

        • just_another_person@lemmy.world · edited · 6 months ago

          Prompt data is pointless and useless without a human to create a feedback loop for it, at which point it wouldn't have context anyway. It would also take human effort to correct spelling and other user errors at the outset. Hugely pointless and unreliable.

          Not to mention, what good would it do for training? It wouldn’t help the model at all.

          • Lung@lemmy.world · 6 months ago

            You can collect the data and figure out how to use it later. Just look at the recent Google leaks and what they collect: it's literally everything, down to click durations and full walks through the site.

            Collecting data about user interests is valuable in itself, and it's plausible to analyze it with various metrics, even something as simple as sentiment analysis, which has been done broadly and predates modern ML by a long margin (the wiki page on it is worth a read).

            But yeah, just think about something like Google Trends, which tracks interest in topics, as an example of what such data could be used for. And deanonymizing the inputs is probably possible to some degree, aside from the obvious trust we place in DDG as a centralized point of failure.
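
            Even something this crude, run over stored prompts, starts to build a mood/interest profile (a toy sketch; the word lists are made up):

            ```python
            # Toy sentiment scorer over stored prompts: made-up word lists,
            # but enough to show how little machinery profiling requires.
            import re

            POSITIVE = {"great", "thanks", "perfect", "love", "works"}
            NEGATIVE = {"wrong", "broken", "useless", "hate", "error"}

            def sentiment(prompt: str) -> int:
                words = re.findall(r"[a-z']+", prompt.lower())
                return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

            stored_prompts = [
                "this answer is great, thanks",
                "that code is broken and useless",
            ]
            for p in stored_prompts:
                print(p, "->", sentiment(p))  # positive vs. negative interactions
            ```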

            • just_another_person@lemmy.world · 6 months ago

              You’re confusing analytics with direct input storage and reuse of prompt data to train somehow, as in your original comment.

              Analytics has absolutely nothing to do with their model usage and training, and would be pointless for it. Observing keywords and interests is standard analysis stuff. I don't think anyone even cares about it anymore.

        • RagingRobot@lemmy.world · 6 months ago

          Not who you asked, but you don't want your AI to train itself on the questions random users ask, because that could introduce incorrect or offensive information. For this reason, LLMs are usually trained and used in separate steps. If a user gave the LLM private information, you wouldn't want it to learn that information and pass it on to other users, so there are usually protections in place to stop it from learning new things while just processing requests.
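
          A minimal PyTorch sketch of that separation (a tiny stand-in model, not a real LLM): at serving time the weights are frozen, so nothing a user types is learned unless the operator later runs a separate training job on logged chats.

          ```python
          # Sketch: at inference time the weights are frozen, so user input is
          # processed but never learned. (nn.Linear is a stand-in for an LLM.)
          import torch
          import torch.nn as nn

          model = nn.Linear(8, 8)  # stand-in for a real model
          model.eval()             # inference mode

          with torch.no_grad():    # no gradients -> no weight updates
              user_input = torch.randn(1, 8)  # stand-in for a tokenized prompt
              _ = model(user_input)

          # Learning from users would require a separate, deliberate
          # fine-tuning step on logged data -- it cannot happen here.
          ```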

    • Evotech@lemmy.world · 6 months ago

      Not really; it depends on the implementation.

      It's not like DDG is going to keep training their own version of Llama or Mistral.

      • regrub@lemmy.world · 6 months ago

        I think they mean that a lot of careless people will give the AIs personally identifiable information or other sensitive information. Privacy and security are often breached due to human error, one way or another.

        • Evotech@lemmy.world · 6 months ago

          But these open models don't really take new input into their weights at any point. They don't normally do that kind of inference-time training.
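
          If you wanted to convince yourself of that, a quick sanity-check sketch (again with a stand-in model, not a real LLM): hash the weights before and after generating, and they come out identical.

          ```python
          # Sanity-check sketch: hash the parameters before and after a forward
          # pass to confirm plain inference leaves them untouched.
          import hashlib
          import torch
          import torch.nn as nn

          model = nn.Linear(4, 4)  # stand-in for an LLM

          def weights_hash(m: nn.Module) -> str:
              data = b"".join(p.detach().numpy().tobytes() for p in m.parameters())
              return hashlib.sha256(data).hexdigest()

          before = weights_hash(model)
          with torch.no_grad():
              model(torch.randn(1, 4))
          assert weights_hash(model) == before  # weights are bit-for-bit identical
          ```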

          • regrub@lemmy.world · 6 months ago

            That's true, but there's no way for us to know that these companies aren't storing queries in plaintext on their end (although they would run out of space pretty fast if they did).

        • shotgun_crab@lemmy.world · 6 months ago

          But that's human error, as you said; the only way to fix it is to use the tool correctly as a user. AI is a tool, and it should be handled correctly like any other tool, be it a knife, a car, a password manager, a video recording program, a bank app, or whatever.

          I think a bigger issue here is that many people don't care about their personal information as much as they care about their lives.

  • AutoTL;DR@lemmings.world (bot) · 6 months ago

    This is the best summary I could come up with:


    On Thursday, DuckDuckGo unveiled a new “AI Chat” service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity.

    While the AI models involved can output inaccurate information readily, the site allows users to test different mid-range LLMs without having to install anything or sign up for an account.

    DuckDuckGo’s AI Chat currently features access to OpenAI’s GPT-3.5 Turbo, Anthropic’s Claude 3 Haiku, and two open source models, Meta’s Llama 3 and Mistral’s Mixtral 8x7B.

    However, the privacy experience is not bulletproof because, in the case of GPT-3.5 and Claude Haiku, DuckDuckGo is required to send a user’s inputs to remote servers for processing over the Internet.

    Given certain inputs (i.e., “Hey, GPT, my name is Bob, and I live on Main Street, and I just murdered Bill”), a user could still potentially be identified if such an extreme need arose.

    With DuckDuckGo AI Chat as it stands, the company is left with a chatbot novelty with a decent interface and the promise that your conversations with it will remain private.


    The original article contains 603 words, the summary contains 192 words. Saved 68%. I’m a bot and I’m open source!

  • Beaver [she/her]@lemmy.ca · edited · 6 months ago

    I could use that!

    Update: it works fantastically and lets you easily switch between different AI models.

  • InfiniWheel@lemmy.one · 6 months ago

    This has been available for most of the year. What took any tech news org so long to even acknowledge its existence?

  • 01011@monero.town · 6 months ago

    I started using it when DDG and Startpage went down. Seems pretty handy. Good to know they’ve added more AI models.

    • hikaru755@feddit.de · 6 months ago

      Training and fine-tuning happen offline for LLMs; it's not like they continuously learn by interacting with users. Sure, the company behind one might record conversations and use them to further tune the model, but these models don't inherently need that.

    • nifty@lemmy.world (OP) · 6 months ago

      You can train models of all kinds without disclosing anything personal about a user; see also differential privacy.
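
      For instance, here's a toy sketch of the Laplace mechanism, the classic differential-privacy building block: noise calibrated to sensitivity/epsilon is added to an aggregate statistic, so no single user's record is identifiable from the output. The epsilon value and example queries below are made up.

      ```python
      # Toy Laplace mechanism: add noise scaled to sensitivity/epsilon to an
      # aggregate count so no individual record is identifiable from the output.
      import random

      def dp_count(records, predicate, epsilon=0.5, sensitivity=1.0):
          true_count = sum(1 for r in records if predicate(r))
          b = sensitivity / epsilon  # Laplace scale
          # Difference of two Exp(1/b) draws is Laplace(0, b) noise.
          noise = random.expovariate(1 / b) - random.expovariate(1 / b)
          return true_count + noise

      queries = ["how do I...", "my name is Bob and...", "fix this bug"]  # made up
      print(dp_count(queries, lambda q: "bob" in q.lower()))
      ```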

    • 01011@monero.town · 6 months ago

      “Keep in mind that, as a model running through DuckDuckGo’s privacy layer, I cannot access personal data, browsing history, or user information. My responses are generated on-the-fly based on the input you provide, and I do not have the ability to track or identify users.”

      • Anas@lemmy.world · 6 months ago

        Let’s be honest, regardless of whether or not this is true, it’s been instructed to say that.

  • Autonomous User@lemmy.world · edited · 6 months ago

    I don't see how we can prove this. Paying them to also spy on us is bad, but letting them replace our own software (c/localllama) with their service is even worse. My funds are better spent on local AI development or a device upgrade.

    • IHeartBadCode@kbin.run · 6 months ago

      Honest question. How does their service “replace” an open source LLM? If I’ve got locallama on my machine, how does using their service replace my local install?

  • whoisthedoktor@lemmy.wtf · 6 months ago

    And this is why I stopped using DDG. I swear, I’m just going to have to throw away my computer in the future if this fucking AI bullshit isn’t thrown away like the thieving, energy-sucking, lying pile of garbage that it is.

    • nifty@lemmy.world (OP) · 6 months ago

      If it's using different AI models and allowing anonymity, I'm not sure what the issue is. Do you also object to using a calculator?

      • considine@lemmy.ml · 6 months ago

        Calculator?! Those thieving, energy-sucking piles of garbage! Abacus till I die!

        But seriously, AI is insidious in how it data mines us to give us answers, and data mines our questions to build profiles of users. I distrust assurances of anonymity by big data corpos.

        • nifty@lemmy.world (OP) · edited · 6 months ago

          I'm not sure what method DDG is using for their model updates; I think it's only fair for journalists to follow up with them for clarification. Local LLMs, the kind you download to your own machine, would circumvent the privacy concerns as long as you're not updating the weights in some way.

          Edit: to clarify, I meant updating the weights via online learning; you can still update a local model by downloading new pre-trained weights.
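
          For what it's worth, here's a minimal sketch of fully local inference with the Hugging Face transformers library. The model name is only an example, and it assumes you've already downloaded the weights and have the hardware to run them.

          ```python
          # Minimal local-inference sketch with Hugging Face transformers.
          # Assumes the weights are already downloaded; the model name is only
          # an example. The prompt never leaves the machine, and nothing here
          # updates the weights -- fine-tuning would be a separate step.
          from transformers import pipeline

          chat = pipeline(
              "text-generation",
              model="mistralai/Mistral-7B-Instruct-v0.2",  # example model
          )
          out = chat("Explain differential privacy in one sentence.", max_new_tokens=60)
          print(out[0]["generated_text"])
          ```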