• cm0002@lemmy.world

    It’s verifiable: you can observe the connections it makes.

    Admittedly, you can’t see the contents of the packets themselves, but you can still easily tell whether it’s doing anything close to sending a constant stream of audio.
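
    For a rough sense of scale (assumed codec numbers, not measurements from any real device), here’s what a constant audio stream would add up to:

    ```python
    # Back-of-the-envelope: traffic a constant audio stream would generate.
    # Codec parameters are illustrative assumptions, not measured values.
    SECONDS_PER_DAY = 24 * 60 * 60

    raw_rate = 16_000 * 2         # 16 kHz, 16-bit mono PCM: ~32 KB/s
    compressed_rate = 16_000 / 8  # ~16 kbit/s speech codec: ~2 KB/s

    for label, rate in [("raw PCM", raw_rate), ("compressed", compressed_rate)]:
        print(f"{label}: {rate / 1024:.1f} KiB/s ~ "
              f"{rate * SECONDS_PER_DAY / 1024**3:.2f} GiB/day")
    ```

    Even heavily compressed, that’s a sustained upload that sticks out immediately on a router’s traffic graph.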

    • Ekky@sopuli.xyz

      Assuming that they parse everything locally, which appears to be the case, why would it have to send a constant stream of audio? A small list/packet of keywords of a few bytes or KB once a day would suffice for most telemetry needs (including ad analysis and other possible spying purposes).
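
      To put a number on it (purely made-up keyword counts, no real protocol implied):

      ```python
      import json

      # Hypothetical daily keyword counts; no real vendor protocol implied.
      daily_hits = {"pizza": 3, "vacation": 1, "toyota": 2}
      payload = json.dumps(daily_hits, separators=(",", ":")).encode()
      print(len(payload), "bytes")  # a few dozen bytes, easy to bury in routine telemetry
      ```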

      Also, one ought to be able to see the contents of the packets by retrieving the device’s SSL session key, so this should also be falsifiable.
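
      On an ordinary computer this is routine: many TLS stacks can dump per-session secrets in the NSS key-log format that Wireshark understands. A sketch in Python (3.8+):

      ```python
      # Sketch: make a TLS session decryptable in Wireshark by logging its
      # per-session secrets (NSS key-log format) while capturing the traffic.
      import ssl
      import urllib.request

      ctx = ssl.create_default_context()
      ctx.keylog_filename = "keys.log"  # point Wireshark's TLS prefs at this file

      with urllib.request.urlopen("https://example.com", context=ctx) as resp:
          resp.read(100)
      # Load keys.log in Wireshark alongside the capture to see the plaintext.
      ```

      Getting those secrets off the speaker itself is the open question, of course.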

      • cm0002@lemmy.world

        Most Google Home speakers do not have the processing power for true local speech processing.

        Local processing, in the context of a smart home speaker, means listening for a specific trigger keyword and nothing else; that doesn’t require much oomph locally.

        A system like the one you describe is totally possible, but not with the hardware you find in the average smart speaker, so a constant stream of audio needs to be sent off to the cloud somewhere.
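
        Roughly, the control flow is this (a structural sketch, not real firmware; the “detector” is a trivial stand-in just so it runs):

        ```python
        # Structural sketch of the wake-word-only design: a cheap always-on
        # check gates everything, and audio only leaves the device once the
        # trigger fires. Real devices use a small fixed-keyword model; the
        # threshold below is a stand-in so the sketch is runnable.
        import collections
        import random

        RING_FRAMES = 50        # ~1 s of 20 ms frames kept in RAM for context
        WAKE_THRESHOLD = 0.99   # detector confidence needed to wake

        def read_audio_frame():
            """Stand-in for the mic driver: one frame's worth of 'loudness'."""
            return random.random()

        def wake_word_detected(frame):
            """Stand-in for the tiny local keyword model -- the only local smarts."""
            return frame > WAKE_THRESHOLD

        def stream_to_cloud(frames):
            """Stand-in for the network path that only opens after the trigger."""
            print(f"uploading {len(frames)} buffered frames plus live audio...")

        ring = collections.deque(maxlen=RING_FRAMES)
        for _ in range(500):                 # ~10 simulated seconds
            frame = read_audio_frame()
            ring.append(frame)
            if wake_word_detected(frame):
                stream_to_cloud(list(ring))  # everything heavier is server-side
                ring.clear()
        ```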

        Also, yeah, it’s not impossible to drop in on an SSL connection, but the embedded nature of the speakers makes it a bit more difficult.

        • Ekky@sopuli.xyz

          Thank you for the explanation, though the underlying requirements for keeping a list locally appear to remain much the same, since you really only need to add a few trigger words to the “dumb, always-on” local parser (such as your top 1000 advertisers’ company or product names). After all, I’d imagine we do not require context, but only need to know whether a word was said, not unlike listening for the “real” trigger word.
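
          Something like this, where the word list and the matcher itself are hypothetical (the matcher’s feasibility being your point, I know):

          ```python
          # Sketch of the bookkeeping only: count hits against a made-up
          # advertiser word list, storing no context. The local matcher that
          # would emit words is hypothetical.
          AD_KEYWORDS = {"pizza", "vacation", "toyota"}  # stand-in top-1000 list
          hits = {}

          def on_recognized_word(word):
              """Called whenever the (hypothetical) local matcher hears a word."""
              if word in AD_KEYWORDS:
                  hits[word] = hits.get(word, 0) + 1  # just a count, no context

          for w in ["order", "pizza", "tonight", "pizza"]:
              on_recognized_word(w)
          print(hits)  # {'pizza': 2} -> fits in a tiny daily payload
          ```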

          This is of course only one of many ways to attack such a problem, and I do not know how they would ultimately do it, assuming they were interested in listening in on their users in the first place.

          And yes, embedded devices are slightly harder to fiddle with than your own computer, but I’d bet they didn’t actually take the time to build a proper gate array and instead just run some barebones Linux, which most likely means UART access!
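
          If there’s a live UART, reading it is about this involved (uses pyserial; the port name and the common 115200 baud rate are guesses you’d confirm per device):

          ```python
          # Sketch: reading an embedded device's UART console after locating
          # the pads and wiring a 3.3 V USB-serial adapter. Requires pyserial
          # (pip install pyserial); port and baud rate are per-device guesses.
          import serial

          with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
              # Many embedded Linux boards dump their boot log here without
              # authentication; some drop into a bootloader prompt or root shell.
              while True:
                  line = port.readline()
                  if line:
                      print(line.decode(errors="replace"), end="")
          ```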