With this model everything about you will be shared with every appliance in the vicinity.
What is really annoying is that there are a lot of really good data modeling applications; they are just stuck in research areas. Generative AI is absolutely a waste of resources, but a ton of money and energy is spent on it instead of on the applications that are actually bearing fruit.
You can set it yourself, but it then verifies your address is real using Bing Maps, and that database is really lacking. If it doesn’t find an entry, it won’t let you enter it. I am told this will be moving to Azure Maps soon, which I hope is better.
Anyway, we are leveraging manual network entries to find phones at our locations using the WAP BSSID or, for Ethernet, LLDP, but the latter isn’t working. I can show LLDP coming in on a pcap but Teams doesn’t see it - another ticket for Microsoft.
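For what it’s worth, here is a minimal sketch of how I sanity-check that the frames really are LLDP before blaming Teams, assuming scapy is installed and the capture is saved as capture.pcap (placeholder filename):

```python
# Quick check that a capture actually contains LLDP frames (EtherType 0x88CC).
# Assumes scapy is installed; "capture.pcap" is a placeholder filename.
from scapy.all import rdpcap, Ether

packets = rdpcap("capture.pcap")
lldp_frames = [p for p in packets if Ether in p and p[Ether].type == 0x88CC]

print(f"LLDP frames found: {len(lldp_frames)}")
for frame in lldp_frames:
    # The source MAC tells you which switch/port is doing the advertising.
    print(frame[Ether].src, "->", frame[Ether].dst)
```

If the frames show up here but Teams still reports nothing, that points at the client rather than the capture.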
I am in the middle of trying to get E911 functional for Teams Direct Routing calls based off LIS data, and Teams can’t even correctly determine the state I am in, much less my current address. It took multiple tickets to get our corporate headquarters to show up correctly instead of an address a half mile away.
I foresee getting a lot of tickets from this feature.
The gate here is really cool. I remember from my optics classes all the different ways to encode bits on a photon over fiber; I am curious which properties are more and less suitable for this application.
If I want to know when I’m going to die, I’ll ask an actuary like we did in the old days.
Because they want to use antiporn laws to restrict books and other media with LGBTQ content.
When the 8-bit quants hit, you could probably lease a 128GB system on RunPod.
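Rough back-of-the-envelope math, assuming the usual approximation of about one byte per parameter at 8-bit plus some overhead for activations and KV cache (the 20% figure is an assumption, not a benchmark):

```python
# Back-of-the-envelope memory estimate for an 8-bit quantized model.
# The 20% overhead for activations/KV cache is an assumption, not a measurement.
def quantized_size_gb(params_billions: float, bits: int = 8, overhead: float = 0.20) -> float:
    weights_gb = params_billions * (bits / 8)  # ~1 GB per billion params at 8-bit
    return weights_gb * (1 + overhead)

for size in (70, 100):
    print(f"{size}B params @ 8-bit: ~{quantized_size_gb(size):.0f} GB")
```

By that math, anything up to roughly 100B parameters should squeeze into 128GB.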
Yeah, I mean the AI being shoveled at us by techbros. Actual ML stuff is currently and will continue to be useful for all sorts of not-sexy but vital research and production tasks. I do task automation for my job and I use things like transcription models and OCR; my company uses smart sorting based on rapid image recognition and other really cool ways to have computers do things that humans are bad at. It’s things like LLMs that just aren’t there - yet. I have seen very early research on AI that is trained to actually understand language and learns by context. It’s years away, but eventually we might see AI that really can do what the current AI companies are claiming.
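To give a concrete example of the boring-but-useful automation I mean, here is a minimal OCR sketch, assuming pytesseract and Pillow are installed and "scanned_invoice.png" is a stand-in filename:

```python
# Minimal OCR step for a task-automation pipeline: pull text out of a scanned image.
# "scanned_invoice.png" is a placeholder; any scanned document image works.
from PIL import Image
import pytesseract

image = Image.open("scanned_invoice.png")
text = pytesseract.image_to_string(image)

# In a real pipeline this text would feed sorting/routing logic instead of being printed.
print(text)
```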
Back in the 90s in college I took a Technology course, which discussed how technology has historically developed, and why some things are adopted while other seemingly good ideas don’t make it.
One of the things that is required for a technology to succeed is public acceptance. That is why AI is doomed.
See, you might have noticed that I never claimed perfect English existed in Germany (or anywhere). You seem to be attempting to win an argument that doesn’t exist.
See, that is what I mean. Nobody speaks “perfect” English, not even native speakers, because languages are not prescriptive. Their function is to communicate ideas and if you have successfully communicated then you have used language “perfectly”.
Only because of their exacting standards. Even when I lived in Germany in the 90s the only time I had trouble understanding someone speaking English was when our realtor was trying to be racist but didn’t know the English words.
“We are sorry you noticed, we didn’t think anyone would read all that.” -Adobe, probably
This is why I keep my OS installs on different drives.
Just don’t ask for support for your dual boot not detecting Windows. God help you.
I know it’s WindowsCentral, but the article has some pretty naive takes. Given the propensity of threat actors to target Windows due to its market share, it’s impossible not to see a system that records user activity as a huge treasure trove for both malware and hackers.
It also doesn’t mention that Microsoft claimed that it would be impossible to exfiltrate Recall data and of course researchers found it not only possible but trivial, with the data lacking even basic protections. Assurances that there are mechanisms to prevent Recall from secretly monitoring you mean nothing when prior assurances about safety have been found to be paper thin at best.
Further it ignores that telemetry gathered by Windows has dramatically increased in the last several years with methods to disable it being eliminated or undone by OS updates. Microsoft is hungry for user data and it would be absurdly naive to think that Recall won’t be a tool they use to gain more of it. If not now, then definitely later.
The author does point out that Recall has been weirdly under wraps, avoiding the usual test bed for new feature rollouts. Microsoft has been acting shady about the feature, and then the feature itself does shady things (like recording PII, credit card data, etc.), so of course users are going to think the worst. At this point it’s a survival tactic.
Microsoft doesn’t have trust issues because of bad PR or a few missteps. Microsoft has trust issues because they have violated user trust repeatedly for decades. They have done nothing to make users feel like they care at all about keeping Windows secure and safe, and they clearly have no regard for user privacy. The only question is whether this backlash will do anything to make Microsoft reconsider the way it treats its users. I predict they will learn all the wrong lessons from this.
They say that, but when Ken Paxton subpoenas them they will say they have no choice. It would be better to use an app that doesn’t store this data server side at all.