Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 1 Post
  • 55 Comments
Joined 1 year ago
Cake day: June 23rd, 2023


  • The article sucks. The FTC isn’t going after Microsoft’s cloud services because they’re good or bad; they’re going after Microsoft because of forced bundling. It’s the same abuse of monopoly power Microsoft was found guilty of when they started forcing everyone to use Internet Explorer.

    Microsoft is forcing customers to use their cloud services in all sorts of scenarios, many of which have no logical justification other than pushing customers into Azure.

    For example, if you have a lot of Windows servers in Azure, Microsoft will stop supporting you once you pass a certain threshold unless you also sign up for their enterprise cloud AD service.

    They already do this with regular Windows (business customers past a certain threshold of systems have to use AD), but in that case you can just stand up some Domain Controllers and call it a day. You can put them wherever you want: locally, in AWS, in Azure, wherever.

    With Windows servers in Azure, though, you’re forced to use Azure AD, or you lose support and possibly access to other bundled services. You can’t rely on Domain Controllers hosted anywhere else; they’ll let you run as many off-Azure DCs as you want, but they still have to be joined to and synchronizing with Azure AD.

    There are probably plenty of other anticompetitive tactics at work within the world of Azure, but that’s the big one I know off the top of my head.

  • Except there’s nothing illegal about scraping all the content from websites (including news sites) and putting it into your own personal database. That is, after all, how search engines work.

    It’s only illegal if you then distribute said copyrighted material without the copyright owner’s permission, because that’s what copyright is all about: distribution.

    The news sites in this case freely served the content to OpenAI’s crawlers. It’s not like OpenAI broke into these organizations to copy their databases of news articles.

    For the news sites to have a case, they need to demonstrate that OpenAI is creating a “derivative work” from their copyrighted material. That’s going to be a tough sell to judges and/or juries, though, because LLMs work not so differently from how humans do: they take in information and then produce similar information, by predicting the next word/token given a prompt (see the sketch at the end of this comment).

    If you read all of Stephen King’s books, for example, you might be better at writing horror stories. You may even start writing in a similar style! That doesn’t mean you’re violating his copyright by producing similar stories.
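
    To make the “predicting the next token” point concrete, here’s a minimal sketch of greedy next-token generation. The Hugging Face transformers library and GPT-2 are purely illustrative choices on my part, not a claim about how OpenAI’s systems are set up:

    ```python
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    # Small, freely available model used purely for illustration.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("It was a dark and stormy", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits          # scores for every token in the vocabulary
            next_id = logits[0, -1].argmax()    # greedy: take the single most likely next token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tok.decode(ids[0]))
    ```

    Every step of the loop is just answering “given everything so far, which token comes next?”, which is the mechanism described above.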



  • Riskable@programming.dev to Technology@lemmy.world · The Cult of Microsoft

    Ahaha! Microsoft employees are using AI to write (read: hallucinate) their own performance reviews, and managers are using that very same AI to “review” said performance reviews. That is exactly the dystopian vision of the future that OpenAI sells!

    What’s funny is that the “cult of Microsoft” is 100% bullshit, so the AI is being trained on bullshit, and as time goes on it’s being reinforced with its own hallucinated bullshit, because everyone is using it to bullshit the bullshitters in management who are demanding this bullshit!

  • As another (local) AI enthusiast, I think the point where AI goes from “great” to “just hype” is when it’s expected to generate the correct response, image, etc. on the first try.

    For example, telling an AI to generate a dozen images from a prompt, then picking a good one or reworking the prompt a few times until you get what you want (rough sketch at the end of this comment). That works fantastically well about 90% of the time, assuming you’re generating something it has been trained on.

    Expecting AI to respond to a query with the correct answer more than 50% of the time, or expecting it not to get things dangerously wrong? Hype. 100% hype.

    It’ll be a number of years before AI is trustworthy enough not to hallucinate bullshit or generate the exact image you want on the first try.
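
    Here’s a rough sketch of the “generate a dozen, keep the best” workflow using the diffusers library and Stable Diffusion 1.5; the library, model checkpoint, prompt, and file names are just example choices for a local setup:

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    # Example checkpoint; any locally downloaded model works the same way.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a watercolor painting of a 3D printer on a cluttered workbench"

    # Generate a dozen candidates with different seeds, save them all,
    # then eyeball the results and keep (or re-prompt from) the one you like.
    for seed in range(12):
        generator = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, generator=generator).images[0]
        image.save(f"candidate_{seed:02d}.png")
    ```

    The point is that nothing here depends on the model getting it right on the first try; you’re always choosing the best result from a batch.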