This is not the right place to advertise your investing app. This is gross.
Edit: also a pretty brash, reckless, and crappy use of an AI image generator. If your app is so great, couldn’t you afford to pay an artist?
Mark Zuckerberg and Nick Clegg are bad people. There is no ethical way to give militaries this kind of tool. They will use it to kill innocent people, while disingenuously touting its ‘ability’ to save lives.
If you still have any kind of Meta account or use any of their products, you are helping to legitimize them and give them more power. I’m tired of the “it helps me buy junk in my neighborhood” or “but event invites!” excuses. Nope: they’re bad people, running a bad company that causes real harm to real people every day. If you care at all about the health of society, you must stop giving them the ammunition they turn around and use against you. Stop. Using. Meta. Products.
I imagine CFPB is at the very top of the hit-list to get fully doge’d. Because fuck consumers, why would they ever need protection from anti-union, discriminatory billionaire oligarchs like Musk?
The only legitimate use I can think of for AI in podcasting might be for realtime translations so people who don’t speak the language of the podcaster can still listen. Even that makes me feel weird, but I think it could be done ethically-ish. Same deal for voice-cloning, I think that would be super-useful for realtime translations, so listeners still kinda hear the host’s voice, even translated. But every other use I can think of is ripe for abuse and won’t result in quality content.
This is a good analogy, and is one big reason I won’t trust any AI until the ‘answers’ are guaranteed and verifiable. I’ve worked with people who needed to have every single thing they worked on double-checked for accuracy/quality, and my takeaway is that it’s usually faster to just do it myself. Doing a properly thorough review of someone else’s work, knowing that they historically produce crap, takes just about as long as doing the work myself from scratch. This has been true in every field I’ve worked in, from academia to tech.
I will not be using any of Apple’s impending AI features, they all seem like a dangerous joke to me.
Exactly. I wish more people had this view of interns. Unpaid ones, at the very least. I worked with a few, and my colleagues would often throw spreadsheets at them and have them do meaningless cleanup work that no one would ever look at. Whenever it was my turn to ‘find work’ for the interns, I would just have them fully shadow me and do the work I was doing, as I was doing it. Essentially duplicating the work, but with my products being the ones held to final submission standards. They had some great ideas, which I incorporated into the final versions, and they could see what the role was actually like by doing the work without worrying about messing anything up or bearing any actual responsibility. Interns are supposed to benefit from having the internship. The employer, by accepting the responsibility of having interns, shouldn’t expect to get anything out of it other than the satisfaction of helping someone gain experience. Maybe a future employee, if you treat them well.
Yeah totally, that’s an important distinction. Paid interns are definitely different from unpaid interns, and can legally do essentially the same work as a regular paid employee.
The way the distinction was explained to me is that an unpaid intern is essentially a student of the company: they are there to learn, and they often get university credit for the internship. A paid internship is essentially an entry-level job with the expectation that you might get more on-the-job training than a ‘normal’ employee.
This article doesn’t say if the intern was paid, but it does say the company reported the behavior to the intern’s university, so I’d guess it was unpaid.
I work at a small tech company, by no means big tech. I know it’s common for interns to be treated as employees, but it’s usually in violation of labor law. It’s one of those things that is extremely common, but no less illegal.
The US Department of Labor has a seven-part test to help determine whether an intern is classified properly. #6 is particularly relevant to this.
There’s very little detail in the article. I’d be curious to find out exactly what the intern’s responsibilities were, because based on the description in the article it seems like this was a failure of management, not the intern. Interns should never have direct access to production systems. In fact, in most parts of the world (though probably not China, I don’t know) interns are there to learn. They’re not supposed to do work that would otherwise be assigned to a paid employee, because that would make them an employee, not an intern. Interns can shadow a paid employee to learn from them on the job, but interns are really not supposed to have any actual responsibilities beyond gaining experience for when they go on the job market.
Blaming the intern seems like a serious shift of responsibility. The fact that the intern was able to do this at all is the fault of management for not supervising their intern.
Think about it this way: remember those upside-down answer keys in the back of your grade school math textbook? Now imagine if those answer keys included just as many incorrect answers as correct ones. How would you know if you were right or wrong without asking your teacher? Until an LLM can guarantee a right answer, and back it up with real citations, it will continue to do more harm than good.
This is awesome, we need more rules like this, and Khan is absolutely nailing it. But I’m worried it won’t stick. I think companies have taken our absentmindedness and laziness for granted, and have made tons of money because of it. I don’t think they’ll give that up without a fight, but hopefully they lose. Unless the Supreme Court gets involved, and then we can all but guarantee they’d rule against these consumer protections.
“Too often, businesses make people jump through endless hoops just to cancel a subscription,” FTC Chair Lina Khan said in a statement. “The FTC’s rule will end these tricks and traps, saving Americans time and money. Nobody should be stuck paying for a service they no longer want.”
It’s such a basic and obvious consumer protection.
I guess I’ve been under a rock, but I hadn’t heard of this company until now. Did they really name themselves Nikola Motor? Were they expecting to be bought out by Tesla or something? This would be like me opening a store called George next to an existing store called Washington. Weird.
This seems like a great way to turn architects into spellcheckers and glorified model trainers, and make buildings incredibly unsafe. This is one of those use cases that strikes me as wholly irresponsible and dangerous. I understand that a lot of this kind of work is time consuming and difficult, but if you tell me a chatbot helped plan and design a building, I’m not setting foot inside that building.
Bingo. They should invest in their own company, they have the money. There’s no reason for taxpayers to play any part in this.
As of October 2024 Microsoft has a market cap of $3.109 Trillion. (Source). So uh, fuck that.
Wow, it’s hard to know just how impactful this will be, but it sounds like they’ve got something here.
its batteries which it said avoid using metals such as lithium, cobalt, graphite and copper, providing a cost reduction of up to 40% compared to lithium-ion batteries.
Altech said its batteries are completely fire and explosion proof, have a life span of more than 15 years and operate in all but the most extreme conditions.
That’s huge, especially the fire and explosion proof part.
I’ll be honest, I looked at this with the intention of poking holes, but that was a surprisingly thorough article on researchers doing a year-long study trying to figure out practical uses for AI. I for one am still not convinced there’s a practical or truly ethical use at the moment, but I’m glad to see researchers trying. Their results were decidedly mixed, and I still think all the trade-offs don’t work in our favor at the moment, but this was a surprisingly balanced article with a fair amount of subtlety on an issue that needs to be examined critically. They admitted that hallucinations are still a huge wildcard that no one knows how to deal with, which is rare. The headline is dumb, but because of how skeptical and distrustful I am of this massive AI bubble, I’m glad there are still researchers putting in the work to figure this shit out.
Wait, I never used Snapchat, so I could be totally off base, but don’t Snapchat messages get automatically deleted? Isn’t that the whole point? Haven’t they already been caught deceiving users into thinking their deleted photos are actually gone? This just seems so gross.
I get what you’re saying, but internal company communications (especially at publicly traded companies) still need to be accessible to valid legal inquiries; otherwise there is absolutely no hope for any kind of accountability. Having IMs between end-users be off the record by default seems totally reasonable and good to me, but internal communications should not be deletable at all, let alone manually by executives. The US Government has record retention schedules, through which non-records (water-cooler talk or the digital equivalent) are kept private while real records are identified and preserved. This is the kind of thing Congress needs to regulate for private companies. Google blatantly and actively deleted conversations they knew would be relevant to the case, and that’s unacceptable.