Go for it. I’m not sure which forum it belongs in.
They know exactly what they’re doing. Their editors have decided that American democracy is a “partisan issue” and have decided to cover it as such, meaning they won’t “take sides” over it.
Musk raised $6 billion in a recent funding round for his would-be OpenAI competitor, xAI, whose first product, Grok, is meant to serve as a politically incorrect answer to ChatGPT. In addition to Tesla, SpaceX, and xAI, Musk is founder of brain interface startup Neuralink and tunneling venture Boring Company.
Seems pretty straightforward to me.
In case anybody is wondering why he’s making a big deal out of it.
As to why Musk feels the need to make his own “anti-woke” AI, it’s because he thinks that, at some point in the future, our AI overlords will decide to cull white people to meet “forced diversity quotas.” I’m not kidding.
I bet their lawyers might not think it’s a great idea.
The Internet immediately worked, which is one big difference. The dot com financial bubble has nothing to do with the functionality of the internet.
In this case, there is both a financial bubble and a “product” that doesn’t really work, and which they can’t make any better (as he admits in this article).
It was obvious from day 1 how useful the Internet would be. Email alone was revolutionary. We are still trying to figure out what the real uses for LLMs are. There appear to be some valid use cases outside of creating spam and plagiarizing other people’s work, but it doesn’t appear to be any kind of revolutionary technology.
I’m saying that you can’t use Scotchgard or anything like that.
It’s been a while, but I don’t believe that they were allowed to use cardboard or anything of the sort to prop up or modify the appearance of the product. Instead, they would cook, say, 100 burger patties, go through dozens of heads of lettuce, slice 100 tomatoes, etc., and pick out the perfect pieces to make a burger that looks the way that they want.
The most that they could adulterate the food was to make a slurry with corn starch, water, and food dye that could be applied with a paint brush to make things look juicy, etc. They would use a clothes steamer to make a pizza look just right. Lots of tricks, but it had to be something that you could just pick up and eat, even if you wouldn’t necessarily want to.
I dated a woman who worked in TV ad production. Everything has to be real food.
It seems clear to me that he hates the people that are ruining the tech industry, ripping off customers, and pumping out shitty products for short-term stock pumps, and he takes every opportunity to shit on those people and point out their idiosyncrasies. That’s pretty much every tech CEO these days.
It’s also pretty clear to me that he believes in the promise of the industry, and thinks that workers deserve better than the people that they work for.
They were using machine learning to try to figure out what people were buying, and machine learning has lots of errors until you train it.
Machine Learning, no matter how well trained or advanced, is just doing a make-em-up.
Besides that, in this case the experiment had been going on for years and humans were still doing something like 70% of the work. It was a failure; that’s why Amazon shut it down.
A term created in order to vacuum up VC funding for spurious use cases.
Automatic spam generator.
I mean, the ability to churn out massive amounts of these fake photos with no effort on the part of the user, causing them to pollute real Internet searches (also now “augmented” by ML itself), is definitely AI specific.
Also, colorizing photos is not the same thing as making fake ones.
Lol. It doesn’t do video generation. It just takes existing video and makes it look weird. Image generation is about the same: they just take existing works and smash them together, often in an incoherent way. Half the text generation shit is just done by underpaid people in Kenya and similar places.
There are a few areas where LLMs could be useful, things like trawling large data sets, etc., but every bit of the stuff that is being hyped as “AI” is just spam generators.
Well, yeah, I would agree with that. The author also does a good job of pointing out how that’s bullshit, and gives several examples of ways that the Times is covering the candidates differently, demonstrating their hypocrisy. (They’ve been laundering right wing ideas into mainstream public consciousness for decades now, anyway.)
It is still shocking to see the editor just come out and say, in plain English, that the very concept of democracy is a partisan issue, and that they refuse to weigh in on it.