Apparently they weren’t always a gambling site in the way that you see today, they switched over to this sometime in the last 2 years: https://www.youtube.com/watch?v=bBUkajbg688&t=119s
They do write that, but it also says that your money is deposited in an FDIC insured account.
And, according to the article, they even gave you an Evolve routing and account number:
“When I signed up, they gave me an Evolve routing and account number.”
The mystery of where those funds are hasn’t been solved, despite six months of court-mediated efforts between the four banks involved. That’s mostly because the estate of Andreessen Horowitz-backed Synapse doesn’t have the money to hire an outside firm to perform a full reconciliation of its ledgers, according to Jelena McWilliams, the bankruptcy trustee.
So you’re telling me that a company which manages $42 billion worth of assets doesn’t have the money to hire a firm to track down where all of the money was transferred to? https://en.wikipedia.org/wiki/Andreessen_Horowitz
Ackshully… It should be: “AaaS”.
For me, the article makes it seem like there’s some new announcement that the FBI has put out about a newly discovered vulnerability. Turns out, the announcement is about vulnerabilities we’ve known about for a long time.
I understand, that definitely makes sense then.
This isn’t an article about mistranslations.
This is an article focusing on how asking about US election questions in Spanish will give you answers that are for the wrong country, or just wrong in most cases when compared to asking the same question in English.
One example is that, if someone in Puerto Rico were to ask ChatGPT 4/Claude/Gemini/Llama/Mixtral a US Election question, it would respond with information for Venezuela/Mexico/Spain instead.
Same, I’d say it’s way better than most other transcription tools I’ve used, but it does need to be monitored to catch when it starts going off the rails.
Whisper isn’t a large language model.
It’s a speech-to-text (STT) model.
Rather than making it illegal to use, people need to use these tools responsibly. If any of these companies are using almost any kind of AI/machine learning, they need to include a human in the loop who can verify that it’s working correctly. That way, if it starts hallucinating things that were never said, it can be caught and corrected.
I’ve found that Whisper generally does a better job at translating/transcribing audio than other open source tools out there, so it’s not garbage… But it absolutely is a hazard if you’re trying to rely solely on it for official documents (or legal issues).
As far as promotion goes… It’s open source software, it’s not being sold.
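For anyone who wants to build that kind of human-in-the-loop check, here’s a minimal sketch (my own illustration, not anything from the article) using the open-source openai-whisper package. The file name is a placeholder, and the thresholds are the same heuristics Whisper applies internally, so treat them as starting points rather than gospel:

```python
# Minimal sketch: transcribe audio and flag segments a human should review.
# Assumes `pip install openai-whisper` plus ffmpeg; "interview.mp3" is a placeholder.
import whisper

model = whisper.load_model("large-v3")
# task="translate" instead would output English for foreign-language audio
result = model.transcribe("interview.mp3")

for seg in result["segments"]:
    needs_review = (
        seg["avg_logprob"] < -1.0          # decoder was unsure of the words
        or seg["no_speech_prob"] > 0.6     # likely silence, a common hallucination spot
        or seg["compression_ratio"] > 2.4  # repetitive output, another failure mode
    )
    tag = "REVIEW" if needs_review else "ok"
    print(f"[{tag:6}] {seg['start']:8.1f}s  {seg['text'].strip()}")
```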
As someone who uses Whisper fairly often, it’s obvious that they’ve trained off of a bunch of YouTube videos.
Most of the time it’s very accurate, but there have definitely been a few times in long transcription sessions where it will randomly hallucinate that someone is saying “Don’t forget to like and subscribe!” when nothing like that was said anywhere nearby.
For me, I use Whisper for transcribing/translating audio data. This has helped me to double check claims about a video’s translation (there’s a lot of disinformation going around for topics involving certain countries at war).
Nvidia’s DLSS for gaming.
Different diffusion models for creating quick visual recaps of previous D&D sessions.
Tesseract OCR to quickly copy text out of an image (although I’m currently looking for a better option, since Tesseract is a bit older and, while it gets the text mostly right, there’s still a decent amount it gets wrong; a rough sketch of this is just below the list).
LLMs for brainstorming or in the place of some stack overflow questions when picking up a new programming language.
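Since the Tesseract item above mentions a sketch: this is roughly what that use looks like in Python, assuming pytesseract and Pillow are installed on top of a system Tesseract binary. The file name and the threshold value are placeholders; simple preprocessing like grayscaling and binarizing often cleans up a fair amount of what Tesseract would otherwise misread.

```python
# Rough sketch: pull text out of a screenshot with Tesseract via pytesseract.
# Assumes `pip install pytesseract pillow` and tesseract installed on the system.
from PIL import Image, ImageOps
import pytesseract

img = Image.open("screenshot.png")                   # placeholder file name
img = ImageOps.grayscale(img)
img = img.point(lambda px: 255 if px > 150 else 0)   # crude binarization; tune the threshold

print(pytesseract.image_to_string(img))
```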
I also saw an interesting use case from a redditor:
I had about 80 VHS family home videos that I had converted to digital
I then ran the 1-4 hour videos through WhisperAI Large-v3 transcription and pasted those transcripts into a prompt which had a little bit of background information on my family like where we live and names of everyone who might show up in the videos, and then gave the prompt some examples of how I wanted the file names to look, for example:
1996 Summer - Jane’s birthday party - Joe’s Soccer game - Alaska cruise - Lanikai Beach
And then had Claude write me titles for all the home videos and give me a little word doc to put in each folder which catalogues all the events in each video. It came out so good I have been considering this as a side business
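For anyone wondering what that workflow might look like scripted, here’s a rough, hypothetical sketch rather than the redditor’s actual setup. It assumes the openai-whisper and anthropic Python packages, an ANTHROPIC_API_KEY in the environment, and placeholder file names, family details, and model name; 1-4 hour transcripts may also need to be chunked or summarized before they fit in a prompt.

```python
# Hypothetical sketch of the "transcribe home videos, then ask an LLM for titles" workflow.
# Assumes `pip install openai-whisper anthropic` and ANTHROPIC_API_KEY set in the environment.
import whisper
import anthropic

# Placeholder background info and naming examples for the prompt
FAMILY_CONTEXT = (
    "We live in Hawaii. People who may appear: Jane, Joe, Grandma Sue.\n"
    "Desired title format: 1996 Summer - Jane's birthday party - Joe's soccer game - Alaska cruise"
)

model = whisper.load_model("large-v3")
client = anthropic.Anthropic()

for video in ["tape_01.mp4", "tape_02.mp4"]:      # placeholder file names
    transcript = model.transcribe(video)["text"]  # very long videos may need chunking/summarizing
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",         # placeholder model name
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": (
                f"{FAMILY_CONTEXT}\n\n"
                f"Transcript of one home video:\n{transcript}\n\n"
                "Suggest a file name in the format above, plus a short catalogue "
                "entry listing the events in this video."
            ),
        }],
    )
    print(video, "->", reply.content[0].text)
```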
You’re right, whether it’s AI generated or not doesn’t matter.
This is a copyright infringement matter in which “Fair Use” will become a major factor. https://fairuse.stanford.edu/overview/fair-use/four-factors/
In this case, if the courts rule in favor of Alcon, there’s a danger that this expands how copyright law is judged, and future cases can use that ruling in their favor. It would make it a lot easier for them to win by only proving that someone wanted an image that “looks like” the copyrighted work, even when the image itself wouldn’t normally be held to that level of scrutiny at face value.
You’re right that there are other factors at play here:
The “Hollywood talent pool market generally is less likely to deal with Alcon, or parts of the market may be, if they believe or are confused as to whether, Alcon has an affiliation with Tesla or Musk,” the complaint said.
They are absolutely concerned that Musk is trying to associate his product with Blade Runner, and if the case hinges on the association rather than the image in question, then I don’t see a problem with that.
But it’s very concerning that the image itself seems to be a major factor in this case, specifically that they are accusing “(WBD) of conspiring with Musk and Tesla to steal the image and infringe Alcon’s copyright”.
So you saying that anything AI generated that is similar to something else will get sued for copyright infringement makes no sense, unless you can already do that for hand drawn images.
Yes, you can already sue someone else for copyright infringement with hand-drawn images. The decision comes down to a number of factors (as listed at that fair use link), one of them being how closely your drawing resembles the copyrighted material. Here’s an article about a photographer who successfully sued a painter who plagiarized her work: https://boingboing.net/2024/05/17/photographer-wins-lawsuit-against-alleged-painter-who-plagiarized-her-work.html
@kameecoding@lemmy.world exactly this.
In the U.S. we have what’s known as “legal precedent”. If a court case makes a decision on something, it massively increases the chances that other courts will use that same decision in similar future cases.
The producers think the image was likely generated—“even possibly by Musk himself”—by “asking an AI image generation engine to make ‘an image from the K surveying ruined Las Vegas sequence of Blade Runner 2049,’ or some closely equivalent input direction,” the lawsuit said.
Personally, I hope that this lawsuit fails. I know that the movie industry already follows similar practices to what Musk has done: if a studio goes to a certain musician and the price is too high to include their music in the show, they’ll go to a different artist and ask them to create a song that sounds like the one they originally wanted.
If this lawsuit succeeds, it’s going to open the door for them to sue anyone who makes art that’s remotely close to their copyrighted work. All they will need to do is claim that it “might have been created by AI with a prompt specifying our work” without actually needing any proof beforehand.
According to the complaint, Elon Musk’s image infringes on the copyright of this image from Blade Runner 2049:
You can hear them, but manufacturers had to add external speakers to electric cars to make them louder.
https://en.wikipedia.org/wiki/Electric_vehicle_warning_sounds
Title is slightly misleading. They don’t care which country the battery plants are coming from, they’re just against EV battery plants being set up nearby in general:
Locals fear environmental degradation, and previously spent years opposing South Korean battery plants.
“We fear that CATL will bring pollution and environmental consequences on our land,” Kozma said.
I think you’re misunderstanding what AGI is.
A robot operating on its own does not mean that it has achieved the ability to think for itself and reason at (or beyond) human levels.
It doesn’t sound like anyone here was thinking that AGI had been achieved.
I’m really confused by your comment and it seems like you’re assuming everyone knows what you’re talking about already. Could you provide some context?
What about “Free” are they getting wrong? (I’m assuming you’re talking about Mozilla here.)
What Amazon reviews thing? Who was this “shady dude”, what did he do that was so “shady”, and how does that relate to some Amazon reviews thing if you’re not even sure that he was behind it to begin with?
What does “Aled it up” mean?