

Adding on to it, the structure/shape of our ears is also unique. So if anyone loses their ear and gets an implant, it takes them some time before they can fully get accustomed to it
Guys listen up, every deepseek model comes with a dedicated chinese spy, who will log all your data and send it back to CCP who will use it to plot the destruction of the western civilization.
Instead we should use Freedom© models from OpenAI (side note if deepseek is so “open” how come they don’t have open in their name huh?) even if OpenAI don’t show their reasoning, they only do this cuz they want to protect us and they stand for our values.
They cost 100x more only because they are fighting for our Freedom and Freedom doesn’t come cheap, Freedom doesn’t have a price, Freedom requires our sacrifice.
OpenAI and the like aren’t going to get into trouble anytime soon. They already provide their latest tech to the US gov and military. OpenAI is like the goose that lays golden eggs; they’d need to fuck up really, really badly to face any consequences.
The US companies already scraped the data while they could. If anything, scraping is far, far more difficult for everyone now for technical reasons (rate limits, API lockdowns, anti-bot measures).
Most of the new models are trained on synthetic data, higher-quality data, or with RLHF. The reason DeepSeek is able to perform is likely that LLMs are still very new and there is plenty of low-hanging fruit. It’s no longer just about the data; we hit that limit quite some time ago.
These are very loose terms. Pretty much every major website saves your IP address when you create an account (to prevent abuse and detect spam). And you can get location info from an IP address. Hence the first condition would be true for all of those websites.
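As a rough illustration, here’s a minimal Python sketch of how trivially a stored IP maps to a coarse location. It assumes the third-party `geoip2` package and a downloaded MaxMind GeoLite2-City database; the file path and example IP below are placeholders, not anything from a real service:

```python
# Minimal sketch: a stored account-creation IP is enough for a coarse location.
# Assumes the `geoip2` package and a downloaded MaxMind GeoLite2-City database;
# the path and example IP are placeholders.
import geoip2.database

def coarse_location(ip: str, db_path: str = "GeoLite2-City.mmdb") -> str:
    with geoip2.database.Reader(db_path) as reader:
        rec = reader.city(ip)  # raises AddressNotFoundError for unknown IPs
        return f"{rec.city.name}, {rec.country.iso_code}"

# print(coarse_location("203.0.113.5"))  # e.g. "Berlin, DE"
```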
Next, any website/app that builds a recommendation system will save user interactions to build the “algorithm”. So every social media platform with an algorithm falls into this category.
With enough bending of terminology, we might be able to prove that Lemmy also collects user data (although it would be really hard, cuz the algorithm here is based on upvotes and post time, iirc). And the “large amount” part is just legal filler.
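For what it’s worth, ranking by upvotes and post time usually looks something like this. This is a toy sketch in Python of that general idea, not Lemmy’s actual formula:

```python
# Toy "hot" ranking from upvotes and post age -- an illustrative formula,
# not Lemmy's actual implementation.
import math
from datetime import datetime, timedelta, timezone

def hot_rank(upvotes: int, downvotes: int, published: datetime,
             gravity: float = 1.8) -> float:
    score = max(upvotes - downvotes, 0)
    hours_old = (datetime.now(timezone.utc) - published).total_seconds() / 3600
    # log dampens runaway scores; age in the denominator sinks old posts
    return math.log(score + 2) / (hours_old + 2) ** gravity

now = datetime.now(timezone.utc)
print(hot_rank(5, 0, now - timedelta(hours=1)))    # fresh post, few votes
print(hot_rank(500, 20, now - timedelta(days=2)))  # popular but two days old
```

Note that the only inputs are votes and timestamps; no behavioral profile is needed, which is the point of the comparison.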
I didn’t say TikTok is a bastion of free speech. They only allow this in the Palestinian case because being against Palestine gains them nothing. We can criticise one party without making the other one some kind of “moral hero” of the story.
TikTok got banned not for peddling “Chinese propaganda” but for not peddling the US one.
All the major tech companies in the US take measures to ensure content deemed unworthy by the government never becomes mainstream or goes viral.
This is done under the pretense of stopping “hate speech” or “terroristic propaganda” but often includes things like pro-Palestinian content or class-struggle content (like the Luigi Mangione stuff).
TikTok was bold enough not to do that by default, cuz they wanted to be asked to do it first; that would have become a huge scandal about how the US suppresses free speech. And the US gov didn’t want to ask for exactly that reason. So they decided to ban it instead.
Remember, talks about this “law” started right when TikTok suddenly became a host for pro-Palestinian voices. We should ask ourselves: how is it that 60% of Americans want the government to stop arms sales to Israel, yet that 60% never shows up on the big social media platforms? But on other platforms, like here on Lemmy and on TikTok, pro-Palestinian voices are the majority.
For further reading, listen to employees fired from big US tech companies for voicing their concerns over Palestine, or read Meta’s new terms and conditions, especially the section on “dangerous organizations and individuals”.
There is also CryptPad, I think. Might be what you are looking for.
I mean, the AfD is also pro-Russia AND has neo-Nazis in its ranks.
They are at 20% in the polls just before the German election.
I don’t think many people have as many issues with Russia as they should, or they just don’t care.
You don’t know. Maybe they have a cat named “not wrong” and it’s being addressed here.
What you are talking about is machine learning, which is called AI. What the post is talking about is LLMs, which are also called AI.
By definition, AI means anything that exhibits intelligent behavior and is not natural in origin.
So when you use GMaps to find the shortest path between two points, that’s also AI (specifically, a search problem, typically solved with algorithms like Dijkstra’s or A*).
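To make that concrete, here’s a minimal sketch of that kind of route search: Dijkstra’s algorithm on a made-up toy road graph (names and distances are invented):

```python
# Dijkstra's algorithm on a made-up toy road graph: the same kind of search
# a maps app runs to pick a route. Node names and distances are invented.
import heapq

def shortest_path(graph, start, goal):
    # graph: {node: [(neighbor, distance), ...]}
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
    return float("inf"), []

roads = {
    "home": [("cafe", 2), ("park", 5)],
    "cafe": [("office", 6)],
    "park": [("office", 1)],
}
print(shortest_path(roads, "home", "office"))  # (6, ['home', 'park', 'office'])
```

No learning happens here at all, yet it sits squarely inside the textbook definition of AI.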
It is pointless to argue/discuss AI if nobody even knows which type they are specifically talking about.
We are all disagreeing on the naming, not the functionality. It used to be the case that names in tech were descriptive: just by reading one out loud you could understand the tech (e.g. SQL, OOP, etc.).
“Serverless” is a marketing term. A better name would be “server-agnostic deployment”, or any of many other ways to describe it.
The fact is, this name was created by the people selling it, not the people using it. And I am sure the idea is not new, but the “serverless” name tries to make it seem like a comparatively recent thing so people will buy it more.
This is what happens when a technical field gets infiltrated by business bros. Remember how OpenAI was talking about AGI helping humanity or something? Their definition of AGI was leaked recently; it’s “making $100 billion in profit”.
That’s it, that’s what will help humanity achieve its true potential: OpenAI making $100B in profits.
Why is everyone enjoying a model that is open and very cheap instead of paying an exorbitant amount to a US-based “open” AI company? And that’s even after the freedom-loving US government put sanctions on hardware exports.
Do people really hate “open” and “free” AI so much?
Meta has also released many top-tier models to the open-source community. To say “Meta only opposes OpenAI cuz they wanna create a service of their own” is quite frankly uninformed.
Meta is the reason so many researchers are able to work and make AI accessible to everyday people. Without the Llama models, so much research would not have been possible, cuz OpenAI never releases their stuff, under the guise of “safety”.
OpenAI wants to monopolize and charge us whatever they want. And going for-profit was part of their plan from the beginning. If Meta had not released their top-tier models for free, OpenAI would have had a complete monopoly.
Also, saying a for-profit structure would allow them to have more funds is like saying having a gun allows a robber to have more funds.
The funds will come from consumers; for-profit just means they will have an easier time ripping people off without too much scrutiny.
Altman is a business bro, that’s very on brand for him. I just don’t know why people even listen to his shit.
There’s even hard proof that while he was telling the public they would release their research for the public good, in meetings with other stakeholders he was saying the complete opposite.
He lies to literally everyone, but people still hold him in high regard.
I know it’s just a meme, but they solve different enough problems. A self-driving car can easily switch back to manual driving, meaning you can self-drive for long stretches and take over yourself in the city or in hectic areas. This basically works around the issue of self-driving tech not being smart/reliable enough, which, as you probably also agree, is still quite far from perfect.
I really thought by making intentional mistakes in my comment people would be able to see the OBVIOUS sarcasm, but I guess not…
This is a by-product of modern society (maybe late-stage capitalism). We need to be sold a “solution” to a problem. Reducing consumption is not something that can easily be sold, hence these carbon-capture and plastic-recycling “solutions”.
Unless someone can make money off of it, reducing emissions is going to be difficult.
Imagine someone writes a malicious extension; now, with this, they would also have access to your entire chat history.