• 0 Posts
  • 29 Comments
Joined 1 year ago
Cake day: June 19th, 2023


  • I think his intense commitment to getting Trump elected makes more sense when you consider this article.

    His enormous wealth is largely stored in the form of Tesla stock, and that stock has been valued based on the belief that Tesla isn’t a car company but a robotaxi service currently selling the hardware to finance the software development. The value – and his wealth – can persist indefinitely as long as investors continue to accept that premise, no matter how long the payoff is delayed. But if something tangibly undermines that premise, Musk could conceivably lose the majority of his wealth overnight.

    The National Highway Traffic Safety Administration is probably the greatest threat to his wealth. He doesn’t worry about competitors or protestors or Twitter users or advertisers. They’re all just petty nuisances. But the federal regulator over roads… that is his proverbial killer snail. And I think fully capturing the entire federal regulatory state is his strategy to permanently confine that snail.

    More than anything else, I think that’s what is motivating his radical embrace of fascism.



  • This is so exciting. I worked in a lab where we were trying to do this, so I was very aware of what a gold rush we were in. I’m so glad to see that it’s actually happening.

    This is truly a watershed moment in science. This is going to mark a major turning point in cellular medicine from theory to commonplace care. Eventually, this will end the pharma industry’s insulin cash cow.

    But it’s even bigger than that. Because once we can engineer cells that produce a natural product, the next step is to engineer cells that produce synthetic medicines. Antidepressants, birth control, hormones, weight loss drugs, boner pills… The frontier is huge, lucrative, financially disruptive for pharma companies and life changing for patients. This is a big moment in history, and we all need to be fighting harder than ever to end for-profit healthcare. Otherwise we’re going to end up with subscription licenses to our own bodies.


  • This article doesn’t really answer most of my questions.

    What subjects does the AI cover? Do they do all their learning independently? Does AI compose the entire lesson plan? What is the software platform? Who developed it? Is this just an LLM or is there more to it? How are students assessed? How long has the school been around, and what is their reputation? What is the fundamental goal of their approach?

    Overall, this sounds quite dumb. Just incredibly and transparently stupid. It’s like if they insisted that all learning would be done on the blockchain. I’m very open-minded, but I don’t understand what the student’s experience will be. Maybe they’ll learn in the same way one could learn by browsing Wikipedia for 7 hours a day. But will they enjoy it? Will it help them find career fulfillment, build confidence, or learn social skills? It just sounds so much like that Willy Wonka experience scam, but applied to an expensive private school instead of a pop-up attraction.





  • I don’t think it’s secret. A lot of OpenAI’s business strategy is to warn of the danger of their own project as a means of hyping it.

    OpenAI, despite having produced a pretty novel product, doesn’t really have a sound business model. LLMs are actually expensive to run. The energy and processing are not cheap, and it’s really not clear that they produce something of value. It’s a cool party trick, but a lot of the use cases just aren’t cost effective at this point. That makes their innovation hard to commercialize. So OpenAI promotes itself the way online clickbait games do.

    You know the ones that are like, ‘WARNING: This game is so sexy it is ADDICTIVE! Do NOT play our game if you don’t want to CUM TOO HARD!’

    That’s OpenAI’s marketing strategy.





  • Haha a bike.

    I hold out hope, actually, that as the right-to-repair movement continues to grow, repairability and control will become more common consumer interests, in the same way that vehicle safety wasn’t something people thought about when buying a car before the ’70s and is now one of the main factors in the decision.

    Once people start caring – and again, I believe this is the direction we’re heading – it will become something manufacturers have to design for.


  • This is modestly interesting. My brother worked there before they had layoffs about two years ago, and had a generally favorable opinion of the company and leadership.

    Fundamentally, while I think RJ seems like a sound businessman and technologist, and I like the company’s taste well enough, I will never be able to reconcile his views with mine. He very openly views cars as computers and software and services that happen to move you around, while I want a car to be a machine that leaves me with as minimal a relationship with the manufacturer as possible after I acquire the product.

    Still, I wish them luck.


  • This is actually a misrepresentation of the law.

    The law bans school districts from requiring teachers to report if students start experimenting with different pronouns.

    Teachers can still report this to parents. There is nothing barring them from doing so. The only change is that they aren’t policed by their school district.

    Technically, this is actually the classically conservative position!

    This whole thing is extremely stupid. Parents should take care of their own shit. You want to know what your kid is thinking? Talk to them. Demanding that the trusted adults in their lives who DO pay attention to them narc for you is a weak-ass move for parents who run to the nanny state to help them raise their kids because they don’t know how to manage their own damn family life.


  • I think maybe execs and investors might feel it’s all the same, but if you’re a project manager for cloud infrastructure for enterprise services, or you’ve been working for years on releasing a new component of Bing search that you think is a real gamechanger, and some muckety-muck at the top says, ‘Oh, don’t worry about that anymore: a property manager that’s owned by a private equity partner of one of our big investors wants the chatbot that schedules apartment viewings in Huntsville to be more flirty, so go massage the prompts to make it convincingly laugh at bad jokes,’ some of those folks are liable to start grumbling that this isn’t the role they were pitched when they took this job.




  • Why do you guarantee that? It seems obviously wrong, on a technical level.

    The point I’m making is that even if we take it as a given that a shrewd enough AI could correctly distinguish sex at birth – which I think is obviously impossible, given the appearances of many cis women and the nature of statistical prediction – you’d still need a training data set.

    If the dataset has any erroneous input, that corrupts the model’s ability to do the one thing this whole exercise depends on: finding passing transwomen. Why would anyone expect that a training set of hundreds of thousands of supposedly cis women wouldn’t have a few transwomen in it?
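    To make the training-set point concrete, here’s a minimal synthetic sketch (pure toy data via scikit-learn, nothing to do with whatever this app actually runs): train a classifier to flag a rare class, quietly mislabel a fraction of that class as the majority class in the training data, and compare recall on a clean test set.

    ```python
    # Toy sketch: label noise in the "clean" class vs. detection of the rare class.
    # All data here is synthetic; the numbers are illustrative, not a claim about any real system.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(0)

    # Synthetic features; class 1 is the rare class the classifier is supposed to detect (~1%).
    X, y = make_classification(
        n_samples=20_000, n_features=20, n_informative=5,
        weights=[0.99, 0.01], flip_y=0.0, random_state=0,
    )
    split = 15_000
    X_train, y_train = X[:split], y[:split]
    X_test, y_test = X[split:], y[split:]

    def recall_with_label_noise(mislabel_frac: float) -> float:
        """Train with a fraction of rare-class examples mislabeled as the majority class."""
        y_noisy = y_train.copy()
        rare_idx = np.flatnonzero(y_noisy == 1)
        flip = rng.choice(rare_idx, size=int(len(rare_idx) * mislabel_frac), replace=False)
        y_noisy[flip] = 0  # contaminate the supposedly clean class with the class we want to find
        model = LogisticRegression(max_iter=1000, class_weight="balanced")
        model.fit(X_train, y_noisy)
        return recall_score(y_test, model.predict(X_test))

    for frac in (0.0, 0.2, 0.5):
        print(f"mislabeled {frac:.0%} of rare class -> test recall {recall_with_label_noise(frac):.2f}")
    ```

    The more of the rare class that ends up hiding in the “clean” training labels, the worse the model gets at the very task it exists for.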


  • This is a great point.

    The technology that excludes transwomen from the app is the clear warning that the app is populated exclusively by transphobes. It’s obviously wildly dangerous for a transwoman to be on the app.

    The notion that AI is going to clock them is absurd AI hype. There’s no reason to expect AI to be capable of this kind of discernment, and that’s assuming you even had a training set. Where in the absolute fuck would someone find a training set like that?

    Edit: I didn’t read the article. It seems it’s a lesbian dating app. Well, probably less dangerous for transwomen, but still not technically sound.