• 0 Posts
  • 40 Comments
Joined 3 months ago
Cake day: September 6th, 2024

  • Full self-driving should only be implemented when the system is good enough to completely take over all driving functions, and it should only be available in vehicles without steering wheels. The Tesla solution of offering “self driving” while relying on the copout of requiring constant user attention and feedback is ridiculous. Only when a system is truly capable of driving 100% autonomously, at a level statistically far better than a human, should any kind of self-driving be allowed on the road. Systems like Tesla’s FSD officially require you to always be ready to intervene at a moment’s notice. Tesla knows the system isn’t ready for independent use yet, so it requires that manual input. But of course this encourages disengaged driving; no one actually pays attention to the road closely enough to intervene at a moment’s notice. Tesla’s FSD imitates true self-driving, but it pawns the liability off on drivers by requiring them to pay attention at all times. This should be illegal. Beyond basic lane-assistance technology, no self-driving tech should be allowed except in vehicles without steering wheels. If your AI can’t truly perform better than a human, it’s better for humans to be the only ones actively driving the vehicle.

    This also solves the civil liability problem. Tesla’s current system has a dubious liability structure designed to pawn liability off on the driver. But if there isn’t even a steering wheel in the car, then the liability must fall entirely on the vehicle manufacturer. They are, after all, 100% responsible for the algorithm that controls the vehicle, and you should ultimately have legal liability for the algorithms you create. Is your company not confident enough in its self-driving tech to assume full legal liability for the actions of your vehicles? No? Then your tech isn’t good enough yet. There can be a process for car companies to subcontract out the payment of legal claims against them. They can hire State Farm or whoever to handle insurance claims. But ultimately, legal liability will fall on the company.

    This also avoids criminal liability. If you only allow full self-driving in vehicles without steering wheels, there is zero doubt about who is in control of the car. There isn’t a driver anymore, only passengers. Even if you’re sitting in the seat that would normally be the driver’s seat, it doesn’t matter; legally, you are just a passenger. You can be as tired, distracted, drunk, or high as you like, and you’re not getting any criminal liability for driving the vehicle. There is such a clear bright line - there is literally no steering wheel - that it is absolutely undeniable that you have zero control over the vehicle.

    This would actually work under the same theory as existing drunk-driving law. People can get ticketed for drunk driving for sleeping in their cars. Even if the cops never see you driving, you can get charged with drunk driving if they find you in a position where you could drive drunk. So if you have your keys on you while sleeping drunk in a parked car, you can get charged. But not having a steering wheel at all would be the equivalent of not having the keys to the vehicle - you are literally incapable of operating it. And if you are not capable of operating it, you cannot be criminally liable for any crime relating to its operation.


  • I think we should indict Sam Altman on two sets of charges:

    1. A set of securities fraud charges.

    2. 8 billion counts of criminal reckless endangerment.

    He’s out on podcasts constantly saying that OpenAI is near superintelligent AGI, that there’s a good chance they won’t be able to control it, and that human survival is at risk. How is gambling with human extinction not a massive act of planetary-scale criminal reckless endangerment?

    So either he is putting the entire planet at risk, or he is lying through his teeth about how far along OpenAI is. If he’s telling the truth, he’s endangering us all. If he’s lying, then he’s committing securities fraud in an attempt to defraud shareholders. Either way, he should be in prison. I say we indict him for both simultaneously and let the courts sort it out.



  • “What is he trying to hide‽” I dunno, man. Maybe he recognizes that there’s a bunch of unhinged weirdos who are hellbent on stalking “Satoshi,” and he doesn’t want to be harassed?

    Forget being harassed. Honestly, being kidnapped is a serious concern. Whoever or whatever group Satoshi is, it’s estimated he, she, or they own something like a million bitcoins.

    Kidnapping is normally a pretty poor choice of crime for a criminal gang to undertake. It had its heyday back in the early 20th century. But as the FBI really got going, and we got better at tracking down people across state lines and internationally, kidnapping became much more difficult to pull off. Kidnapping someone - physically abducting them - is the easy part. But actually sending their family a ransom letter and collecting the money in a way that can’t be traced back to you? That’s a whole different matter. Actually getting the ransom money and somehow getting it into a form you can spend, all without getting caught? That’s nearly impossible in this day and age.

    But someone with a million Bitcoins? It’s entirely possible that everything needed to access those funds is entirely within that one person’s skull. Either the private keys themselves, or some way to access or generate them.

    Someone with that amount of Bitcoin is actually at incredible risk of kidnapping by an organized crime outfit. We’re talking about $65 billion worth of assets that can be obtained by kidnapping just one person and torturing them until they give up their private keys. Once you have the keys, the coins can be transferred to another account and washed through numerous transactions until they’re untraceable. And the poor bastard who gets kidnapped for this likely never makes it out alive.

    And even if they keep their keys in their home instead of in their head? Now they’re at risk of break-in, or being held hostage during a nighttime break-in.

    Hell, even just being suspected of being Satoshi would be incredibly dangerous. That’s an even more horrifying scenario. Imagine an organized crime outfit thinks you’re Satoshi, they’re incorrect, and they abduct you and torture you, demanding you give them something you are simply incapable of providing…




  • Wouldn’t just keeping your phone in a metal box prevent it from communicating with anything? Keep your phone in a metal box and only take it out when you need it, and only in a location that isn’t sensitive. Or hell, just make a little sleeve out of aluminum foil. Literally just wrapping your phone in aluminum foil should prevent it from connecting to anything. A tinfoil hat won’t serve as an effective Faraday cage for your brain, but fully wrapping your phone in aluminum foil should do the job. Even better, since it’s a phone, such a foil sleeve is easy to test: build it, put your phone in it, and try texting and calling it. If it’s fully surrounded by conductive material, the phone should be completely incapable of sending or receiving signals.




  • Reminds me of a story an old friend of mine loved to tell.

    In her undergrad, she majored in classics and archaeology. One summer she was working at a dig on the island of Cyprus. One day she needed to go into town for some supplies. She walks into the store, and suddenly she realizes: “Fuck. I don’t speak a word of modern Greek. How am I going to talk to the shopkeeper in this tiny town in rural Cyprus?”

    She decides to just do the best she can, and she tries to talk to him in the only Greek she knows…Ancient Greek.

    The shopkeeper gets befuddled, then looks her dead in the eye and says, in English, “Lady, no one has talked like that here in 2000 years!”



  • The median household income in Norway is 590,000 NOK. The median total housing expense is about 158,000 NOK. Thus the median Norwegian household is spending about 27% of their income on housing. This is pretty comparable to the US, where the median figure is 26%.

    This is the median across the whole population, and of course, for younger people that amount should be higher. Really it seems that the US and Norway are about the same when it comes to housing affordability.

    It gets worse, however, if you look at actual home prices and not just monthly payments. The average home price in Norway is about 5,000,000 NOK. That means the average home costs about 8.5x the median household income. In the US, the median home price is about $430,000, while the median household income is about $77.5k. The median home in the US thus costs about 5.5x the median household income.
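
    For what it’s worth, here’s a quick sketch of that arithmetic, using only the rough figures quoted above (they’re approximations, not official statistics):

    ```python
    # Housing affordability back-of-the-envelope, using the figures cited in this comment.

    # Norway (NOK, per year)
    norway_income = 590_000        # median household income
    norway_housing_cost = 158_000  # median total housing expense
    norway_home_price = 5_000_000  # average home price

    # US (USD, per year)
    us_income = 77_500             # median household income
    us_housing_share = 0.26        # median share of income spent on housing
    us_home_price = 430_000        # median home price

    print(f"Norway: {norway_housing_cost / norway_income:.0%} of income on housing")
    print(f"Norway: home price is {norway_home_price / norway_income:.1f}x income")
    print(f"US:     {us_housing_share:.0%} of income on housing")
    print(f"US:     home price is {us_home_price / us_income:.1f}x income")
    ```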

    Homes in the US are cheaper than in Norway, while US incomes are higher. The median household income in Norway is the equivalent of $54,000. Also, the median home in the US is larger than that of Norway.

    This is somewhat offset by the fact that US consumers have to pay more out of pocket for healthcare, childcare, and commuting than their Norwegian counterparts do. But really, it shows that even after the subsidies, Norway is no more affordable for new parents than the US is. If anything, it’s probably more affordable in the US. Yes, you can always move to a rural area in Norway to get cheaper housing, but you can do the same in the US. People live in those bigger, more expensive cities because they provide better job opportunities and better salaries.

    My real point is that we can’t point to the generous welfare states of the Nordic countries as proof that low birthrates can’t be solved with financial incentives. A lot of people like to point to countries with generous welfare states and say, “look, even countries like Norway, which heavily subsidize healthcare and childcare and have generous parental leave, still have low birth rates!” Typically, people who make these arguments want to argue for restricting women’s reproductive autonomy.

    But it really does come down to housing. And in both Norway and the US, the cost of homeownership is getting way beyond what people of childbearing age can afford. That is the fundamental problem. There’s something very deep and instinctive about the places we live in. Having a truly stable place to live, ideally a place you own and can easily afford, is the single greatest way to encourage people of childbearing years to have children. People want to provide a stable environment for children to grow up in. They don’t want to live in a place where their landlord could kick them out on a whim. They don’t want to be reliant on a government-subsidized apartment that could be taken away tomorrow if eligibility rules change. People want either a very reliable and affordable rental or, ideally, a home they own outright and can’t be evicted from. That is the kind of stability people seek before they have children.


  • I don’t buy this. What will really happen is that the value of anything AI can produce will drop to near zero, thus freeing up money to spend on things only humans can provide. And if you think AI can literally do anything a human can? Well, at that point, using that AI should be incredibly illegal, as you’re just enslaving a digital person.

    Maybe we’ll end up with a weird economy where everyone works as a teacher, caretaker, mentor, life coach, fitness instructor, physician, or in any other job where people would really prefer to interact with a human.

    Would you let your child be taught by an AI teacher? Not worried about what type of sociopathy that might introduce? No, there are many jobs, specifically those around the growth, development, maintenance, and improvement of human lives, that people will always prefer to have done by actual humans. Humans can do the human work, and we can slough the drudgery off onto the machines.



  • Something you should keep in mind is that being a monopoly is not illegal, and it never has been. If you make a great widget and, through honest competition, corner that widget market, that’s perfectly legal.

    What ISN’T legal is using your market power to engage in anti-competitive behavior. It’s not illegal for Apple to dominate the phone market. It is likely illegal for Apple to use its dominance of the phone market to prohibit competing app stores from being installed on its phones. That is Apple operating in two distinct businesses - phone manufacturing and software retailing. Apple is using its market dominance as a phone manufacturer to gain an unfair advantage as a software retailer.

    This is a pretty damning violation of federal antitrust law.




  • Or in a pinch: just run big-ass space heaters. Seriously. It’s a stupid way to burn off excess power, but it’s dirt simple and cheap. Just have a big array of resistive heaters out in an empty field somewhere with a high fence around it. Need to burn off an extra GW? Run it through massive heating elements and burn it off. It’s a stupid waste of good energy, but as an emergency backup, it’s not a bad option. It’s trivially easy to dispose of huge amounts of excess electricity if you just run the mother-of-all space heaters. Run your stupid giant resistive heater at the bottom of a lake for even better effect.
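
    Just to put rough numbers on how big “big-ass” has to be, here’s a quick back-of-the-envelope sketch. The 1.5 kW heater size and the 25 kV feeder voltage are purely assumed for illustration; only the 1 GW figure comes from the comment above.

    ```python
    # Rough sizing for dumping 1 GW into resistive heaters.
    # The heater wattage and feeder voltage are assumptions, not real specs.

    excess_power_w = 1e9        # 1 GW of surplus generation to burn off
    heater_w = 1_500            # a typical plug-in space heater (assumed)
    feeder_voltage_v = 25_000   # assumed medium-voltage feeder

    # How many household-sized heaters would it take?
    print(f"Equivalent 1.5 kW space heaters: {excess_power_w / heater_w:,.0f}")

    # For one giant resistive element: P = V^2 / R  ->  R = V^2 / P
    resistance_ohm = feeder_voltage_v ** 2 / excess_power_w
    current_a = excess_power_w / feeder_voltage_v
    print(f"Required resistance at {feeder_voltage_v:,} V: {resistance_ohm:.3f} ohms")
    print(f"Current drawn: {current_a:,.0f} A")
    ```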



  • I say we indict Sam Altman for both securities fraud and 8 billion counts of reckless endangerment. He and other AI boosters are running around shouting that AGI is just around the corner, that OpenAI is creating it, and that there is a very good chance we won’t be able to control it and that it will kill us all. Well, the way I see it, there are only two possibilities:

    1. He’s right. In which case, OpenAI is literally endangering all of humanity by its very operation. In that case, the logical thing to do would be for the rest of us to arrest everyone at OpenAI, shove them in a deep hole and never let them see the light of day again, and burn all their research and work to ashes. When someone says, “superintelligent AI cannot be stopped!” I say, “you sure about that? Because it’s humans that are making it. And humans aren’t bullet-proof.”

    2. He’s lying. This is much more likely. In that case, he is guilty of fraud. He’s falsely making claims his company has no ability to achieve, and he is taking in billions in investor money based on these lies.

    He’s either a conman, or a man so dangerous he should literally be thrown in the darkest hole we can find for the rest of his life.

    And no, I REALLY don’t buy the argument that, if the tech allows it, superintelligent AI is just some inevitable thing we can’t choose to stop. The proposed methods to create it all rely on giant data centers that consume gigawatts of energy to run. You’re not hiding that kind of infrastructure. If it turns out superintelligence really is possible, we pass a global treaty to ban it and simply shoot anyone who attempts to create it. I’m sorry, but if you are legitimately threatening the survival of the entire species, I have zero qualms about putting you in the ground. We don’t let people build nuclear reactors in their basements. And if this tech really is that capable and that dangerous, it should be regulated as strongly as nuclear weapons. If OpenAI really is trying to build a super-AGI, they should be treated no differently than a terrorist group attempting to build its own nuclear weapon.

    But anyway, I say we just indict him on both charges. Charge Sam Altman with both securities fraud and 8 billion counts of reckless endangerment. Let the courts figure out which one he is guilty of, because it’s definitely one or the other.