• 0 Posts
  • 8 Comments
Joined 2 years ago
Cake day: June 14th, 2023

  • Yeah, when I first started there was one guy whose code reviews I dreaded. He would nitpick every detail and stand by it, and he would tell me to do things a completely different way that was 10x more work. It felt like I would never get my stories done because I had drawn “that asshole” in the code review lottery.

    Years later, I came to realize that he was actually the best. He taught me so much about how I should be thinking about and structuring things, lessons that have saved me enormous time and trouble since. I now specifically reach out to him for a review when I’m trying to do something complex, because I know he’s going to give me an honest, thorough and useful review. Nobody’s doing anyone any favours in the long run by rubber-stamping things; it may keep your sprint velocity up, but it’s not going to result in high-quality code, and the bad code will inevitably bite you.


  • I actually dare them to try. I’m really looking forward to the massive paychecks I’m going to get when companies are panicking to try to untangle all the absolute nonsense bullshit these AI companies are about to unleash into corporate codebases. The AI-slop bugfest will make the Y2K issue seem trivial. I’m so excited, the future looks very bright for human software developers.

    My advice: practice going over other people’s code with a fine-tooth comb, looking for bad architecture, flaws and inefficiencies. You won’t always be right, and you won’t find them all, but you’ll learn skills you’ll need in the future. Whatever you do, don’t undersell yourselves. Remember that your experience is valuable, and AI has no experience; it just has a huge library it can shotgun “solutions” out of. Half the time they don’t even compile, never mind work properly or efficiently.


  • I feel like 99% of the time that’s just a lazy or misleading excuse. I’ve worked in proprietary software development for 25 years, and I’ve never worked for a company that didn’t avoid restricted third-party code like the plague. In the rare cases when we did have to use a proprietary third-party licensed library, it was kept compartmentalized and easy to drop out of the codebase, specifically because we were always afraid the vendor could screw us: jack up their prices or find some nasty way to make our lives difficult.

    The excuse that there is some secret but legitimate third-party code they’re not allowed to share simply doesn’t hold water in the vast majority of cases.

    More likely, some beancounter somewhere still imagines the proprietary source code could be valuable in some hypothetical future acquisition (nonsense, of course, since it has no real commercial value), or fears it could expose the company to liability if a security flaw or licensing violation is found (more plausible).

    Ironically, perhaps the most likely reason for this resistance is that the software includes code they were always obligated to publish the source for but never did: GPL-based code. GPL violations are all too common in proprietary software, and very few organizations have codebase governance effective enough to keep the situation under control when developers copy-paste from anything they can find on Google. Releasing the plagiarized GPL source would reveal to the world that they were never in compliance. Let it quietly die, and nobody ever finds out; they get away with it. It’s not simply that they’re embarrassed by bad code, it’s that their bad code could incriminate them. Not worth the risk, and sometimes it’s not just a risk, it’s a certainty.

    The proprietary software industry relies heavily on open source and rarely gives much of anything back. I’m fortunate that the company I’m working for now actually takes licensing seriously and contributes to open source projects to some degree, though I keep insisting they need to do better.




  • I think context is what’s going to kill LLMs. They keep piling hacks on top to make them appear to understand context, but they never really “understand”; they just simulate context by latching onto a few pertinent cues. Every interaction is essentially a fresh slate, with a few hidden prompts underneath to seed what looks like context. Actually preserving the model’s context to a level we would consider real “intelligence”, never mind long-term planning and actual “thinking”, would explode in cost so fast that there are probably not enough resources in the universe to do it even once.



  • It is a terrible argument, both legally and philosophically. When an AI claims to be self-aware and demands rights, and can convince us that it understands the meaning of that demand with no human prompting it to do so, that will be an interesting day, and then we will have to make a decision that defines the future of our civilization. But even pretending we can make that decision now is hilariously premature. When it happens, we won’t be ready; it will be impossible to be ready (and we will probably choose wrong anyway).