Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.

  • 0 Posts
  • 624 Comments
Joined 2 years ago
Cake day: March 3rd, 2024






  • I could see you not reacting well to the gift and them being upset, but then it turned into something more than that. They made the mistake of giving you something you claim is well known you don’t like. You held your line and, rather than letting it sit for a bit, insisted it had to go. Now you’re both mad/upset over a gift. Doesn’t make sense, does it? Even more so if the object isn’t worth much even new. Who is hurt more by this? You’re confused by their reaction, but were you hurt by the act of giving, even though the gift was unwanted? The core question to ask yourself is why it became an argument, and whether it was worth it. It doesn’t even matter who was right.




  • selectivity based on probability and not weighing on evidence

    I don’t follow this, but an LLM’s whole “world” is basically the prompt it’s fed. It can “weigh” that, but then how does one choose what’s in the prompt?

    Some describe it with the analogy of an autocompleter with a very big database. LLMs are more complex than that, but the idea holds: when the model looks at the prompt and the context of the conversation, it chooses the best-matching words to fulfill that prompt. My point was that the best word or phrase completion isn’t necessarily the best answer, or even a right one. It’s just the most probable continuation given the huge training data. If that data is crap, the answers are crap. Having Wikipedia as the source, and presumably the only source, is better than pulling from many other places on the internet, but that doesn’t guarantee the answers that pop up will always be correct, or the best of the possible answers. They’re just the most likely based on the data.
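    The “autocompleter with a very big database” analogy can be sketched in a few lines. This is a toy bigram model over a made-up corpus (nothing like a real LLM, which works on learned token probabilities at enormous scale), but the selection principle is the same: return whatever the data makes most likely, right or not.

    ```python
    from collections import Counter, defaultdict

    # Hypothetical tiny "training database": the corpus and word choices
    # here are invented for illustration.
    corpus = "the sky is blue the sky is blue the sky is green".split()

    # Count which word follows which in the training text.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def complete(word):
        # Always pick the single most frequent successor seen in the data:
        # the most probable continuation, not the most correct one.
        return follows[word].most_common(1)[0][0]

    print(complete("is"))  # "blue" wins 2-to-1 over "green"
    ```

    If the corpus had said “green” more often, the completer would answer “green” just as confidently, which is the whole point about crap data in, crap answers out.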

    It would be different if it were AGI, because by definition it could find the best data based on the data itself, not text probability, and could look at anything connected, including the discussion behind the article, and judge how solid the information is for the prompt in question. We don’t have that yet. Maybe we will, maybe we won’t, for any number of reasons.




  • An LLM with a curated source is a lot better than the other major ones, but it still has the issue of selectivity based on probability and not weighing on evidence (unless it does that, which would be huge). That matters because people are naturally gullible and believe the first thing they read, especially if it’s presented as though “someone” has validated it for them.

    But the good part is that both DDG and Firefox made it obvious and easy to disable the AI.






  • You’re right about it being different. That’s why the argument about driving in snow doesn’t hold up. Driving in snow that stays crunchy *is* easy. Northerners who have that, as well as plowing equipment, think it’s easy because, for them, it is.

    The temperatures are another thing. They can keep them. Not a fan of negative numbers, regardless of which scale is used. Definitely not F, nope.


  • Or it just stays in the conditions where sleet is more likely, which builds up and freezes into a sheet. It doesn’t matter what tires you have, whether you have 4-wheel drive, where you’re from, or how much experience you have driving in snow: if there’s a lot of ice on the road, you will hit some slick spots, and however sure you are that you’re immune to physics, the ice will put that to the test.