• 0 Posts
  • 38 Comments
Joined 2 years ago
Cake day: July 8th, 2023

  • Because when US politicians advocate for a single global market and a single global internet, it is with the understanding that US firms and allied parties will dominate the space anyway. When that is no longer the case, they get about as nervous as China did when it built the Great Firewall and cloned every popular Western platform. Now that US/Western dominance is seriously challenged, we are seeing more and more signs of protectionism.

  • Oof. We can get really dismayed at this, or we can just accept that, given his infamy, it’s only natural he finds a way to monetize it. Plenty of influencers produce content that is, on the face of it, of very little value. Granted, not many make a career out of promoting the most toxic aspects of (so-called) masculinity. Even Jordan Peterson has redeeming qualities compared to this. Of course that’s a low bar. A low bar that many men are seemingly happy to clear, and to fork over money for, in the hope of bettering themselves.

  • Unfortunately, unless you are a tiny niche community that is never targeted by spam or idiots (and how common is that, really?), moderators are a necessary evil. You probably don’t hate moderators; you probably hate bad/aggressive/biased/etc. moderators. Or maybe sometimes you are the problem, I don’t know. There is no easy solution. Large forums with no moderation usually become unbearable to most people, and then moderators in turn become unbearable to some people.

    Maybe a trusted AI could do a better job here: give it the community rules and ask it to enforce them objectively, transparently, and dispassionately, unless a certain number of participants complain, in which case it reverses its decision and learns from that. Something like the sketch below.
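
    A minimal sketch of that loop, for what it’s worth. Everything in it is hypothetical: llm_violates() stands in for whatever model call would actually check a post against the rules, and the threshold of five complaints is arbitrary.

    ```python
    # Hypothetical appeal-threshold moderation loop; llm_violates() is a
    # placeholder, not a real API.
    from dataclasses import dataclass

    APPEAL_THRESHOLD = 5  # complaints needed to reverse a removal (arbitrary)

    @dataclass
    class Decision:
        post_id: str
        removed: bool
        rule: str | None = None   # which community rule was cited, if any
        complaints: int = 0

    def llm_violates(post_text: str, rules: list[str]) -> str | None:
        """Placeholder: ask a model which rule, if any, the post breaks."""
        raise NotImplementedError  # swap in a real model call here

    def moderate(post_id: str, post_text: str, rules: list[str]) -> Decision:
        # Enforce the written rules; record the rule applied, for transparency.
        rule = llm_violates(post_text, rules)
        return Decision(post_id, removed=rule is not None, rule=rule)

    def appeal(decision: Decision) -> Decision:
        # Enough complaints from participants reverses the automated decision;
        # the reversed case would then be logged as feedback for the model.
        decision.complaints += 1
        if decision.removed and decision.complaints >= APPEAL_THRESHOLD:
            decision.removed = False
        return decision
    ```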

  • The AI did not “decide” anything. It has no will, and no understanding of the consequences of any particular “decision”. But I guess “probabilistic model produces erroneous output” wouldn’t get as many views. The same point could still be made about not placing too much trust in the output of such models. Let’s stop encouraging this weird anthropomorphizing of LLMs. In fact, we should probably be much more discerning in our use of the term “AI”, because it alludes to a general intelligence akin to human intelligence, with all the paraphernalia of humanity: consciousness, will, emotions, morality, sociality, duplicity, etc.