If this works, why not have an AI automatically investigate judges and government officials? The AI should indicate, for example, whether a judge needs to recuse themselves… that came up several times this year. And for politicians, the AI would tell us if they are lying or if they are allowing or taking part in corruption. For this purpose, they should wear a microphone and camera the entire time they are government officials. Don't like it? Too bad, that's the law. Right?
why not have an AI automatically investigate judges and government officials
Because the power is supposed to originate with said judges/officials. The AI tool is a means of justifying their decisions, not a means of exerting their authority. If you give the AI power over the judges/officials, why would they want to participate in that system? If they were proper social climbers, they would instead aim to be CEOs of AI companies.
Don't they always tell us:
'you have nothing to worry about if you have nothing to hide'
Uh huh… how's that feel now, government employee?
Yep.
Y'all like surveillance so much, let's put all government employees under a camera all the time. Of all the places I find cameras offensive, that one bothers me the least.
I sure hope you get your daily dose of enjoying people's misery, watching the substitute teacher crying in the teachers' lounge.
Cameras in a teachers' lounge would be ridiculous but, in principle, cameras in classrooms make a lot of sense. Teachers are public officials who exercise power over others, and as such they need to be accountable for their actions. Cameras only seem mean because teachers are treated so badly in other ways.
Sure thing, buddy. They exert such power that they can barely make teens stay put for ten minutes without fucking around with their phones. So much power.
You don’t know what power is.
Let’s not confuse ourselves here. The opposite of one evil is not necessarily a good. Police reviewing their own footage, investigating themselves: bad. Unreliable AI beholden to corporate interests and shareholders: also bad.
It’s fine to not understand what “AI” is and how it works, but you should avoid making statements that highlight that lack of understanding.
If you feel someone's knowledge is lacking, then explaining it may convince them, or others reading your post.
Speaking of a broad category of useful technologies as inherently bad is a dead giveaway that someone doesn’t know what they’re talking about.
I have a sneaking suspicion that if police in places like America start using AI to review bodycam footage, they'll just "pay" someone to train their AI so that it'll always say the police officer was in the right when killing innocent civilians, and the footage never gets flagged. That, or do something equally shady and suspicious.
These algorithms already have a comical bias towards the folks contracting their use.
Case in point, the UK Home Office recently contracted with an AI firm to rapidly parse through large backlogs of digital information.
The Guardian has uncovered evidence that some of the tools being used have the potential to produce discriminatory results, such as:
An algorithm used by the Department for Work and Pensions (DWP) which an MP believes mistakenly led to dozens of people having their benefits removed.
A facial recognition tool used by the Metropolitan police has been found to make more mistakes recognising black faces than white ones under certain settings.
An algorithm used by the Home Office to flag up sham marriages which has been disproportionately selecting people of certain nationalities.
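To make that last mechanism concrete, here's a quick hypothetical sketch in Python (none of these numbers come from the Guardian reporting; they're invented for illustration): even when the genuine rate of sham cases is identical across groups, a flagging model whose false-positive rate is worse for one group will end up "disproportionately selecting" that group.

```python
# Hypothetical sketch (NOT the Home Office system): a screening model
# with identical base rates across groups but uneven false-positive
# rates still flags one group far more often than the other.
import random

random.seed(42)

GROUPS = {
    # group: (population screened, false-positive rate of the model)
    "group_A": (10_000, 0.02),
    "group_B": (10_000, 0.10),  # same base rate, but worse model accuracy
}
TRUE_POSITIVE_RATE = 0.90
BASE_RATE = 0.01  # genuine cases are equally rare in both groups

for group, (n, false_positive_rate) in GROUPS.items():
    flagged = 0
    for _ in range(n):
        is_genuine_case = random.random() < BASE_RATE
        if is_genuine_case:
            flagged += random.random() < TRUE_POSITIVE_RATE
        else:
            flagged += random.random() < false_positive_rate
    print(f"{group}: {flagged} of {n} flagged ({100 * flagged / n:.1f}%)")
```

Running this, group_B gets flagged at roughly five times the rate of group_A, despite genuine cases being equally rare in both. The disparity comes entirely from the model's error rates, which is exactly the pattern the Guardian describes.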
Monopoly was a lie. You're never going to get that Bank Error In Your Favor. It doesn't happen. The House (or the Home Office, in this case) always wins when these digital tools are employed, because the money for the tool is predicated on these agencies clipping benefits and extorting additional fines from the public at large.