FFS this is a terrible idea. What the actual fuck https://gizmodo.com/googles-ai-will-help-decide-whether-unemployed-workers-get-benefits-2000496215
From the article: “There’s no AI [written decisions] that are going out without having human interaction and that human review,” Sewell said. “We can get decisions out quicker so that it actually helps the claimant.”
You absolutely know that this will turn into "Computer says no." We've seen time and time again that people just accept the system's outputs. See Horizon, Robodebt, and other scandals for details. This will just be the next one.
@CatherineFlick They mean Google's AI will make up reasons to deny benefits.
@xi_timpin yep, with all of the bias baked in from 4chan and reddit :/
@CatherineFlick I'd not realised it was so sophisticated - more likely you're claiming => you're not eligible.
Just like today.
@CatherineFlick What could go wrong?!
@d absolutely nothing, nothing at all. Nothing to see here. Move along.
@CatherineFlick Haven't some insurers started doing something similar?
And if you ask "why was I denied" the answer you get is fundamentally just "ask again and hope the AI outputs a different result"
@tek_dmn it's the new "computer says no" :/
@CatherineFlick
One major problem with LLMs: GIGO (garbage in, garbage out), coupled with blind faith in the output, creates major issues and threats.
@CatherineFlick
I suggest you thoroughly research Australia's scandalous #Robodebt to see how drastically this can go wrong.
Too much to link here, please Google.
@RHW oh yes, as an Aussie I'm well aware of the Robodebt scandal! I think I mentioned this in one of my posts too. That and Horizon… we overrate the outputs of systems even when we know they have problems!
@CatherineFlick It would be nice if these ignorant pennyrubbing technocrat ghouls would take the time to read Weizenbaum.
@einarwh yes! And you just know that the “human in the loop” will be some poor precarious worker on minimum wage with very little training “because the computer does the work”.
@CatherineFlick I mean how would the human break out of the loop, stop the loop, say the loop is all wrong? The human in the loop is just a hostage.
@einarwh @CatherineFlick I will not bring up "Ironies of Automation" again.
I will instead bring up Richard Cook's "Those responsible have been sacked", a lovely paper I recommend.
They are not a hostage. They are our sacrifice to the machine gods so we can sleep at night. If something goes wrong, we can sacrifice that bad apple and forget everything about the sins we committed. We have cleansed ourselves.
@einarwh @CatherineFlick another term I like to use is "blame fuse".
Blaming the machine or some higher part of the org would be costly. We would have to change them, which would cost a lot, have wide-ranging impact, take effort, and possibly make powerful people look like fools. We might even have to pay compensation for torts.
So instead you install a blame fuse. The blame comes into the org, the fuse blows: the human is fired, you replace them, and everything keeps working like before, but the blame was absorbed.
@einarwh @CatherineFlick ofc this would call into question the ethics of designing "human in the loop" automation, as it looks like it would be quite unethical seen through that lens
@einarwh @CatherineFlick you two should know that seeing this float past in my timeline the other day has led to me spending far too much time making this slide for a conference talk I'm doing tomorrow
@prehensile @einarwh that is beautiful!
@CatherineFlick OK there is a human review but ...
"Any lack of accuracy concerns the lawyers with Nevada Legal Services. If the AI appeals system generates a hallucination that influences a referee’s decision, it not only means the decision could be wrong it could also undermine the claimant’s ability to appeal that wrong decision in a civil court case.
“In cases that involve questions of fact, the district court cannot substitute its own judgment for the judgment of the appeal referee,” said Elizabeth Carmona, a senior attorney with Nevada Legal Services, so if a referee makes a decision based on a hallucinated fact, a court may not be able to overturn it.
In a system where a generative AI model issues recommendations that are then reviewed and edited by a human, it could be difficult for state officials or a court to pinpoint where and why an error originated, said Matthew Dahl, a Yale University doctoral student who co-authored the study on accuracy in legal research AI systems. “These models are so complex that it’s not easy to take a snapshot of their decision making at a particular point in time so you can interrogate it later.”"
[from the article]
@marjolica yay, even better than I thought too
@CatherineFlick Truly a hellscape dystopia we're living in. Maybe they'll get rid of judges and juries too. Replace those with AI...
@CatherineFlick To be fair, 40 years ago my colleague Peter Mott and I wrote an #AI system to help adjudicate social security benefits, and it generated very good explanations of its decisions which lay people could understand.
But it was able to do this because it made its decisions through a rigorous system of formal logic, not by throwing large volumes of text at a neural network and expecting magic to happen.
https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-0394.1987.tb00133.x
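To give a flavour of the approach, here's a toy sketch only (not our actual 1987 system; the rules and field names below are invented purely for illustration):

    # Toy rule-based adjudicator: each rule has a name, so the explanation
    # is just a readout of which rules passed or failed for the claimant.
    RULES = [
        ("claimant is of working age",
         lambda c: 16 <= c["age"] <= 65),
        ("claimant has paid sufficient contributions",
         lambda c: c["contribution_weeks"] >= 26),
        ("claimant is not in full-time work",
         lambda c: c["hours_per_week"] < 16),
    ]

    def adjudicate(claimant):
        # Apply every rule, recording the outcome of each one.
        explanation = []
        eligible = True
        for description, test in RULES:
            passed = test(claimant)
            explanation.append(("PASS" if passed else "FAIL") + ": " + description)
            eligible = eligible and passed
        return ("eligible" if eligible else "not eligible"), explanation

    decision, reasons = adjudicate(
        {"age": 42, "contribution_weeks": 30, "hours_per_week": 10})
    print(decision)
    for line in reasons:
        print(" -", line)

Because every conclusion comes from applying named rules to stated facts, the explanation falls straight out of the reasoning trace; there is nothing for the system to hallucinate.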
@simon_brooke is there by any chance a free version of the paper available?
@tzz I've certainly got paper copies I can photocopy. I think I have a PDF -- I'm running a search just now.