Sunday, July 29, 2018

Something like this model of value differences

Three posts about how we can deal with being different levels of rational at different times, or how we can (and should!) be ok with having heuristics while not being ruled by them: one, two, three.
(They take a long time to read; I totally understand if you want to just tune out now.)

Something like the model in the last post makes a lot of sense. It's a ladder, top to bottom:
- explicitly model the system
- feel the emotion
- endorse the value based on its essence (reified or not)

As we learn more, we move up the ladder, going from "well, there is this magical thing called 'justice' and it is good" to "I don't know, I just feel bad when you wrong me, and somewhat better when you are punished for it" to "here's how we can most effectively prevent crimes."

Or, "fruits and vegetables are healthy" -> "I feel better when I eat them" -> "I need all these 1000 vitamins and minerals in just these proportions, so eating these foods will satisfy them."

(Notice that, in the "healthiness" case, we haven't actually fully succeeded at the "explicitly model the system" phase yet! Soylent tried, but they're idiots.)

I think it's straightforward to say that we should strive to be as far up the ladder as we can be, while being humble about where we are on the ladder.

(Also, most arguments about politics or whatever would be solved if people were just more specific.)

I'll hold off on the question of "do people have real value differences?" because I don't know. I was just sitting on these and wanted to post something about them, but don't have a fully formed thought yet.

Monday, July 02, 2018

Algorithmic fairness is "philosophy: hard mode"

Got thinking about "fairness" today.

Most of the conversation I've heard has been about pointing out things that are obviously and grossly unfair, like sentencing algorithms that give black people longer sentences, or the DHS/ICE thing that recommended detention for 100% of people.

That's a good place to start! For these dumb cases, we can probably say "stop that." Sure. But it doesn't give us a lot to go on in terms of figuring out algorithmic fairness as a field.
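To make "obviously and grossly unfair" concrete, here's a minimal sketch of the kind of audit people run on these systems: check whether a risk tool's false positive rate differs by group. Everything below (the records, the groups) is invented for illustration; real audits are far more careful.

    # Toy audit: does a risk tool's false positive rate differ by group?
    # All records below are invented for illustration.
    from collections import defaultdict

    # (group, predicted_high_risk, actually_reoffended) -- fake data
    records = [
        ("A", True,  False), ("A", True,  True),  ("A", False, False),
        ("A", True,  False), ("B", False, False), ("B", True,  True),
        ("B", False, False), ("B", False, True),  ("A", True,  False),
        ("B", False, False),
    ]

    # False positive rate per group: flagged as high-risk among the
    # people who did NOT reoffend.
    flagged, negatives = defaultdict(int), defaultdict(int)
    for group, predicted_high, reoffended in records:
        if not reoffended:
            negatives[group] += 1
            if predicted_high:
                flagged[group] += 1

    for group in sorted(negatives):
        print(f"group {group}: false positive rate = "
              f"{flagged[group] / negatives[group]:.2f}")

On this fake data, group A's false positive rate is 0.75 and group B's is 0.00 - the "stop that" kind of result. The hard-mode part starts right after that: when base rates differ between groups, you generally can't equalize false positive rates, false negative rates, and calibration all at once, so you have to pick which "fairness" you mean.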

Algorithmic fairness didn't use to be a problem. For the old-fashioned analogues of a lot of things that are getting algorithmized today, you would just accept a little inaccuracy, and that was fine. If you had a little warehouse and you were trying to hire workers, you'd try to hire the best workers, and then as long as they were working hard, and you shipped as many things as you needed, that was probably fine. You didn't have to think too hard about what basis you were hiring workers on. "Well, honest hard workers, that's all!"

But if you're Amazon and you're trying to stock your modern warehouse, every 1% efficiency boost is 1% more profit, so if you can fire Slightly-Less-Efficient Steve and hire Efficient Earl, company-wide, you'll make another $100m or something. And you can monitor them like never before - and tweak your scanner-machine algorithm to squeeze out every last second!

So now, if you're Amazon, you have to think about the weird questions: what do you actually want from your workers? And how are you going to tweak your algorithms to optimize for that? And if you're regulators or organized labor, how do you tweak your laws to force Amazon not to optimize for the wrong thing?
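A tiny sketch of why "what do you actually want from your workers?" is the load-bearing question: the ranking code is trivial, and the entire argument hides in the one function that defines the score. All names and numbers here are made up.

    # Toy worker ranking: the algorithm is trivial; the fight is over
    # which score() you pick. All numbers are invented.
    workers = [
        # (name, picks_per_hour, error_rate, injuries_per_year)
        ("Earl",  120, 0.02, 1.5),
        ("Steve", 114, 0.01, 0.2),
    ]

    def score(picks, errors, injuries):
        # The contested line. Pure throughput:
        return picks
        # ...or throughput penalized by quality and safety, e.g.:
        # return picks * (1 - errors) - 20 * injuries

    ranked = sorted(workers, key=lambda w: score(*w[1:]), reverse=True)
    for name, picks, errors, injuries in ranked:
        print(name, score(picks, errors, injuries))

Under the throughput-only score, Steve is at the bottom; under the penalized one, Earl is. Nothing in the code tells you which objective is the right one to want.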

Turns out, we never even solved this! We never agreed on what is fair/just/Right to ask from workers; we just kinda outlawed the worst abuses, settled on some defaults like the 40-hour work week, and hoped "people being decent" would do the rest. (In the case of labor, Reagan and the anti-union turn of the last couple decades apparently decided "let the companies optimize for whatever they want, it'll be fine!")

Take the same argument and turn it on pricing models, or prison sentencing, or ad targeting. We never solved the underlying problem; we just kludged it along to a point where nobody with enough power argued too terribly much. Now we're trying to algorithmize stuff, and we're realizing all these gaps exist.

I guess this is why people are working on this. But I think it's under-resourced if it's being treated as just an HCI problem (or "just a ____ problem", where ____ is any one field). We're prying up a couple of rotten floorboards, and we're going to discover our whole ethical foundation is not really as strong as we'd hoped.
(Or maybe, instead of "we're going to discover", I mean "legal scholars and philosophers already know, but techies are now discovering.")