5 Comments
Mar 31, 2021 · Liked by Jon Stokes

I find this article extra disappointing because you're consistently so close. I'll work through this with a concrete example, because I think that makes things simpler.

You seem to agree with points 1-4 from your "society has problems" tweet, yet you take issue with people suggesting that the solutions to those problems can't be entirely technical, and that in some cases the cost of perpetuating society's problems today may be greater than the value gained from automating them.

Say we propose to replace cash bail with a risk model, as California recently voted on. There are lots of possible models one could propose. Trivially, you could hold everyone, or let everyone go free. Both of those exhibit statistical bias: the model will misclassify some people. But does either model have any racial bias? Well, a model that lets everyone go free arguably does not (and you're free to argue this, please do!), but a model that holds everyone perpetuates any existing biases in the police force: if a particular group is more likely to be arrested even though it isn't more likely to commit crime, its members will be held in jail more than they should be. A statistical *and* a racial bias.

But note that if the police force isn't racially biased to begin with, the (naive, but still) statistically biased algorithm won't make things worse.

But we don't use those naive algorithms; we use complex ML models. And in those cases, they can encode additional bias beyond what's present in the initial data. Other people and papers explain this far better than I can, but imagine you overpolice a specific group: the data will show that group as more likely to commit crime. If you train a model on that data, you now have the police arresting those people more *and* the model holding more of them: a double dip of racial bias, despite only a single dip of statistical bias.
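To make that "double dip" concrete, here's a minimal, hypothetical simulation (synthetic numbers and invented detection rates, not real criminal-justice data) of how a model trained on arrest records inherits the policing disparity and then reapplies it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups with the *same* underlying offense rate (10%).
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
offends = rng.random(n) < 0.10

# Group B is policed twice as heavily, so its offenses are twice as
# likely to show up as arrests in the historical data.
detection = np.where(group == 1, 0.6, 0.3)
arrested = offends & (rng.random(n) < detection)

# Train a "risk model" on arrest records, using group membership
# (or any proxy for it: zip code, prior contacts, ...) as a feature.
X = group.reshape(-1, 1)
model = LogisticRegression().fit(X, arrested)

print("arrest rate, group A:", arrested[group == 0].mean())   # ~3%
print("arrest rate, group B:", arrested[group == 1].mean())   # ~6%
print("predicted risk, A:", model.predict_proba([[0]])[0, 1])
print("predicted risk, B:", model.predict_proba([[1]])[0, 1])
# The model scores group B as roughly twice as risky even though the
# true offense rates are identical: the policing bias shows up in the
# training data, and the model reproduces it in who gets held.
```

Hold everyone above some risk threshold and group B gets held about twice as often, purely as an artifact of who got policed in the first place.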

So we both start from the assumption that "society has problems." One possible goal, then, is to reduce the statistical bias as close to zero as possible. This, at best, makes things no worse than they are. But even a perfectly statistically unbiased algorithm may increase inequality, if, for example, its use makes some pre-existing socially biased system more efficient.

So the question then is "what are you mending?" Reducing statistical bias is usually, probably, a good thing. But not always: sometimes intentional biases in a system reduce inequality, although this can be controversial along multiple axes. For a non-ML example, consider voting districts gerrymandered to be majority-minority so that minorities have some representation. This requires thinking about intended outcomes: should Congress (or the bag of marbles) represent the mean citizen or the median? Which is more important: geographic representation or racial representation?

"Better data" or "better algorithms" won't solve those problems. And naively applying ML, even "unbiased" ML may exacerbate those problems. You can similarly see this issue in debates around algorithmic approaches to redistricting, and how those may result in less-representative districts in some cases, even if they do reduce partisan gerrymandering.

Further, you seem to take serious issue with many of the ethicists, framing them as rather radical for saying that we should be cautious about how we approach using ML and similar tools. But their position is a fundamentally conservative one. If, right now, at this very moment, ML isn't being used somewhere, it is more radical to add some sort of ML than to change nothing.

Here, you claim that the ethicists don't propose solutions beyond toppling the whole system (where "the whole system" may mean not just ML but all of society or something). But that isn't true. In googles-colosseum, you acknowledge that there's another solution they propose: intentionally modifying the data (or the model) to better represent the way the world "should" be. You claim that this is a non-starter for Google, but it isn't; they absolutely do it in some cases already. Perhaps it isn't possible in all cases, for technical reasons, but then the question of "should we introduce a system that perpetuates inequality we all agree is there?" is a really important one! Maybe the value is still worth it, but maybe it isn't, so we should ask the question (and some of the work by the ethicists you criticize consists of frameworks for addressing exactly these kinds of questions).

On Twitter, you also seemed to have a personal aversion to the "Separation" approach. If I understood your argument correctly, it was that since we can't all agree on the way we want the world to be, the best approach is not to try to influence things, and to just have the models do what they do (https://twitter.com/jonst0kes/status/1374827973677834246).

There are a number of problems with this, though. First of all, someone's gonna need to make choices that impact the system. Something as simple as the choice of L1 vs. L2 norm can affect how a model perpetuates inequality. So ultimately it isn't HRC vs. SBC; it's HRC or SBC or the Google engineers building the model. Someone is evaluating its performance. You can't escape that.
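To illustrate the L1 vs. L2 point, here's a minimal sketch on synthetic data (the "income" feature and the correlated group proxy are invented for illustration): the two penalties distribute weight differently across correlated features, so how much a group proxy influences individual predictions hinges on a seemingly neutral engineering choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# The outcome truly depends only on "income"; the group proxy
# (think zip code) is merely correlated with income.
income = rng.normal(0, 1, n)
proxy = 0.8 * income + rng.normal(0, 0.6, n)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-income))

X = np.column_stack([income, proxy])

l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
l2 = LogisticRegression(penalty="l2", solver="liblinear", C=0.1).fit(X, y)

print("L1 coefficients (income, proxy):", l1.coef_[0])
print("L2 coefficients (income, proxy):", l2.coef_[0])
# L1 tends to concentrate the weight on one of the correlated features,
# while L2 spreads it across both, so the proxy's influence on any
# individual's score changes with the choice of norm.
```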

And you might respond that the Google engineers don't represent "activists with a niche, minority viewpoint", which might be true. But that doesn't mean the model represents the linguistic status quo. Just the opposite in many cases: there's lots of cargo-culting of various meta-parameters (e.g. lists of topics or words to exclude when building the model) that perpetuate inequality (and statistical bias). Even if you consider that good work, and it's only a subset of the activists' work that's objectionable, it's still not clear what you'll end up with. Are you really sure that a model built by a company whose ultimate goal is profit will represent you better than one built by people who aim for it to be socially responsible? Also, I think you almost stumble onto one of the major criticisms of LLMs but carefully avoid it: what if you could have both the HRC and the SBC model, and could pick between them? Then the issue is entirely political: convincing Google which one to use (or to offer both). As is, it's cost-prohibitive to build both.
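On the exclusion-list point, here's a minimal, hypothetical sketch of the mechanism (the blocklist and documents below are invented; real corpus-cleaning lists are much longer): filtering on surface terms removes benign text *about* a group along with the abuse the list was meant to catch, so that group ends up underrepresented in whatever gets trained on the filtered corpus.

```python
# A toy stand-in for the "bad words" lists sometimes used to clean
# web-scraped corpora before model training.
BLOCKLIST = {"sex", "lesbian", "gay"}

documents = [
    "support group for gay teens in rural areas",   # benign, but dropped
    "lesbian couple opens bakery downtown",         # benign, but dropped
    "how to file your taxes online",                # kept
]

kept = [
    doc for doc in documents
    if not (set(doc.lower().split()) & BLOCKLIST)
]
print(kept)   # only the tax article survives the filter
```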

Back to the main point: the thesis of many of the ethicists is that technical fixes to a model won't fix everything. That's very different from how you represent them, which is as being against technical fixes. Beyond that, you claim that they offer no solutions. That's half true: they're careful to say there's no silver bullet. But they do offer tools to improve things, both technical and organizational/social/procedural. You seem to reject many of those non-technical tools not because they're fundamentally flawed, but because of personal aversions to the politics of the people offering them. Framing them as radical end-its, when they're ultimately conservative-in-how-we-apply-and-use-these-tools people who propose solutions you're personally averse to, seems profoundly unfair.

There's also a meta-conversation here: continued focus on only the technical fixes is counterproductive to solving the (again: real, agreed-upon) problem that ML will amplify inequality in some cases. Fixing the bias doesn't fix the inequality. So saying the bias is fixable won't fix "it", where "it" is what pretty much every layperson actually cares about.

author

Josh, thanks for this really fantastic, thorough response. There's a lot to think about in here, and you raise many excellent points. I really appreciate when people push back thoughtfully.

I'm dealing with some "second dose of Moderna" side effects this morning, but as soon as I can I'd like to go through this and reply.

Mar 30, 2023 · Liked by Jon Stokes

Incisive, clear thought and timely observations, most welcome in this time of madness. A+, tweeting all about it; as I said, a banger.

Mar 31, 2021 · Liked by Jon Stokes

Awesome, thorough article. Earned a subscription from me.


Well, consider yourself corrected, then.
