No, the FTC is not about to wade into the AI bias wars
Also included: another followup on the EU AI draft regulations
Certain corners of Twitter got a big jolt yesterday courtesy of this bombshell blog post from FTC attorney Elisa Jillson:
The post was quite strongly worded, and there was a lot of excitement around the following sections in particular:
Section 5 of the FTC Act. The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms.
Fair Credit Reporting Act. The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.
Equal Credit Opportunity Act. The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.
...
Hold yourself accountable – or be ready for the FTC to do it for you. As we’ve noted, it’s important to hold yourself accountable for your algorithm’s performance. Our recommendations for transparency and independence can help you do just that. But keep in mind that if you don’t hold yourself accountable, the FTC may do it for you. For example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA. Whether caused by a biased algorithm or by human misconduct of the more prosaic variety, the FTC takes allegations of credit discrimination very seriously, as its recent action against Bronx Honda demonstrates.
So the jig is up, right? Companies are about to start getting hauled into court and eating big fines over things like face recognition software that doesn’t work well on black faces or healthcare algorithms that reinforce racial disparities in care?
No, no such thing is likely to happen. Here’s the short version of why:
The FTC can indeed probably regulate a lot of the AI issues it mentions in this post, at least theoretically. It can also block mega-mergers, cut monopolies down to size, and do a whole bunch of other stuff that it hasn’t actually done for decades.
The FTC is understaffed and underfunded, and for 20 years has had no real political will to even carry out its core trust-busting mission. So the idea that this atrophied agency will suddenly wade into the fraught, murky waters of the fast-moving AI algo wars and bust some heads... it just seems really unlikely.
We don’t have basic standards, benchmarks, and measurement tools for evaluating AI/ML systems for “bias,” nor do we even have agreed-upon notions of “fairness” to work with in most of the applicable areas.
The blog post itself is confused in some important ways, particularly around transparency and auditing.
On the point that some of the issues covered in the post do fall under the FTC’s purview, see How Artificial Intelligence Can Comply with the Federal Trade Commission Act, an article recommended to me by a source who follows this area of law:
There is general agreement that the authority of the Federal Trade Commission Act (the “Act”) is broad enough to govern algorithmic decision-making and other forms of artificial intelligence (“AI”).[1] Section 5(a) of the Act prohibits “unfair or deceptive acts or practices in or affecting commerce” as unlawful.[2] The Federal Trade Commission (the “FTC”) is authorized to challenge such acts or practices through administrative adjudication and to promulgate regulations to address unfair or deceptive practices that occur widely by multiple parties in the market.[3]
The FTC has a department that focuses on algorithmic transparency, the Office of Technology Research and Investigation, and has requested public comment on and scheduled hearings about algorithmic decision-making and AI.[4]
The article goes on to give advice about complying with the FTC Act, but the take-home here is that the agency has a pretty broad purview in this area, especially around the issues of advertising claims and credit availability that the blog post focuses on.
As for whether the agency will actually jump in and do a bunch of rule-making in this area: that seems pretty unlikely. I corresponded with one lawyer, who didn’t wish to be named and who practices in the area of consumer product regulation, and he pointed me to a number of resources on this topic.
He also said, “Most federal regulatory bodies are so badly underfunded that overzealous enforcement on novel areas of law is not particularly likely,” and suggested that this blog post is extremely unlikely to be a prelude to a string of AI-related enforcement actions.
This tracks with antitrust expert Matt Stoller’s skepticism on Twitter:
Maybe the Biden admin will rebuild the FTC and beef it up. But given the agency’s track record even recently, it’s really hard to credit the idea that we’re about to see a bunch of action on machine learning.
The point above about the lack of basic standards and metrics for evaluating ML models is one I’ve raised again and again. Even internally to large companies with plenty of resources, like Facebook and Google, efforts to objectively evaluate things like datasets and model outputs for different ideas of representation and fairness are still in their infancy.
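To make that concrete, here’s a minimal sketch (in Python, with data and group labels I’ve invented purely for illustration) of what a “disaggregated evaluation” looks like: the same metric computed separately for each group rather than once in aggregate. The mechanics are trivial; the hard, unresolved questions are which groups, which metric, and how big a gap counts as “bias.”

```python
# A minimal sketch of disaggregated evaluation: compute the same metric per
# demographic group, since a single aggregate number can hide per-group gaps.
# All records below are hypothetical toy data, not from any real system.
from collections import defaultdict

# (group, true_label, predicted_label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 0, 0),
]

per_group = defaultdict(lambda: {"correct": 0, "total": 0})
for group, truth, pred in records:
    per_group[group]["total"] += 1
    per_group[group]["correct"] += int(truth == pred)

overall = sum(s["correct"] for s in per_group.values()) / len(records)
print(f"overall accuracy: {overall:.2f}")        # 0.62 -- looks fine in aggregate
for group, stats in sorted(per_group.items()):
    print(f"{group}: {stats['correct'] / stats['total']:.2f}")  # 0.75 vs 0.50
```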
Mostly what gets cited in discussions of such benchmarking is the original “Model Cards” effort from Timnit Gebru and her co-authors, and projects based on it. Digging through the list of papers that cite the “Model Cards” paper doesn’t turn up more than a handful of efforts building something similar, and they’re all quite recent.
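For readers who haven’t seen one, here’s a rough sketch of the kind of documentation the “Model Cards” paper proposes releasing alongside a model. The field names loosely follow the paper’s sections; every value is a placeholder I’ve made up, not a description of any real system.

```python
# Placeholder model card, structured roughly along the sections proposed in
# "Model Cards for Model Reporting." All values are invented for illustration.
model_card = {
    "model_details": "Hypothetical risk-scoring model, v1.2, trained March 2021",
    "intended_use": "Ranking outreach candidates; not for eligibility decisions",
    "factors": ["age bracket", "sex", "geography"],   # groups to disaggregate by
    "metrics": ["accuracy", "false positive rate per group"],
    "evaluation_data": "Description of the held-out evaluation set",
    "training_data": "Description of the training set and how it was collected",
    "quantitative_analyses": {"accuracy_overall": 0.81,
                              "accuracy_by_group": {"A": 0.83, "B": 0.74}},
    "ethical_considerations": "Known proxy variables, known failure modes",
    "caveats_and_recommendations": "Not validated outside the original context",
}
```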
The deeper problem is that even in the ML literature there are so many different (and in some cases mutually incompatible) definitions of “fairness” that many papers just punt on picking one entirely. This paper from 2018 provides a good overview, but not only has no consensus developed since then, if anything “fairness” is falling out of vogue and being replaced with concepts like “justice” and “equity” (here’s one example, but there are plenty more).
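To see why the definitions can collide, here’s a toy illustration with invented numbers: when two groups have different base rates of the outcome being predicted, a classifier with identical error rates for both groups necessarily selects them at different rates, and vice versa, unless its predictions are no better than a coin flip.

```python
# Toy illustration (invented numbers) of why two common "fairness" definitions
# can be mutually incompatible when groups have different base rates.
#
#   "Equalized odds"-style fairness: equal TPR and FPR across groups.
#   "Demographic parity": equal selection rates across groups.
#
# Selection rate = TPR * base_rate + FPR * (1 - base_rate), so with shared
# TPR/FPR and different base rates, selection rates must differ -- unless
# TPR == FPR, i.e. the classifier ignores the true label entirely.

base_rate = {"group_a": 0.50, "group_b": 0.20}  # hypothetical P(y = 1) per group
tpr, fpr = 0.80, 0.10                            # same error profile for both

for group, p in base_rate.items():
    selection_rate = tpr * p + fpr * (1 - p)
    print(f"{group}: selection rate = {selection_rate:.2f}")
# group_a: 0.45, group_b: 0.24 -- equal error rates, unequal selection rates.
# A regulator would have to decide which of these, if either, counts as "biased."
```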
Indeed, a popular paper making the rounds of AI ethics twitter suggests jettisoning the ideal of an abstract, technical benchmark for fairness entirely:
Rethinking ethics is about undoing previous and current injustices to society’s most minoritized and empowering the underserved and systematically disadvantaged. This entails not devising ways to “debias” datasets or derive abstract “fairness” metrics but zooming out and looking at the bigger picture. Relational ethics encourages us to examine fundamental questions and unstated assumptions. This includes interrogating asymmetrical and hierarchical power dynamics, deeply ingrained social and structural inequalities, and assumptions regarding knowledge, justice, and technology itself. Ethical practice, especially with regard to algorithmic predictions of social outcomes, requires a fundamental rethinking of justice, fairness, and ethics above and beyond technical solutions.
By the time the FTC comes up with a working definition of algorithmic fairness in any one domain, the idea of “algorithmic fairness” will probably seem pretty quaint and the activists — who’ve moved on to some new goal — will argue against it.
Finally, let’s look at the post’s text around transparency:
Embrace transparency and independence. Who discovered the racial bias in the healthcare algorithm described at PrivacyCon 2020 and later published in Science? Independent researchers spotted it by examining data provided by a large academic hospital. In other words, it was due to the transparency of that hospital and the independence of the researchers that the bias came to light. As your company develops and uses AI, think about ways to embrace transparency and independence – for example, by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection.
There are quite a few critical ML applications where “opening your data or source code to outside inspection” is not really an option, at least not any time soon. I’m thinking of areas like drug development and industrial processes that rely heavily on trade secrets and proprietary datasets, locked down under layers of lawyering.
We’d need third-party auditing services willing to operate under the same legal encumbrances (NDAs, trade-secret protections, and the rest of the lawyering) in order to look at these kinds of things. That would be expensive, and if the company being audited is the one paying for the audit, you know which way the results will go.
I also want to flag this sentence: “it was due to the transparency of that hospital and the independence of the researchers that the bias came to light.” To me, this sounds like a reason to keep your models as locked down as possible, so that no one can poke at them from the outside and discover bias — indeed, a lot of models are used in places where it’s not really necessary for anyone outside the company to know they even exist.
If the FTC were to actually start trying to rein in AI/ML (not going to happen, but bear with me for argument’s sake), we’d see companies that could feasibly keep even the existence of their models a secret do so, and companies that couldn’t keep them a secret pay third-party auditing firms for a paper trail of stamps of approval. The late Arthur Andersen LLP did, in fact, audit Enron regularly.
Followup on EU regulation
This thread is a good roundup of reactions to the EU draft legislation I wrote about in a previous post:
What’s interesting about this is the complete lack of attention given to the new rules’ impact on innovation, and to the dynamic I pointed out where big companies will be fine with this but startups won’t be able to navigate it.
Not to go all culture war, but this indifference to power dynamics when it comes to arcane rule sets smells familiar to me. Specifically, I’m thinking of the ways that various rapidly changing “woke” social norms (the rules around which terms to use vs. which have been deemed “problematic,” how to pronounce certain names properly, which cultural products are approved vs. disapproved, and so on) are all only accessible to those with the education, internet connectivity, and time to invest in staying on top of these trends. As critics have often pointed out, these rules serve as status markers and sorting mechanisms for the upper classes, because only those with the right resources can navigate them.
I think this is the same dynamic at play in this AI ruleset and with GDPR — the rules separate and segregate and exclude based on who has the resources to navigate them and who does not.
Is there any proposal from the EU circulating to provide some sort of publicly funded legal services for startups to stay in compliance with these rules?