I have way too many things to write about. I’ve already reached the point where I could keep a staff busy with stories, but this is a solo newsletter (for now), so that’ll have to wait. In the meantime, my plan is to regularly give overviews of stories and beats I think are worth looking at in more detail.
Today, I’m looking at the issue of reining in AI via regulation. The regulation of algorithms is an entire reporting beat all to itself, and it has a few main contours that I’ll try to trace in no particular order:
Benchmarking and standards: Before you can regulate, you have to be able to measure. We’re just now beginning to develop the tools to measure how well datasets and models do or don’t fit some kind of policy ideal (there’s a rough sketch of what such a measurement can look like after this list). This work is fairly new and ongoing.
Difficulty and complexity: This 2016 paper proposing an FDA for algorithms (seen via this post) gives a sense of just how complex an undertaking such a thing would be. Like food and drugs, algorithms take many forms and are already everywhere, and there are years of work in just figuring out what you’re even going to regulate, much less how.
The insider/outsider problem: One of the biggest barriers to good financial regulation is the fact that the specialists who are in the best position to understand Big Finance (and therefore regulate it) are all insiders with conflicted relationships where a lot of money is at stake. The same is true in machine learning.
As I describe in more detail in the next section, leading-edge ML is still a fairly small club, and regulators do not and cannot understand it well enough to rein it in. It will take insiders who, after investing significant time and money in developing their own expertise, are willing to dedicate their precious time to either helping regulators or becoming regulators.
AI’s hype problem: AI has been notoriously overhyped in the past, and the general feeling right now even among many who should know better is that it’s still mostly snake oil and not worth worrying about. Everyone who thinks that way is about five years out-of-date, but nonetheless this is a really prevalent attitude. Nobody will get behind a big push to regulate something they take to be a bunch of hype and nonsense.
Impact on innovation: The EU is doing some amount of ML regulation, and there’s red tape and committee oversight that reportedly hinders research and startups; I’ve heard about it but need to learn more. This slows everything down, which is not what you want, because of the next two points.
The race for artificial general intelligence (AGI): We don’t know how close we are to an AGI, or if such a thing is even possible. But if it is, then reaching it may well be one of the most important moments in human history since the discovery of fire. So there’s a ton of pressure to keep pushing as fast as possible and to get there first.
The China factor*: The Chinese AI effort doesn’t have a lot of the constraints we have in the West. They can be as invasive as they like in collecting data on their citizens, so there will never be any shortage of training data. They’re also really good at building large-scale things, quickly. So if an AGI is reachable simply by throwing transistors and data at an existing neural net architecture, then they will find it.
Regulation is a weaker, slower point of control than the private sector. I’ll unpack this idea in the next section because there’s so much that goes into it.
(Regarding the point about China, there could be a kind of AI equivalent of the resource curse going on with China’s dataset free-for-all. Because they have all the data they want about people, in unlimited quantities, they don’t have to work as hard on algorithms and architectures as we do in the West. Instead of working around dataset limitations, they can always just get more data. More on that if and when I can get to it, though.)
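To give a flavor of what the measurement work from the first point looks like in practice, here’s a minimal, hypothetical sketch of checking one model against one policy ideal: demographic parity, i.e. whether the model approves members of two groups at roughly the same rate. The data, group labels, and tolerance below are all made up; real benchmarking efforts involve many such metrics and plenty of argument over which ones matter.

```python
# Hypothetical sketch: one way to "measure a model against a policy ideal."
# The ideal here is demographic parity: the model should approve members of
# group A and group B at roughly the same rate. All data below is invented.

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # model outputs (1 = approve)
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def positive_rate(preds, grps, group):
    """Share of positive predictions within one group."""
    in_group = [p for p, g in zip(preds, grps) if g == group]
    return sum(in_group) / len(in_group)

gap = abs(positive_rate(predictions, groups, "A") -
          positive_rate(predictions, groups, "B"))

TOLERANCE = 0.1  # hypothetical threshold a standards body might set
print(f"Demographic parity gap: {gap:.2f} (tolerance: {TOLERANCE})")
print("Within tolerance" if gap <= TOLERANCE else "Outside tolerance")
```

The point isn’t this particular metric; it’s that “fit to a policy ideal” has to bottom out in numbers like this before anyone can write an enforceable rule about it.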
Not a lot of love for regulation
One of the things I’ve mentioned in a few posts already is how little attention the AI ethics field gives to Washington DC. It’s not that legislative considerations are entirely absent, but that they feature far less prominently in AI ethics material than you might expect.
I think there are a few things going on here, all of which merit further investigation:
First, Congress cannot move at the speed of woke. Progressive language norms change way too rapidly for even a non-gridlocked legislative body to keep up. You can actually see this happening in real-time in the AI ethics field, where a paper that was “woke” in 2016 is now guilty of the sin of “biological essentialism” in 2021.
The answer, then, is entryism — specifically, taking over corporate HR and professional guilds, so you can exert constant pressure on the entire field via Twitter pile-ons, call-outs, cancellations, and the other tools in the activist toolbox. This gets you further, faster than legislation ever could.
The second factor that makes legislation less attractive is the concentration of power in the hands of a few companies. There just aren’t many entities that have the resources to buy and maintain the kind of hardware you need to do cutting-edge machine learning work. Facebook, Google, Amazon, OpenAI, and maybe a handful of others have access to the amount of hardware and data you need to move the needle in ML right now.
This concentration means there are a few critical nodes in the entire system, and if you can capture those nodes, it’s game over. Why would you bother with Washington if you could gain power by just being in the right meetings at about five companies that represent the bulk of leading-edge AI work?
Finally, the guild of researchers doing this work at a high level is still quite small. We’re still in a period of innovation where the right individual and the right access to resources can make a major advance. This is like the early days of software, where the right handful of people came together and invented UNIX, or the WIMP paradigm (windows, icons, menus, pointer).
The AI equivalents of UNIX and WIMP are just now — as in since maybe 2016 or so — being invented. That’s where we’re at, so despite the fact that everyone on Clubhouse is hustling some kind of AI or ML in their bio, the actual club of people who can push things forward is still small.
The banking analogy
This analogy is still a work-in-progress, but I find myself reaching for it every time I talk with someone about this problem of regulating algorithms: the problem of regulating AI/ML is a lot like the problem of regulating the financial system.
The first point of congruence between AI and banking is the aforementioned insider/outsider problem. I already touched on this in the first section, so I won’t repeat that. I’ll just extend that material by noting that right now, the field is in a really exciting place with a lot going on, and the pace of change is rapid.
So, big salaries aside, it will be hard to find true insiders willing to go into regulation, or even to devote part of their time to it. I’m sure there’s no shortage of academics whose knowledge is ~3 years out-of-date who think they’re up-to-speed and ready to talk regulation, but that’s not the same as having real insiders at the helm.
The second way banking and AI look alike to me is in the area of scale and leverage. I have this idea, which I’ve written about previously and would love to develop further at some point, that some technologies are a form of leverage.
Networked computing technologies, for instance, are like a lever in that they act as force multipliers. A few engineers or marketers can shift the behavior of billions of people with the force of a few keystrokes, so cloud tech is a lever in the Archimedean sense (“Give me a lever long enough and a fulcrum on which to place it, and I shall move the world”).
AI is similar, in that it’s a force multiplier that lets a few people in the right position do big things at scale. Examples would be everything from a general commanding a drone army, to an AI-powered hedge fund that moves markets.
Tech is like leverage in the sense of debt, too. It piles on risks of various kinds as it scales, both for the party that’s depending on it and for the larger system itself.
In both finance and tech, it’s possible to find simple numbers (often used by businesses themselves) that you can make rules about, in order to impose constraints. Such simple numbers make effective targets and are difficult to game. For example, as complex as banking is, a bank can be limited in size (and made safer) by a simple minimum requirement on its tier one capital ratio.
Similarly, if I were trying to limit the size of a consumer-facing tech platform, I’d look at the size of the users table. Every tech platform in existence knows exactly how many users it has at any moment, and what state those users are in (active, inactive, churned, etc.). So that’s a number a regulator could focus on if they’re looking at size. I don’t know what simple numbers lurk in the world of machine learning (maybe around some measure of model size), but I’m certain they will emerge in due time.
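To make the “simple numbers” idea a little more concrete, here’s a minimal sketch of the kind of threshold check a regulator could imagine running against two such numbers: active-user count and model parameter count. Everything here is hypothetical: the figures, the caps, and the assumption that these are even the right numbers to target.

```python
# Hypothetical sketch of the "simple numbers" idea: two easily measured
# quantities a regulator could set thresholds on. Figures and caps are
# invented for illustration only.

# 1. Platform size: every platform knows its active-user count exactly.
#    In practice this is a single query against the users table, e.g.
#    SELECT COUNT(*) FROM users WHERE state = 'active';
active_users = 250_000_000          # made-up figure

# 2. Model size: total trainable parameters is a single number every ML
#    team can report (in a framework like PyTorch it would be
#    sum(p.numel() for p in model.parameters())).
model_parameters = 175_000_000_000  # made-up figure

USER_CAP = 100_000_000          # hypothetical regulatory threshold
PARAMETER_CAP = 10_000_000_000  # hypothetical regulatory threshold

for name, value, cap in [("active users", active_users, USER_CAP),
                         ("model parameters", model_parameters, PARAMETER_CAP)]:
    status = "over" if value > cap else "under"
    print(f"{name}: {value:,} ({status} the hypothetical cap of {cap:,})")
```

The appeal is that both numbers are ones the businesses already track for their own purposes, which is exactly what makes them plausible targets for rules.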
Following on all of the above, I’d like to offer the following hypothesis: as a technology, machine learning is safer right now because the barriers to entry are so high (high up-front capex, and rare expertise); if and when that changes, we’re likely to face the AI equivalent of a series of Wildcat banking crises.
Just like in banking, I suspect it’s going to take a major crisis (or a series of them) to create a demand for real regulation.
An SEC for algorithms
I strongly prefer my finance analogy for AI over the standard food-and-drug analogy. So I really think the entire AI regulation discussion should be structured along these lines because all the regulatory lessons we’ll need to learn if we’re going to make this work are in the world of banking, trading, and markets.
Agencies like the FDA and OSHA may be liberal darlings, and the state of our financial markets a complete mess, but I’m nonetheless convinced that markets are a far better model for the fast-moving, competitive world of AI.
Ultimately, AI isn’t a product. It’s a diverse set of technologies that go into products, and into the systems that make the products, and into the systems that make the systems that make the products, and so on. So any attempt to regulate it like a packaged consumer good will fall far short and do more harm than good.
You have to regulate it like the advanced, esoteric, economically critical set of practices and technologies that it increasingly is. And that calls for some kind of finance-style regulation, and not consumer product safety regulation.
Beyond the obvious, standard arguments of slowing down innovation, regulatory capture, etc. (or perhaps as instances of those), what do you think *poor* regulation of AI could lead to? This question could include both what you think are the most likely bad regulations we could see and what the consequences of those would be.