How To Regulate AI, If You Must
The rules are definitely coming, so let's make sure they lead to a future we want.
AI is both extremely powerful and entirely new to the human experience. Its power means that we are definitely going to make rules about it, and its novelty means those rules will initially be of the “fighting the last war” variety and we will mostly regret them.
While we do not get to pick whether or not AI rules will exist (a certainty) or whether our first, clumsy stab at them will be a backward-looking, misguided net negative for humanity (also certain), the news isn’t all bad. In fact, we’re in a moment of incredible opportunity that comes around rarely in human history: those of us building in and writing about AI right now get to set the terms of the unfolding multi-generational debate over what this new thing is and what it should be.
But as much as I’d like to ramble on about LLMs as a type of can-opener problem, and to explore what it would look like to develop a new companion discipline to hermeneutics aimed at theorizing about text generation, the rule-making around AI has already started in earnest. I am, by and large, not a fan of the people making the rules, and I am not expecting good results.
⚔️ So this post is aimed at people who, like me, are eyeing most of the would-be AI rule-makers with extreme suspicion and a sense that they are up to no good. Those of us who are aligned on the answers to some key questions around technology, society, and human flourishing must immediately start talking about how we can wrest control of the rule-making process from the safety-industrial complex that is already dominating it.
Who is and isn’t making the AI laws
Notice how in the opener to this post, I said “rules” and not “laws.” In the network era, there are many more effective ways to enact rules that govern the lives of billions than the old paradigm of governments passing laws. Moderation rules, terms of service, and the private arrangements that structure network architectures are examples of rules that touch more people on a visible, day-to-day level than most laws on the books. The more generalized term here is governance, a term that covers all the different kinds of rules at different layers of the stack.
So as I said, we’re going to have rules about AI, but many of those rules may not take the form of laws passed by legacy nation-states. Still, I want to focus narrowly on the “laws” part of the picture, because I think this is the type of AI governance that’s in the most danger of going seriously sideways in short order.
🤦‍♂️ Why do I think that our legal process is about to mangle this whole AI thing? Two reasons:
Who is working on the problem of AI governance right now
Who is not working on the problem of AI governance right now
Who is working on AI governance
🚔 Per my contacts in policy circles and what I observe in my own network, the burgeoning AI policy debate is presently dominated by the same safety-industrial complex that has come to dominate platform governance conversations in the social media era.
I’ll say more about this group’s tactics below in the section on social engineering, but my point here is that this safety-industrial complex was already fully mature and dug in at large companies, universities, and policy shops when AI cropped up as an apparently adjacent issue into which they could all seamlessly expand their “watchdog” franchise. And expand it they have, with the result that everyone (like myself) who is opposed to this safetyist network will have to scramble to catch up if we want to save AI from it.
In the US, my impression is that the two main camps in the safety-industrial complex are what I’ve previously called the X-riskers and the language police. Europe appears to be more dominated by Chernobylists.
Here’s my earlier piece’s characterization of these camps:
The language police: Worried that LLMs will say mean words, be used to spread disinformation, or be used for phishing attempts or other social manipulation on a large scale. AI ethicist Gary Marcus is in this camp, as are most “disinfo” and DEI advocacy types in the media and academia who are not deep into AI professionally but are opining about it.
The Chernobylists: Worried about what will happen if we hook ML models we don’t fully understand to real-life systems, especially critical ones or ones with weapons on them. David Chapman is in this camp, as am I.
The x-riskers: Absolutely convinced that the moment an AGI comes on the scene, humanity is doomed. Eliezer Yudkowsky is the most prominent person in this camp, but there are many others in rationalist and EA circles who fall into it.
I think the above description is still pretty accurate, but it is interesting to me to see the difference in who dominates the debate in which part of the world. There’s something about the X-risk and language-police camps that feels particularly American to me, so it sort of scans that those are our main options while the Europeans are taking a more sensible (IMO) “product safety” approach — but more on that in a moment.
Who is not working on AI governance?
In the US, every industry has a trade association that typically lobbies Congress for favorable regulatory treatment. From advertisers to builders to Catholic colleges and universities — you name it, there’s a trade association for it.
Except for AI. When it comes to AI, we have the exceedingly bizarre spectacle of prominent industry figures approaching Congress and asking for some kind of regulation, all without any apparent coordinating or governing body that speaks for the industry as a whole.
There is no trade group that most AI-focused funds and startups belong to and that is tirelessly working to ensure that startups can do basic things like buy or rent GPUs, train foundation models, launch new products based on new foundation models, and generally operate and iterate without an army of lawyers approving every code deploy.
This is weird and bad, and it must be remedied ASAP.
How might we think about AI laws?
Here are the frameworks that I see emerging for AI regs, frameworks that map pretty directly onto the aforementioned three main AI safety camps:
Existential risk: This is the bucket that technologies like nuclear weapons fall under.
Product safety: This is where the bulk of industry regulation has historically lived in both the US and Europe, and it includes things like seatbelt laws, building standards, and other regulations meant to keep consumers physically safe.
Social engineering: This category contains laws like the Community Reinvestment Act, various affirmative action laws, and other laws aimed at changing society in a certain way by engineering certain types of outcomes.
I’ll take the last two in reverse order, ignoring x-risks entirely because the Europeans are ignoring that issue and that is great. I wish we could ignore them here in the US, but unfortunately, we can’t. We still have to fight them. (First, we fight the doomers, then we laugh at them, then we ignore them, then we win.) I have fought them in other articles, though, so I refer you to those.
Social engineering
There’s a lot of potential slippage between categories two and three, especially in the post-COVID era. It’s worth unpacking why this is the case.
Threat inflation is a core tactic of the safety-industrial complex, and it is mainly accomplished by discovering new types of “harms” that can be framed as “violence” or otherwise “trauma-inducing” and therefore placed under the banner of “safety.” This threat inflation raises the status (and typically the funding) of the people doing the “harms” or “violence” identification, and it also gives them more leverage in arguments by raising the stakes: if literal lives are on the line, then whatever dispute we’re trying to adjudicate becomes a life-or-death struggle between a clear hero and a clear villain.
The end result of threat inflation is that anyone trying to enforce a distinction between physical harm or violence and psychological harm or efforts to address historical inequities (which are said to lead directly to physical harm) does so in the face of increasing opposition from the safety-industrial complex.
✋ But I think we have to insist on this line so that we can actually find a reasonable basis for doing basic product safety regulation, because if every product safety regulation discussion turns into a shouting match over “equity” then we will end up with the worst possible outcome, i.e., no actual product safety but a thicket of tyrannical, dysfunctional rules that makes a few activists and consultants happy and everyone else miserable. (The term “anarcho-tyranny” is relevant, here.)
I should note that I’m not actually opposed to social engineering — I’m just going to insist that when we do it, it’s under two conditions:
It’s clearly marked as social engineering, and is not conflated with “safety.”
It’s in the service of engineering the kinds of outcomes I want and not the kinds of outcomes my culture war opponents want. For instance, social engineering that protects kids from algorithmic manipulation is good, as is pro-natalist social engineering that encourages families to stay together and to have children.
So if in the name of “progress,” you want to require AI models to promote some specific vision of how society should be ordered that is different from the way it is presently ordered, I am gonna give that the big old Chad “Yes” because I have my own opinions about how everything should go and if I’m going to have to hear yours then you’re going to have to hear mine.
If there’s social engineering to be done, then I, my co-religionists, and anyone willing to make common cause with us are going to form a coalition and use every means at our disposal to ensure that our values and mores are the ones enshrined in the models that everyone else has to use.
I actually tend to think that social engineering efforts should be confined to the extralegal parts of the governance stack, i.e., terms of service, moderation, and the like. The one place it seems obvious to me that the law should play a role in social engineering is in ensuring that each group has a representative on the social engineering committee.
Here’s the TL;DR of this section:
Social engineering is good, actually.
When we do social engineering, we have to be clear that this is what we’re doing, and that it’s different from product safety.
If we’re doing social engineering, then the law should ensure that all the stakeholders are represented. Not just technocrats from a certain set of schools and institutions, but everybody, including many groups that the legacy media and the SPLC are trying really hard to unperson.
In cases where there’s only one widely deployed model and one possible set of socially engineered outcomes that this model can be tuned for, you should expect me to stop at nothing to ensure that we tune it for my preferred outcomes and not yours. If this attitude shocks you, then you should ask yourself why you were expecting me to just roll over and let you have your way.
My instinct is that we should prefer to do social engineering via governance methods that don’t require passing laws but that are overseen by the law. I may change my mind on this, though, as I think about it more.
Product safety
I’ve written quite a bit recently on the necessity of treating AI as a software tool and not as an agent or a coworker. I harp on this distinction for a practical reason: the tool framework for AI naturally lends itself to a product safety-based governance regime, and the coworker framework naturally lends itself to a social engineering-based governance regime, and I prefer the former to the latter.
The essence of the “product safety” approach is to stay away from making too many rules about “AI” in the abstract, or even about foundation models, and to focus on validating specific implementations of machine learning models. In other words, focus on the products and services that are in the market, not the tech that is in the lab.
The concept of scope that I’ve previously written about applies here.
In that example, I’m taking into account the specific use case to which I plan to put the model, and then trying to adapt the model so that its performance in that use case meets my needs. I’ve defined a project scope, I’ve developed a solution, and I’ve validated that solution against some predefined acceptance criteria.
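To make that concrete, here’s a minimal sketch, in Python, of what “validated against predefined acceptance criteria” can look like for a scoped deployment. Every name, type, and threshold below is invented for illustration; none of it comes from any actual standard or regulation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AcceptanceCriterion:
    name: str
    check: Callable[[str, str], bool]  # (model_output, expected) -> pass/fail
    min_pass_rate: float               # share of eval cases that must pass

def validate_scoped_solution(model: Callable[[str], str],
                             eval_set: list[tuple[str, str]],
                             criteria: list[AcceptanceCriterion]) -> dict[str, bool]:
    """Run a use-case-specific eval set through the deployed solution and
    report whether each predefined acceptance criterion is met."""
    outputs = [(model(prompt), expected) for prompt, expected in eval_set]
    return {
        c.name: sum(c.check(out, exp) for out, exp in outputs) / len(outputs) >= c.min_pass_rate
        for c in criteria
    }
```

The point isn’t the specific checks or thresholds; it’s that the thing being signed off on is the scoped product, not the foundation model sitting underneath it.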
With this in mind, I was encouraged to learn that the Europeans are taking a product safety approach with their recently announced EU AI Act. I hosted a Twitter Space on it with two of the people working on the law, and the conversation left me feeling a lot less panicked about the state of AI law in the EU.
🦺 It sounds like the Europeans are going to focus their rule-making efforts on specific AI implementations, not so much on the underlying tech. This is good because the right way to regulate AI under a product safety regime is not to regulate “AI” in the abstract but to regulate specific products, in the same way we already do. Is the product itself safe and does it do what it’s supposed to do when it’s being used the way it was designed to be used? If the answer to both those questions is “yes,” then who cares what state the underlying model is in?
As I said in the Twitter Space, if a model is being used in the very narrow context of, say, product support, and it’s properly sandboxed so that users can’t interact with it on out-of-scope topics, then what does it matter if the model is on one side or the other of some hot-button issue? The answer is that it shouldn’t matter to anyone who isn’t trying to do backdoor social engineering by limiting the market for “problematic” models.
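Here’s a rough sketch of the kind of sandboxing I mean, again in Python. The scope_classifier and support_model callables are hypothetical stand-ins, not any particular vendor’s API:

```python
REFUSAL = "Sorry, I can only help with questions about this product."

def scoped_support_bot(user_message: str, scope_classifier, support_model) -> str:
    """Forward only the requests the scope classifier accepts, and screen the
    reply too, so the underlying model's views on out-of-scope topics never
    reach the user."""
    if not scope_classifier(user_message):
        return REFUSAL               # out-of-scope request: never reaches the model
    reply = support_model(user_message)
    if not scope_classifier(reply):  # belt-and-suspenders check on the output
        return REFUSAL
    return reply
```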
On a practical level, a product safety approach to AI regulation would mainly consist of updating existing product safety laws to take into account possible ML integrations.
And sure, we can worry a little about out-of-scope uses for products, but worrying too much is bad. If you murder someone with a kitchen knife, that is an out-of-scope use that in America (in contrast to the UK) we don’t tend to try and address with regulation. It’s good that we treat kitchen knives this way in America, and we should treat models this way, as well. I hope the Europeans adopt this approach to AI (and to kitchen knives).
🤝 One thing it does sound like the EU is worried about with the AI Act is industry capture. They’ve apparently learned some hard lessons from GDPR: the fact that that legislation favors deep-pocketed incumbents who can hire armies of compliance lawyers came up repeatedly in the Space as an example of what the EU wants to avoid with AI. Again, this is encouraging.
The US should definitely adopt this approach from the EU. Regulatory capture should come up constantly at AI-related congressional hearings. Unfortunately, I’ve yet to hear the term brought up by lawmakers in the hearings I’ve watched, though I have heard a lot from them about China, bias, and other hot topics.
The other positive thing about the EU approach that’s worth imitating is the specific carve-outs for open-source models. The Eurocrats seem to be bullish on open-source AI, and as well they should be because it’s Europe’s best hope for transitioning to a world that’s no longer dominated by US-based Big Tech platforms.
Other product safety laws that might be good
If I were to tweak my libertarian readers by proposing some laws that might place positive obligations on companies, then in the spirit of transparency I might suggest we require product makers to disclose where they’re using ML in their products, and to what end.
🪪 I’d also suggest that Congress find ways to support the development of open-source, interoperable licensure and credentialing protocols for data labelers and RLHF preference model trainers. The idea here is that the public should be able to see who is catechizing the bots they’re using, and what those bot trainers’ backgrounds, credentials, and values are. We can do this in either a maximally privacy-preserving way with crypto or a minimally privacy-preserving way with a network of centralized authorities.
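To be concrete about what I’m picturing, here’s a purely illustrative shape for that kind of disclosure record. No such standard exists today, and every field name below is made up:

```python
# Hypothetical disclosure record; the schema, IDs, and issuer are all invented.
ml_disclosure = {
    "product": "ExampleCorp Support Assistant",
    "ml_components": [
        {
            "purpose": "answer product-support questions",
            "model_family": "fine-tuned open-source LLM",
            "scope": "product documentation only",
        },
    ],
    "trainers": [
        {
            "role": "RLHF preference labeling",
            "credential_id": "did:example:labeler-123",        # hypothetical decentralized ID
            "credential_issuer": "hypothetical labeler-certification body",
            "values_statement": "https://example.com/labeler-123",
        },
    ],
}
```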
These product safety ideas aren’t the only things we should be doing on the AI governance front. We should also do things that sit outside the “safety” framework entirely but that will still make upstream contributions to AI safety efforts. For instance, we should:
Create a positive right to buy/rent GPUs and train foundation models.
Follow Japan’s lead and declare all copyrighted material available for model training.
Support the development of control surfaces (RLHF, soft prompting, editing correlations and concepts out of models) that make it easier to fit models to specific applications (see the sketch after this list).
Support explainability research.
Support the development of deployment environments that sandbox the model in such a way that its inputs and outputs are narrowly constrained and can easily be scoped to a specific application, with the user unable to take the model out of scope.
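On the control-surfaces point, here’s a minimal soft-prompting sketch in PyTorch, just to show the general shape of the technique: the foundation model stays frozen, and only a small set of “virtual token” embeddings is trained to fit the model to a narrow application. The wiring to any particular base model is assumed rather than shown.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable virtual-token embeddings prepended to the input embeddings.
    The foundation model's weights stay frozen; only these few parameters are
    trained, one small set per downstream application."""

    def __init__(self, n_virtual_tokens: int, embed_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Hypothetical usage with a frozen decoder `base_model` that accepts
# pre-computed input embeddings; only soft.parameters() receive gradients:
#   soft = SoftPrompt(n_virtual_tokens=20, embed_dim=hidden_size)
#   output = base_model(inputs_embeds=soft(token_embeddings))
```

The appeal from a governance angle is that a deployer can leave an audited base model untouched and ship only these small, application-specific parameters.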
It’s time to organize
Silicon Valley in general, and startups in particular, don’t really like to think about Washington, DC until something happens that brings regulation down on them.
But in the case of AI, that something has already happened: OpenAI’s Sam Altman has gone to Congress and invited that body to regulate this technology (with his input and on his terms, of course). So generative AI is not going to play out like crypto, where the technology had over a decade to grow before regulators started to care.
It’s also the case that “AI” is the slowest-moving sudden revolution anyone has ever seen. The big breakthrough moment for this tech, which some would put at ChatGPT but which I personally would put earlier, at the public launch of DALL-E, was decades in the making, even if all the pieces didn’t come together fully until the last two years.
My point: Those of us investing and building in AI need to start acting like we’re in a mature industry that everyone can plainly see is a Very Big Deal and is imminently coming under regulatory scrutiny. Whatever off-the-radar “move fast and break things” grace period AI had is over.
✊ Those of us who are optimistic about AI and who don’t want to see this new technology suffocated in the cradle by self-appointed safety czars have a lot of catching up to do. It’s time we started to organize and maybe even produce some open letters of our own.
As the anti-safety-industrial-complex coalition develops, I’ll keep my readers apprised of next steps and concrete ways to get involved.