Twilight of the AI nerds
When your geeky scene gets invaded by big money, beautiful climbers, & current controversies
Complaining about the state of the AI/ML field is officially A Thing on the largest machine learning subreddit (1.9 million subscribers). Take a look at these threads:
[D] Why has machine learning become such a toxic, know-it-all field?
[D] The Rants of an experienced engineer who glimpsed into AI Academia (Briefly)
[D] Some interesting observations about machine learning publication practices from an outsider
Here are two papers on the problems, mirroring many of the Reddit complaints:
In general, most of the complaints in these threads and papers are what it looks like when your scene gets discovered — when the money floods in, and the cool kids and ambitious climbers come chasing the money.
But there are other things going on with the field that are intrinsic to the subject matter itself. Finally, there are some problems that are happening everywhere and are inseparable from this cultural moment.
Let’s take a look at the complaints.
Mo’ money, mo’ problems
I think the growth of most of what you’ll read above correlates directly with the amount of investment money flowing into AI/ML, and with the degree to which the people in the promo materials for AI/ML conferences and workshops have attractive faces, great hair, and professionally done headshots.
I’ve seen this happen before with the PC tech scene and tech in general. I’m so old that I remember when computers were still a nerd thing — back before you drove them around in the shape of a car, or carried them in your pocket.
I remember the initial arrival of the cool kids on the tech website scene. I’m speaking of the gadget bloggers, who were the first wave of popular, conventionally attractive young people who were in the tech scene but weren’t actually technical in any meaningful way. It was really weird for those of us who got our starts writing about overclocking the Intel Celeron to watch bona fide cool-kid cliques form up right before our eyes.
When this happens, old heads complain that the scene used to be smaller and more collegial. Everyone knew each other. Sure, there were a few jerks and creeps back in the day, but everyone knew who they were and to stay away from them. There were no celebrities who had a real social footprint outside the scene itself. But now there are a bunch of people calling the shots whose primary skill seems to be getting famous (vs. people who were good at whatever the original scene was about).
Easy money is also probably behind the complaints of “gatekeeping.” From digging around in the threads, it seems to me that what’s meant by this term is not just traditional institutional gatekeeping, but more like old-fashioned cliquishness.
Some of this, as some commenters allege, is pure cronyism. But there’s also another driver of this, which I’ve seen people admit to in some of the comments on the threads listed above: using institutional affiliation as a first-order filter, because the present volume of papers in the field is so high that it forces you to pick some kind of triage strategy.
The money flooding into the field is increasing the number of people in it who want to add publications to their CV, so everyone is publishing everything they can. Often these publications are formulaic and fit certain patterns, which are described in some of the comments I’ve seen. These patterns often involve a “wall of math” (another common complaint) seemingly intended to cover up the other shortcomings of the paper.
I’ve heard complaints about the increased volume of publications from everyone I’ve talked to in this space in the past few months. There’s just too much going on for anyone to keep track of. So it makes sense that reviewers and conference session organizers are now heavily leaning on institutional affiliation to filter submissions.
I occasionally run across efforts aimed at doing automated filtering and sorting of papers (because of course ML will get thrown at the problem of too many ML papers), but these are just going to turn the field into another instance of the Everything Game.
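For what it’s worth, here’s a toy sketch of what the simplest version of that kind of automated triage tends to look like. It assumes scikit-learn, and the abstracts and the “interests” query are made up for illustration; this isn’t any particular project’s actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up abstracts standing in for a flood of submissions.
abstracts = [
    "We propose a novel attention mechanism for image classification.",
    "A study of fertilizer application rates in midwestern corn fields.",
    "Scaling laws for large language models trained on web text.",
]
# A made-up statement of what the reader actually cares about.
interests = "transformer scaling and attention for vision and language"

# Vectorize everything into one shared vocabulary, then rank each abstract
# by cosine similarity to the interests query.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(abstracts + [interests])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for score, abstract in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```

The point isn’t that this is hard to build; it’s that once everyone is ranking papers this way, the incentive shifts to writing abstracts that score well against whatever the filters reward.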
Explainability problems
Another complaint that shows up in some of the threads, and that I find really interesting, is that there are now many things going on in ML that no one knows how to explain.
The kinds of utterances described above are common enough in the natural sciences and humanities, but they’re not the kind of thing you expect to hear in a computer science course. I did my undergrad in computer engineering, including courses in neural networks and computer vision back in 1998, and I’m pretty sure I never once heard anyone say we really didn’t know the “how” or the “why” behind the things we were learning.
Because the software is now complex enough that it mimics natural phenomena, you end up with mysteries of the sort that are common in other disciplines but not so much in computer science. This also means that experiment-driven approaches from other disciplines are being applied to ML.
I talked to someone working in AI explainability at one of the top schools for this, and they told me that they do a lot of experimentation to figure out what parts of a deep learning network are doing what kinds of things. They’ll formulate a hypothesis, and then probe the network’s behavior under different conditions to learn what’s going on.
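To make that concrete, here’s a minimal sketch of what one round of that kind of hypothesis-and-probe loop can look like in code. It assumes PyTorch, uses a toy stand-in network, and the “hypothesis” (that a handful of middle-layer units dominate the output) is purely illustrative; this is not the actual protocol my source described.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy network standing in for the deep learning system under study.
model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

# Hypothesis: a few units in the middle layer matter far more than the rest.
# Probe it by ablating (zeroing) each unit and measuring how much the output shifts.
x = torch.randn(64, 16)
baseline = model(x).detach()

def ablate_unit(module, inputs, output, unit):
    # Forward hooks may return a modified output that replaces the original.
    output = output.clone()
    output[:, unit] = 0.0
    return output

middle = model[2]  # the second Linear layer, whose 32 output units we probe
effects = []
for unit in range(32):
    handle = middle.register_forward_hook(
        lambda m, i, o, u=unit: ablate_unit(m, i, o, u)
    )
    shifted = model(x).detach()
    handle.remove()
    effects.append((baseline - shifted).abs().mean().item())

# Units whose ablation moves the output the most are candidates for
# "doing something important" -- the evidence the hypothesis is tested against.
top = sorted(range(32), key=lambda u: -effects[u])[:5]
print("most influential middle-layer units:", top)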
But of course, the big difference between AI/ML and the natural sciences is that humans are designing and building the systems that humans are also unable to explain. So there’s a peculiar feedback dynamic here that I don’t think is present in other domains.
Contrary to the presently popular trend of reading everything through the lens of James C. Scott’s Seeing Like a State, I don’t think the study of human institutions offers much guidance in this area because human institutions are made out of humans. In contrast, the building blocks of AI are all quite deterministic and explainable.
The woke wars
One common complaint in the threads at the top of this article is “the ethics people”: not so much their scholarship as their behavior. It turns out that some people really don’t like being told that merely disagreeing with someone over a technical point while having the wrong profile picture is evidence that they’re racist, privileged, trying to erase marginalized voices, and so on. Who knew people didn’t like to be stereotyped based on accidents of birth!
While the toxicity of the ethics crowd on Twitter, Reddit, private listservs, and workplace venues is a hot topic among the “ML sux now” complainers, I also strongly suspect these folks are not fans of the actual papers and other work products coming from the ethicists; it’s just easier and safer to complain about tone rather than content.
But at some point, you have to be willing to stand up and tell the crusaders: hey, this work is not good — it’s poorly written, trades on stereotypes, contains very little in the way of falsifiable claims, offers no solutions (other than “burn it all down”), and is not welcome in this field. Please revise.
I’ve also talked to a number of ML practitioners who think this ethics stuff is all just a passing fad that will go away when the fever breaks. I always tell them, not a chance. There’s no field that’s safe from this set of ideas, behaviors, and practices, and it never passes until the community/company/institution/scene gets overwhelmed with it and implodes.