Excellent summation. I wonder if the buzz is similar to when printing presses were giving books to the masses. No more gatekeepers; this really is the equity the moralists have been talking about, right? I'm an artist and realize I have to sell myself now, not my art. My art's the vehicle to me now. Until I can be simulated, of course.
So exciting to be living through this moment. I imagine it’s going to be much worse, and much better than we can predict. Human thinking be damned.
Your last paragraph is legitimately a great description of not only this moment but any tech disruptions throughout human history, and I think it'll be the description that'll be lodged in the ol' noggin now.
Gary North (and partly James B Jordan) had a theory of economic development that progressed through three phases (which they connected to the Trinity and various Biblical triads) which he applied to stages of revolution (e.g. education revolution: https://www.garynorth.com/public/18126.cfm). The stages were: (1) the oligarchic, (2) the democratic, and (3) the individualistic.
*Note: As North points out, "we lose some conceptual accuracy by transferring concepts from one discipline to another, but when no readily recognized terms exist in one discipline, imports sometimes help", so we need to remember we're dealing with analogies.*
First, we have the oligarchic stage, where the market is narrow, there's a huge disparity in the quality of goods, and the producers coalesce around guild-like institutions. Second, we have the democratic phase where new tech not only decreases the cost of making goods but also decreases the cost of distribution so that the market expands greatly, quality has a more spread out distribution, and the guilds lose out to those who can distribute cheap goods to larger swaths. It would seem like at this point that quality drops, but that's only taking into account how the rich/oligarchs see things. For the poorer folks who didn't have access to anything in the first stage, their quality goes from zero to one. And third, we have the individualistic stage, where the mass production techniques (surprisingly enough) ignite a massive knowledge curve which the competitors happily ride. Diversification begins to ramp up and the end result is somehow a synthesis of our former guild-like situation but now offered to more people.
In Balaji's terminology, it's centralization to decentralization to re-centralization (though if you squint you can also read it as decentralization to centralization to re-decentralization). The key point is that it's always a bumpy ride, and what pops out at the end is always surprising.
The big question is: does this fundamentally change the awesomeness distribution function, or does it just multiply the existing one by a constant? In one case everything changes; in the other, the relative differences between individuals remain about the same.
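One way to make the two scenarios concrete (a toy sketch with made-up numbers, not a claim about what AI actually does): uniformly multiplying everyone's skill leaves relative gaps intact, while an additive "floor raise" compresses them.

```python
# Toy illustration of the two scenarios: does AI rescale everyone's
# skill by a constant, or does it reshape the distribution?
skills = [1.0, 4.0, 16.0]  # hypothetical baseline "awesomeness" scores

# Scenario 1: multiply by a constant -- relative differences unchanged.
uniform_boost = [s * 3 for s in skills]
ratios_before = [skills[i + 1] / skills[i] for i in range(2)]
ratios_after = [uniform_boost[i + 1] / uniform_boost[i] for i in range(2)]
assert ratios_before == ratios_after  # each person is still 4x the last

# Scenario 2: an additive floor-raise -- the bottom catches up,
# so the relative gaps between individuals shrink.
floor_raise = [s + 10 for s in skills]
compressed = [floor_raise[i + 1] / floor_raise[i] for i in range(2)]
assert all(r < 4.0 for r in compressed)  # gaps shrink in relative terms
```

In the first scenario the rankings and ratios are untouched; in the second, "everything changes" in the sense that the distribution itself is reshaped.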
Isn’t the actual problem with kids turning in AI essays not “they’re going to be disruptors to the literate classes, yay, populism up, elites down” but rather “unmotivated kids will be able to avoid learning even the basics of composition and argument by outsourcing their class work to AI - whoops, we’ve raised a generation even more sub-literate than the last”? Your essay seems to skip over this concern entirely.
Kids who want to learn to think and write essays and be literate will be able to do that, and probably on a level that previous generations could only dream of. More kids will learn more of those skills faster than ever in human history.
Kids who just want to turn in work and get a grade will be "even more sub-literate," as you say. They are screwed, and I don't have a good answer right now for how to re-motivate them. Maybe we have to return to the days of writing everything out by hand under the supervision of a proctor with no smartphone access. I don't know.
But anyway, I don't think I skipped over this. I think the above points are covered in the post. My answer is that kids who want to learn are better positioned to do so than at any other time in history, and kids who don't are also better positioned to avoid learning by faking it than at any other time in human history.
That's the thing though... school isn't about kids WANTING to learn all the time. It's about preparing them to be functioning adults with base knowledge who can choose what to do with that knowledge... There is a certain level of requirement that I feel you're ignoring. We were all kids once: if there is an easier way to do something, they're going to do it. I've seen numerous TikTok videos showing people how to defeat AI detection tools. Your prediction that someone will die at the hands of an incompetent doctor who over-relied on AI to get through med school... I don't know about the rest of you, but that f-ing terrifies me.
I've been thinking about this more, and I think you're probably right, yeah. I also worry about kids who want to learn but are in a competitive situation and might be pressured into using AI in ways that would short-circuit their learning just to get a grade.
So yeah, I concede I may be too optimistic here. I may write a followup with more thoughts.
My friend’s wife is an ER doctor in her 40s at a teaching hospital, and she’s already terrified by the quality of new doctors because of the lowered standards and inability to take criticism. Adding in AI-generated essays is probably not going to help 🤣
I very much hope you’re right about AI providing a useful assist/boost to motivated students (there seems to be no clear evidence yet, but that’s not surprising given how new all of this is). It’ll probably be quite a while before we can begin to determine whether it’s a useful tool or a crutch. The paper you referenced has interesting ideas but is highly speculative. It’d be interesting to see some actual studies comparing students who train with AI to those who don’t, and then comparing their post-AI-training performance on non-AI-enhanced coursework.
As for the unmotivated-student dilemma: as you say, proctors might be the only solution, and since that’s no solution at all given the lack of resources, it’ll probably fall on parents to monitor “good” vs. “bad” reliance on AI. Which means the gap between kids with wealthy/motivated/educated parents and those without may widen even further. As always.
I think you make many great and valuable points about the upcoming changes, but I have one major issue with all of this.
The problem isn’t that more people are suddenly going to get good at these arts because they’re using an AI; it’s that these people are punting the work to the AI instead of actually learning to do it themselves.
You made great points about using AI as a learning tool. I love that idea. Asking an AI “Explain to me how to do this equation” is *awesome*. That sort of thing will help people learn things that they never understood before. But I fear the majority will instead just say, “Solve this equation for me”.
I guess when I say that it seems a lot like the calculator problem. A lot of people can put something in a calculator without understanding the math. This is like that but with superpowers, and not just for math.
I’m not arguing we should shut it all down; like any tool, there is an incredible amount of good that can be done with things like ChatGPT, but also a lot of bad. I’m worried this is just going to further widen the divide between those who want to learn and grow for themselves and those who just do the minimum possible and let others make their lives easy, for no real personal gain except laziness.
And perhaps most importantly, just because I asked an AI to paint something cool and I share it doesn’t mean I did that. It was trained on the work of someone that learned how to paint on their own, and the AI was created by someone else too. These tools aren’t going to make everyone instantly better at anything except copying the hard work of others at nearly no cost to themselves. If I submit a story or painting I had an AI generate I should get zero credit for producing that work. Selecting it or editing it… maybe, but that’s something else.
That’s where this isn’t fair to those who have put in the time to learn their trade: people who submit AI-generated art under their own name are doing something no better than plagiarism, since it wasn’t them that made it.
Like I mentioned above though, in the longer term I agree these tools can be used to enhance learning, making us humans truly better, not just better copycats.
Another great point in support of your predictions is that, the greatest chess player in the world, Magnus Carlsen, famously developed an edge by training against top-end chess AI as a youngster. These chess AIs have been widely considered to be better than human players for at least 10-15 years, and training against AIs is not seen as controversial at all. So this supports the idea that students who supplement their learning with AIs would have better outcomes.
The flip side of AI in chess is that there is also an on-going controversy in the chess world, where a top-level player perhaps used an AI in a live match against Magnus Carlsen -- a claim that has not yet been proven. Top chess experts are not even sure how such cheating could have even been performed in a tightly controlled "over-the-board" tournament setting. But that's just to say, so-called "cheating" with AI will definitely be an issue for everyone, even if it's merely allegations of cheating.
The fear of the bottom levelling up is present among the elites, but will the elites also level up? Is there a diminishing return on competency via AI with respect to your own baseline competency?
Up until that last section, The Lightning Storm, I was actually getting pretty angry with how nonchalant you seemed to be about this boom. Lol. It has the power to do great things and finally give us the tech boom we've needed to push us into the next era, but there's an equal and opposite reaction to it... And no one, especially lawmakers, will take this seriously until it's used to cripple a government or hurt a lot of people in some way.
Corporations seem to be all for it because it can improve many aspects of their business and generate more profit, but once it inevitably starts cutting into those profits in some way, the lobbyists will do their thing and the politicians will finally make some changes. But those changes will obviously be in their own, and their donors', best interest, not ours...
One nice way of making lemonade is to have them use GPT to submit alternative views on the subject and then elucidate in class what they concluded.
I came across this article of yours yesterday, right before GPT-4 was released, and I really enjoyed your analysis. I just read this on Reddit - https://www.reddit.com/r/ChatGPT/comments/11rfkd6/after_reading_the_gpt4_research_paper_i_can_say/ - and I’m really curious about your opinion on it.
Thanks! I will take a look at this.
But the skills needed to be elite in the new world are not the same as those needed now.
What about China?
And this article was written by chatGPT.
What are the skills we should favor to keep our jobs?
Great snapshot of the zeitgeist Jon, thanks for the educational food for thought specifically.
Aren't you a friend of David Chapman? What do you make of https://betterwithout.ai ?