I had an at-least-for-me-interesting interaction with Tom Spencer-Smith on LinkedIn. You MIGHT remember Tom, who was a contractor for Respawn when you were still there (cries inside) doing thin-client development. His focus was basically: how can we automate more of our testing, especially of gameplay, to find the sort of weird edge cases that only surface through sheer volume? He's pretty much all in on AI and on things that are, in many ways, "generic." I.e., he likes Unreal because it's very generic, which is exactly why I don't really like it. I think we view Unreal the same way, just with him thinking that's good and me thinking it's not, though in both cases "good" applies to our own specific use-cases. I.e., I am glad Unreal exists, but I am also glad I don't work on a game built on top of it.
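(Side note: the "sheer volume" approach Tom was describing is essentially fuzz testing applied to gameplay. Here's a minimal sketch of the idea in Python, with a toy step function and invariant standing in for a real game loop; nothing here is his actual code:)

```python
import random

ACTIONS = ["left", "right", "jump", "idle"]

def step(state, action):
    """Toy stand-in for advancing a real game simulation by one tick."""
    if action == "left":
        return state - 1
    if action == "right":
        return state + 1
    return state

def invariant_ok(state):
    """Toy invariant: the player never leaves the level bounds."""
    return -100 <= state <= 100

def fuzz_gameplay(runs=1000, ticks=500, seed=0):
    rng = random.Random(seed)  # seeded, so any failure is reproducible
    for run in range(runs):
        state, actions = 0, []
        for _ in range(ticks):
            action = rng.choice(ACTIONS)
            actions.append(action)
            state = step(state, action)
            if not invariant_ok(state):
                return run, actions  # the action log *is* the repro case
    return None  # no violations found at this volume
```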
Anyway, re: AI, he said something kind of akin to Carmack, but much more specific and prescriptive in terms of how to actually write code that AI could then 1) understand and 2) expand upon. Basically, he argued, "write a lot of small functions," because AI is better at understanding things the more discrete they are.
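(A toy illustration of what he means, using made-up player/world objects in a hypothetical gameplay update. The monolithic version buries intent; the decomposed version gives a model, or a human, small named units to reason about and extend:)

```python
# Monolithic: movement, collision, and damage tangled together. Any change
# (by a model or a human) has to reason about all of it at once.
def update_player(player, world, dt):
    player.x += player.vx * dt
    player.y += player.vy * dt
    if any(w.contains(player.x, player.y) for w in world.walls):
        player.vx = player.vy = 0.0
    for hazard in world.hazards:
        if hazard.contains(player.x, player.y):
            player.hp -= hazard.damage * dt

# Decomposed: each function does one discrete, nameable thing.
def apply_velocity(player, dt):
    player.x += player.vx * dt
    player.y += player.vy * dt

def resolve_wall_collisions(player, world):
    if any(w.contains(player.x, player.y) for w in world.walls):
        player.vx = player.vy = 0.0

def apply_hazard_damage(player, world, dt):
    for hazard in world.hazards:
        if hazard.contains(player.x, player.y):
            player.hp -= hazard.damage * dt

def update_player_small(player, world, dt):
    apply_velocity(player, dt)
    resolve_wall_collisions(player, world)
    apply_hazard_damage(player, world, dt)
```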
And my response to this is basically, "so, treat it like a compiler."
Which I thought was, patting myself on the back, somewhat insightful. And I think that's really what Carmack is saying. He started out literally writing assembly language, and that's since been made largely obsolete by modern high-level languages.
And AI is *potentially* just like this. AI is to Python what Python is to C++ and what C++ is to x86 assembly. This is always what gets me about C++ developers, who can be a bit (quite) snooty because they are *closer* to the bare metal than a Python or JavaScript developer. This was most definitely a real thing at Respawn. Which is not to trivialize the importance of actually understanding what is happening on the bare metal, or to suggest that the depth of an abstraction layer is irrelevant. It isn't. But like, it's also weird. I don't see the same kind of gatekeeping in other areas, though admittedly I don't know what I don't know. But I can't imagine a linguist taking Faulkner to task because he didn't understand the actual way in which humans learn and process language; that doesn't diminish his writing. And likewise, I can't imagine a neuroscientist saying that the work of a linguist or a writer is somehow less valuable because they don't understand how language actually makes its way along synapses.
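(If you take the compiler framing literally, the "source code" you hand the model is a precise specification: types, a docstring, executable examples. A hypothetical sketch of that input, where the body is the "generated" artifact you verify the way you'd spot-check a compiler's assembly output:)

```python
def clamp_health(current: float, delta: float, max_hp: float) -> float:
    """Apply a health change and clamp the result to [0, max_hp].

    >>> clamp_health(50.0, -75.0, 100.0)
    0.0
    >>> clamp_health(90.0, 25.0, 100.0)
    100.0
    """
    # Everything above this line is the "source program": a signature plus
    # executable examples. The line below is the generated output you verify.
    return max(0.0, min(current + delta, max_hp))
```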
Last thought, re: Carmack's "...don't use power tools...": it's worth looking at the Luddites and what they actually believed. They weren't, as commonly depicted, anti-tech. They were pro-workers'-rights. I.e., what they objected to was that the machines were unsafe. And I suspect that's where the real discussion is to be had, but it's currently wrapped up too much in doomerism. Some of that is, IMO, justified here because of the way these models were "trained." I think the training of these models is an important part of the discussion, and you see this in the big court cases where OpenAI and others are like, "we had to use copyrighted information." And I think that is new. I don't know if we've ever had a new technology that has been so reliant on the *involuntary* use of other people's work. And I think that's a big part of why we can't have a more nuanced discussion here. And, to me, it is the closest analog we have to the Luddites' objections around safety.
Thoughtful reply as always, thanks Jordan!
THANK YOU. The discourse around AI in gaming is frustrating and exhausting to no end. I hate how the most extreme, unhinged, and self-righteous voices utterly dominate the discussion. The backlash over The Alters and Expedition 33 was frankly ridiculous. It’s unnuanced, uninformed mob mentality and reeks of virtue signaling.
I’m personally quite excited about the possibilities of generative AI for game dev, but mostly keep it to myself because I *know* exactly what the reaction will be.
In a lot of ways, the AI push in gaming is a progression of the medium’s interest in procedural generation.
Some of the medium’s most beloved games, like Minecraft, rely on procedurally generated level design. But no one complains this is wiping out level designer jobs.
Apply this same logic to programmers and narrative designers. They aren’t less important. AI still needs a competent designer to make sure that what it generates serves a compelling vision.
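(To make the procedural-generation parallel concrete: the designer authors the rules and tuning knobs, and the algorithm supplies the volume. A toy sketch of seeded worldgen in that spirit; this is not Minecraft's actual algorithm:)

```python
import math
import random

def heightmap(width: int, seed: int, base: float = 32.0,
              amplitude: float = 12.0, wavelength: float = 16.0) -> list[float]:
    """Toy 1D terrain: the designer picks the knobs, the seed fills in the rest."""
    rng = random.Random(seed)
    phase = rng.uniform(0, 2 * math.pi)
    detail = [rng.uniform(-2, 2) for _ in range(width)]
    return [
        base
        + amplitude * math.sin(2 * math.pi * x / wavelength + phase)
        + detail[x]
        for x in range(width)
    ]

# Same rules, different seeds -> endless "new" levels, zero new design work.
print(heightmap(8, seed=1))
print(heightmap(8, seed=2))
```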
Great post. The discourse around AI has been really frustrating and I'm sure it comes from a place of fear and uncertainty. I don't think the big tech companies are doing a great job at selling a vision of what could be. The demos in every area are often tone deaf. The constant pushing of the AGI vision of the future feels fuzzy at best and terrifying at worst. It's going to be an uncomfortable few years.
This is a good, balanced discussion. I'm not sure what to think about AI because there are too many opinions, and the most vocal ones sit at either extreme. Right now, I suppose using AI for tools is OK, since I use them in my software-developer job. I don't blindly use them, though; I simply ask for help (usually on a specific technology I've never used before), and I have to double-check the result and adapt it to fit my needs. But I have to know up front what I want to achieve; it won't creatively do everything for me. Tools to help here and there during the development of a game would be similar, I think.
The one area I don't like is when people/companies go to the extreme end and think AI will build everything for them. In those cases all I can see are the dollar signs floating above their heads, clearly wanting to make the most money with as little effort as possible. And those cases always end up feeling artificial; there is always something "off" about them.
I also worry about the psychology of people at this point. I'm already seeing it in my youngest daughter's generation (she's 8), where she doesn't really want to think for herself; she just wants the answer immediately. Like telling the time: why learn how to do that when she can just ask Alexa? Why learn maths and do homework if computers can do it for you? I'm always trying to teach her to think through problems, but she hasn't got the patience for it; the world around her wants her to consume everything right now, all at once. AI is now one more thing that encourages that. Her generation will one day be the workforce, so what kind of things will they create?
Thank you for writing this Ryan!
The power tools analogy is a good one to keep in mind, the emphasis being on tool. Tools don't work by themselves, they still require a human handler. I think the hype has led to an incomplete understanding of these tools, and premature action (like CEOs saying they can replace their workforce with "AI") that has hurt the conversation. There are safety and regulatory things that we need to figure out, but jumping straight to a dystopian future can hinder our progress.
Important post!
I write my newsletter in Hungarian, but I've been advocating for the view that AI is a new medium.
Here's the CEO of Runway making the same argument, which I think is a useful lens for game developers: https://cvalenzuelab.com/anewmedium
He's of course also much better informed than me about the potential of this new technology.
We've given the bots a way to dream
Now there’s a good model
Holding a conversation about AI can be very frustrating because most people involved frame the topic through whatever lens they've already settled on. And I find people tend to view it only as it is now, not as what it will or could be. Just this morning a fellow software developer was disparaging its capabilities because a six-year-old child can outperform it, yet I find it amazing that we have technology that even warrants the comparison.
Concerning world models and Genie 3: I could write for days, but one component of a future I can imagine is using them to 'personalize' media (films/games/music) in ways that better suit our individual needs (someone will produce a version of 'The Phantom Menace' without Jar Jar Binks). These would of course be shared with others, like mods already are (cue the copyright-infringement discussion). It may well be that the 'definitive' version of a piece of media will be generated by the end users.
No thanks! I hate AI, it IS horrible for the environment and much of it is ONLY possible through egregious amounts of plagiarism and it sucks.
Whenever I see something like Genie 3, I'm impressed that it *exists*. Like, it's an impressive feat of software engineering. But that's all I can really say about it.
Do I think this is the first step towards building The Matrix? No, because we're already hitting the technical limits of how far machine learning can go without infinite memory. Could this be a fun video game? Probably not; you can already do text-based adventures with LLMs, and they're terrible substitutes for an RPG video game or a tabletop dungeon master. Could I use this to make game prototypes? Probably not, because prompting doesn't give you the degree of control you need to build new gameplay mechanics.
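(For readers who haven't tried it, the "text adventure with an LLM" this refers to is roughly this much code. A minimal sketch assuming the openai Python package, an API key in OPENAI_API_KEY, and one plausible model name:)

```python
# Bare-bones LLM "dungeon master" loop; model name is one plausible choice.
from openai import OpenAI

client = OpenAI()
history = [{
    "role": "system",
    "content": "You are a text-adventure narrator. Describe the scene in a "
               "few sentences, then wait for the player's next action.",
}]

while True:
    action = input("> ")
    if action in ("quit", "exit"):
        break
    history.append({"role": "user", "content": action})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    print(text)
```

Everything the "game" knows lives in that chat history; there's no world state, rules engine, or persistence outside it, which is exactly the degree-of-control gap being pointed at.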
We're 3 years into the "AI revolution". Enough talk about what's *possible in the future*; we need to see something actually useful enough to justify the billions of dollars of R&D spent.
Tim Sweeney posted a tweet arguing, roughly, that the real magic will come when world-model tech can be integrated with existing models. I’ll give the Unreal gang another 12 months before passing judgment on whether or not that’s realistic.