Discussion about this post

Jordan Rapp:

I had an at-least-for-me-interesting interaction with Tom Spencer-Smith on LinkedIn. You MIGHT remember Tom, who was a contractor for Respawn when you were still there (cries inside) and did thin-client development. Basically: how can we automate more of our testing, especially of gameplay, to find the sort of weird edge cases that only emerge from sheer volume. He's pretty much all-in on AI and on things that are, in many ways, "generic." I.e., he likes Unreal because it's very generic, which is also exactly why I don't really like it. I think we view Unreal the same way, just with him thinking that's good and me thinking it's not, though in both cases "good" applies to our own specific use cases. I.e., I'm glad Unreal exists, but I'm also glad I don't work on a game built on top of it.

anyway, re: AI, he said something kind of akin to Carmack, but much more specific and prescriptive in terms of how to actually write code that AI could then 1) understand and 2) expand upon. Basically, he argued, "write a lot of small functions" because AI is better at understanding things the more discrete they are.
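To make that "write a lot of small functions" idea concrete, here's a minimal sketch. All the names and data are invented for illustration (not anything Tom actually wrote); the point is just that each function is a single, discrete claim that a model (or a reviewer) can verify in isolation, instead of one monolithic blob:

```python
# Hypothetical example: small, single-purpose functions composed together.
# Every name here is made up for illustration.

def parse_score_line(line: str) -> tuple[str, int]:
    """Split a 'name,score' record into its parts."""
    name, raw_score = line.split(",")
    return name.strip(), int(raw_score)

def is_passing(score: int, threshold: int = 60) -> bool:
    """One discrete rule, verifiable without reading anything else."""
    return score >= threshold

def passing_names(lines: list[str], threshold: int = 60) -> list[str]:
    """Compose the small pieces instead of inlining all the logic."""
    return [
        name
        for name, score in map(parse_score_line, lines)
        if is_passing(score, threshold)
    ]

print(passing_names(["ada, 91", "bob, 42", "eve, 77"]))  # ['ada', 'eve']
```

Each piece is small enough that "understand" and "expand upon" become local operations, which is presumably the property he's after.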

And my response to this is basically, "so, treat it like a compiler."

Which I thought was, patting myself on the back, somewhat insightful. And I think that's really what Carmack is saying. He started out literally writing assembly language. That's now been made obsolete by modern high-level languages.

And AI is *potentially* just like this. AI is to Python what Python is to C++ and what C++ is to x86 assembly. This is always what gets me about C++ developers, who can be a bit (quite) snooty because they are *closer* to the bare metal than a Python or JavaScript developer. This was most definitely a real thing at Respawn. Which is not to trivialize the importance of actually understanding what is happening on the bare metal, or to suggest that the depth of a given level of abstraction is irrelevant. It isn't. But it's also weird. I don't see the same kind of gatekeeping in other areas, though admittedly I don't know what I don't know. I can't imagine a linguist taking Faulkner to task because he doesn't understand the actual way in which humans learn and process language. That doesn't diminish his writing. And likewise, I can't imagine a neuroscientist saying that the work of a linguist or a writer is somehow less valuable because they don't understand how language actually makes its way along synapses.

Last thought, per Carmack's "...don't use power tools...": it's worth looking at the Luddites and what they actually believed. They weren't, as commonly depicted, anti-tech. They were pro-workers'-rights. I.e., what they objected to was that the machines were unsafe. And I suspect that's where the real discussion is to be had, but it's currently wrapped too much in doomerism. Some of that is, IMO, justified here because of the way these models were "trained." I think the training of these models is an important part of the discussion, and you see this in the big court cases where OpenAI and others are like, "we had to use copyrighted information." And I think that is new. I don't know if we've ever had a new technology that has been so reliant on the *involuntary* use of other people's work. I think that's a big part of why we can't have a more nuanced discussion here. And, to me, it is the closest analog we have to the Luddites' objections around safety.

Jay Rooney:

THANK YOU. The discourse around AI in gaming is frustrating and exhausting to no end. I hate how the most extreme, unhinged, and self-righteous voices utterly dominate the discussion. The backlash over The Alters and Expedition 33 was frankly ridiculous. It's unnuanced, uninformed mob mentality, and it reeks of virtue signaling.

I’m personally quite excited about the possibilities of generative AI for game dev, but mostly keep it to myself because I *know* exactly what the reaction will be.
