Unless you talk to game developers, where a "follow the ball" "algorithm" for Pong classifies as AI, because it's controlling the behaviour of a game-world agent that's not the player. The term pretty much matches up with what game theorists (as in game theory, not computer games) call strategies. If people do use ML for that kind of stuff, it's not the approaches that make the news nowadays, because inference is (comparatively) expensive; stuff like NEAT churns out much more sensible actor programs because it evolves structure, not just weights.
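For the curious, here is a minimal sketch of what that "follow the ball" Pong opponent boils down to. The names (follow_the_ball, paddle_y, speed) are invented for the example, not from any particular engine; the point is just that there's no learning anywhere, only a rule applied every frame.

```python
def follow_the_ball(paddle_y: float, ball_y: float, speed: float = 4.0) -> float:
    """Return the paddle's new y position after one frame."""
    if ball_y > paddle_y:
        return paddle_y + min(speed, ball_y - paddle_y)  # step down toward the ball
    if ball_y < paddle_y:
        return paddle_y - min(speed, paddle_y - ball_y)  # step up toward the ball
    return paddle_y  # already aligned, stay put

print(follow_the_ball(paddle_y=100.0, ball_y=130.0))  # 104.0: paddle nudged toward the ball
```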
AI in games is not AI in the CS sense, and that's probably where the confusion is coming from. AI in games uses the cultural definition that includes things like C3PO and whatnot, whereas AI in the CS sense is just about any algorithm that seems to learn as its environment changes, usually to find a better (more fitting) solution than the previous iteration. Game AI is generally just pathing and direct responses to stimuli, it doesn't really learn, so players can cheese the AI pretty consistently.
I think games using actual AI would be undesirable because it would make games involving AI much less predictable and probably way harder. It would also likely use way more compute resources.
You get something very predictable when you throw NEAT at Flappy Bird. And you don't need ML approaches to make game AI not fun. Take RTS games: in the beginning many AIs were very simple and had access to what were essentially cheat codes just to be halfway competitive. Then programmers sat down and let the AI path-find through possibility spaces such as economic build-up to formulate a strategy to follow, so it didn't need to cheat. The thing is, those AIs are pretty much on or off: either they suck badly and need to cheat to survive, or they're so good they get accused of cheating. So you need to dumb them down to make them believable: make them take non-optimal decisions and make mistakes in execution.
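To make the "dumbing down" concrete, here is a hedged sketch of one common trick: score the candidate actions, then occasionally pick a deliberately worse one. The function name, the blunder_rate parameter, and the example scores are all invented for illustration.

```python
import random

def pick_action(scored_actions, blunder_rate=0.2):
    """scored_actions: list of (action, score) pairs; higher score = stronger play."""
    ranked = sorted(scored_actions, key=lambda pair: pair[1], reverse=True)
    if len(ranked) > 1 and random.random() < blunder_rate:
        return random.choice(ranked[1:])[0]  # deliberately pick a weaker option
    return ranked[0][0]  # otherwise play the best-scored move

# The "perfect" choice is to expand, but 20% of the time the bot blunders on purpose.
print(pick_action([("expand", 0.9), ("turtle", 0.6), ("all-in rush", 0.3)]))
```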
That's the main issue: having a believable and fun opponent, neither an idiot nor a perfect genius, and you don't need ML approaches to end up at either extreme. Most studios pretty much gave up on making AI smart and keep it deliberately simple, to the point where HL2 is still the pinnacle of achievement when it comes to game AI, with second place going to HL1. Those troopers are darn smart, and if the player couldn't listen in on their radio chatter they would indeed appear cheaty, always appearing out of nowhere... they're no dummies, they flushed you into an ambush. That is, Valve solved the issue by essentially letting the player cheat: the player gets more knowledge than the AI (the radio chatter), and, compared to the troopers, the player is a bullet sponge. All of that is non-ML; it's all hand-written state machines, more than complex enough to exhibit chaotic behaviour.
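To be clear, the snippet below is not Valve's code, just a rough sketch of what a hand-written combat state machine can look like; the states, stimuli, and transitions are all made up for the example.

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    TAKE_COVER = auto()
    FLANK = auto()
    ATTACK = auto()

def next_state(state: State, sees_player: bool, under_fire: bool, flank_ready: bool) -> State:
    """Pick the next state from a handful of boolean stimuli, no learning involved."""
    if state is State.PATROL:
        return State.TAKE_COVER if under_fire else (State.ATTACK if sees_player else State.PATROL)
    if state is State.TAKE_COVER:
        return State.FLANK if flank_ready else State.TAKE_COVER
    if state is State.FLANK:
        return State.ATTACK if sees_player else State.FLANK
    if state is State.ATTACK:
        return State.TAKE_COVER if under_fire else State.ATTACK
    return State.PATROL

# A patrolling trooper that spots the player while taking fire ducks into cover.
print(next_state(State.PATROL, sees_player=True, under_fire=True, flank_ready=False))
```

Chain a few dozen of these states together across a squad and the combined behaviour already looks far smarter than any single rule.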
AI has been co-opted by all the GenAI people. The number of times I've heard things like "AI is the next big thing for X business" and they're only talking about GenAI is way too high.
In that case it actually makes sense because the main goal is to make an artificial entity appear intelligent to the player. This is not the same as calling all ML algorithms/models AI.
The defining factor of AI isn't if it makes things appear intelligent. Games use fairly simple algorithms to handle their bots compared to things like in this article.
Pathfinding is literally just searching the map for the shortest route to a goal, usually with something like A* or Dijkstra; no learning involved.
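For a concrete picture, here's a toy breadth-first search on a tiny hand-made grid that returns a shortest path; real engines typically run A* over a navmesh instead, but the flavour is the same and none of it is "intelligent" in the ML sense.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Return a shortest list of (row, col) steps from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            path = []
            while current is not None:      # walk the breadcrumbs back to the start
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    return None  # goal unreachable

# 0 = walkable, 1 = wall
level = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(shortest_path(level, (0, 0), (2, 3)))
```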
Just a few years ago, "AI" often meant a simple program with hardcoded inputs and answers in if/else conditional blocks, and ML was ML.
After ChatGPT got popular, AI no longer means a simple if/else program. Worst of all, ML is no longer called ML.
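Something like this entirely made-up toy is the kind of old-school if/else "AI" being described: canned inputs mapped to canned answers, nothing learned anywhere.

```python
def answer(question: str) -> str:
    q = question.lower()
    if "hello" in q:
        return "Hi there!"
    elif "weather" in q:
        return "It's always sunny in here."
    elif "name" in q:
        return "I'm a very simple bot."
    else:
        return "I don't understand."  # no fallback model, just a shrug

print(answer("Hello, what's your name?"))  # "Hi there!" -- first matching rule wins
```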