What’s Threat Detection?
Threat Detection is a lively, smart, frequently funny and always irreverent videogame chat show on Radiomade.ie. Each week, hosts Gareth Stack & James Van De Waal take an hour or two to tear apart a videogame topic, like character, horror, or sex.
Download: Threat Detection – Episode 22
Why is artificial intelligence hard?
Primarily because we don’t really understand human intelligence, especially as applied to areas like language and creativity. There’s a mysterious middle ground between our understanding of what’s happening at a high level (cognitive processes) and at a low level (individual neuronal signalling). For example, neuroscientists still don’t know for certain whether ideas are distributed across networks of neurons (Hebbian cell assemblies) or localised to individual neurons.
Traditional research in AI seeks to recreate general intelligence. Projects such as MIT’s Kismet attempt to create an AI that can learn and interact socially, as well as exhibit emotions. At the time of writing, MIT is working on developing an embodied AI with the faculties of a young child, a long-term project yet to bear fruit.
Natural intelligence is situated physiologically, socially and environmentally. Evolved adaptive intelligence makes sense only in the context of what biologists call ‘the environment of evolutionary adaptedness’ – in other words, the environment it evolved to suit. Cognitive scientist Herbert Simon famously compared intelligence to one blade of a pair of scissors, the other blade being the structure of the environment it evolved to process.
Natural intelligence is based on multi-level, dynamic, evolved biological systems, with natural selection applying at the level of the gene, the cell, the biological system, the organ, the individual, the group and the species – not to mention sexual selection at the level of the individual.
For smart AI, you need a smart AIE (artificially intelligent environment). What does a smart environment look like? It has properties like mass, friction and location in space. It matches the capacities of its AI. It has a cohesive design that fits the affordances of the game universe.
But that’s not all there is to simulating intelligence. What about collective (socially situated) intelligence? AI that communicates – not merely its perceptions, but what it has learned, remembered and predicted. This is how culture develops (even ants have culture by this definition) – even protozoa exhibit (genetically acquired) learning.
Evolving Artificial Intelligence
Genetic algorithms will one day give us smarter game AI. But current progress is incredibly primitive. Today, evolutionary algorithms are sometimes used for procedural level generation or modelling the physical behaviour of characters. Developing genetic algorithms is hard. It requires precisely specifying a problem space and its attendant fitness function (a way to measure how well the problem has been solved). These must be manually implemented for each parameter of every aspect of behaviour required (not to mention play-tested, balanced, etc.).
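To make the point concrete, here’s a minimal, hypothetical sketch of a genetic algorithm (a toy of my own, not taken from any game): evolving a bit-string toward a fixed target. Notice that the fitness function is trivial to write here precisely because the problem is trivial – specifying an equivalent measure for ‘good in-game behaviour’ is the hard part described above.

```python
import random

# Toy target "behaviour" encoded as a bit-string (purely illustrative).
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]

def fitness(genome):
    # Count matching bits. In a real game, this measure would have to be
    # hand-built (and play-tested) for every evolved behaviour parameter.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Splice two parent genomes at a random point.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=30, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break  # perfect solution found
        # Keep the fittest half, breed mutated children to refill the pool.
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Because the fittest parents survive each generation unchanged, fitness never regresses – which is also why an AI evolved this way can quietly ratchet past the point of being fun to play against.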
Emergent gameplay is fun – but only some kinds of emergent gameplay. If an AI fails too stupidly, it’s merely irritating. By contrast, if an AI has adapted too well (to a problem space, or to the player’s behaviour), easily thrashing the player, it’s not fun either! Another wrinkle is the difficulty of defining ‘fun’ as a fitness function.
Instead, today’s game AI is built by manually encoding known algorithms. We create problem-solving heuristic systems that interact in, at best, a quasi-non-deterministic manner – for example, the famous A* search algorithm for bot pathfinding. A variety of types of ‘weak AI’ have been developed which enable relatively autonomous decision making in bots and other in-game agents.

Simple reflex agents act based on perception alone (no learning or previous knowledge required) – IF I see you, THEN I shoot!

Goal-based agents perceive their environment and act based on goals, with the capacity to learn from what’s effective.

Utility-based agents are similar to goal-based agents, but with the added capacity to evaluate states of the world for desirability.

Expert systems are simple decision trees based on pre-existing knowledge, sometimes capable of inference over formalised abstractions of existing knowledge. One example is Wolfram Alpha – an expert system with natural language processing and many curated databases (which helps to power Siri and Google Now).
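The difference between the first and third of those agent types can be sketched in a few lines. This is a hypothetical illustration of mine (the percept keys and scoring weights are invented, not from any engine): the reflex agent fires a hard-wired condition-action rule, while the utility-based agent scores each candidate action by how desirable the resulting state looks.

```python
def reflex_agent(percept):
    # Simple reflex: a fixed condition-action rule, no memory, no model.
    # "IF I see you, THEN I shoot!"
    if percept["enemy_visible"]:
        return "shoot"
    return "patrol"

def utility_agent(percept):
    # Utility-based: rate each action's outcome for desirability, pick the best.
    def utility(action):
        score = 0.0
        if action == "shoot" and percept["enemy_visible"]:
            score += percept["hit_chance"]      # shooting is as good as its odds
        if action == "retreat":
            score += 1.0 - percept["health"]    # retreating looks better when hurt
        if action == "patrol":
            score += 0.1                        # mild default preference
        return score
    return max(["shoot", "retreat", "patrol"], key=utility)

# The reflex agent always shoots on sight; the utility agent only does so
# when it isn't badly wounded.
print(reflex_agent({"enemy_visible": True}))                                   # shoot
print(utility_agent({"enemy_visible": True, "hit_chance": 0.6, "health": 0.9}))  # shoot
print(utility_agent({"enemy_visible": True, "hit_chance": 0.6, "health": 0.2}))  # retreat
```

The reflex agent is cheaper and perfectly predictable; the utility agent buys more believable behaviour at the cost of hand-tuning every weight – exactly the manual encoding described above.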
AI in Games
Early developments involved games with defined rulesets in simple universes – the worlds of boardgames like checkers and chess. Since the 1950s, chess programmes have been able to regularly beat a majority of non-expert players. Today’s home programmes regularly defeat grandmasters. However, in some games, like the ancient Chinese game Go, humans still have the upper hand.
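The core of those classic chess and checkers programmes is minimax search: assume your opponent plays the move that’s worst for you, and pick the move whose worst case is best. Here’s a minimal sketch over a tiny hand-built game tree (the tree and its scores are invented for illustration – real engines generate moves and evaluate millions of positions, which is partly why Go’s enormous branching factor kept it out of reach):

```python
def minimax(node, maximising):
    # Leaves are numeric evaluations from the maximising player's viewpoint.
    if isinstance(node, (int, float)):
        return node
    # Recurse: our best move assumes the opponent then picks their best reply.
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# Each inner list is a choice point; the numbers are terminal evaluations.
# Opponent replies: min(3,12)=3, min(2,4)=2, min(14,1)=1 -> we pick 3.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree, maximising=True))  # 3
```

Note the tempting 14 in the third branch is correctly avoided: reaching it depends on the opponent blundering into it rather than taking the 1.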
Classic experiments in videogame intelligence include Steve Grand’s ‘Creatures’ series, which simulated intelligence at multiple levels – genetic, classical (Pavlovian) conditioning and operant conditioning.