Monday 1 December 2008

Harvard Refs

I decided that since it's not going to be possible to write up notes on everything that I have read so far, I will just put up some of the sources that will go into my references and bibliography. 

DeLoura, M. 2000. Game Programming Gems. US: Charles River Media

DeLoura, M. 2001. Game Programming Gems 2. US: Charles River Media

DeLoura, M. 2006. Game Programming Gems 6. US: Charles River Media

DeLoura, M. 2008. Game Programming Gems 7. US: Charles River Media

Rabin, S. 2002. AI Game Programming Wisdom. US: Charles River Media

Rabin, S. 2004. AI Game Programming Wisdom 2. US: Charles River Media

GameDev. [Online]. Available from: http://www.gamedev.net [Accessed October 2008]

Gamasutra. [Online]. Available from: http://www.gamasutra.com [Accessed October 2008]

Smed, J. and Hakonen, H. 2006. Algorithms and Networking for Computer Games. England: Wiley

van der Vaart, E. and Verbrugge, R. Agent-Based Models for Animal Cognition: A Proposal and Prototype. [Online] Available from: http://www.aamas-conference.org/Proceedings/aamas08/proceedings/pdf/paper/AAMAS08_0555.pdf [Accessed September 2008]

Tu, X. and Terzopoulos, D. Artificial Fishes: Physics, Locomotion, Perception, Behaviour. Department of Computer Science, University of Toronto. [Online] Available from: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.33.8083 [Accessed September 2008]


Goldenstein, S., Large, E. and Metaxas, D. Dynamic Autonomous Agents: Game Applications. Center for Human Modelling and Simulation, Computer and Information Science Department, University of Pennsylvania. [Online] Available from: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.27.8837 [Accessed October 2008]

Go, J., Vu, T. and Kuffner, J.J. Autonomous Behaviours for Interactive Vehicle Animations. School of Computer Science, Carnegie Mellon University. [Online] Available from: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.61.1577 [Accessed October 2008]

Reynolds, C.W. Steering Behaviours For Autonomous Characters. [Online] Available from: http://www.red.com/cwr/. [Accessed October 2008]

Reynolds, C.W. Interaction with Groups of Autonomous Characters. [Online] Available from: http://www.red3d.com/cwr/papers/2000/pip.pdf. [Accessed September 2008]

Intrinsic Algorithm. 2008. [Online]. Available from: http://www.intrinsicalgorithm.com/. [Accessed November 2008]

A Primer on Artificial Intelligence Technologies. 2000. [Online] Available from: http://users.erols.com/jsaunders/papers/aitechniques.htm. [Accessed November 2008]

Amit’s A* Pages. 2008. [Online]. Available from: http://theory.stanford.edu/~amitp/GameProgramming/. [Accessed November 2008]

Game AI. 2007. [Online]. Available from: http://www.gameai.com/. [Accessed November 2008]

Generation5. 2004. [Online]. Available from: http://www.generation5.org/articles.asp?Action=List&Topic=Artificial+Life. [Accessed November 2008]

Bourg, D. and Seemann, G. 2004. AI for Game Developers. 1st ed. USA: O’Reilly.

Fairclough, C. et al. 2001. Research directions for AI in computer games. Department of Computer Science, Trinity College, University of Dublin. [Online] Available from: https://www.cs.tcd.ie/publications/tech-reports/reports.01/TCD-CS-2001-29.pdf [Accessed September 2008]

Diller, D. et al. Behaviour Modeling in Commercial Games. [Online] Available from: http://seriousgames.bbn.com/behaviorauthoring/BRIMS_Behavior_Authoring_in_Games_2004.pdf [Accessed September 2008]

Anderson, F. 2003. Playing Smart – Artificial Intelligence in Computer Games. [Online] Available from: http://old.zfxcon.info/zfxCON03/Proceedings/zfxCON03_EAndersonText.pdf [Accessed September 2008]

Laird, J.E. and van Lent, M. Human-Level AI’s Killer Application: Interactive Computer Games. [Online] Available from: http://www.aaai.org/ojs/index.php/aimagazine/article/viewFile/1558/1457 [Accessed October 2008]

Brooks, R.A. Intelligence Without Representation. [Online] Available from: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.12.1680 [Accessed October 2008]

Lecky-Thompson, G.W. 2008. AI and Artificial Life in Video Games. 1st ed. USA: Charles River Media.

Mark, D. 2008. Multi-Axial Dynamic Threshold Fuzzy Decision Algorithm. In: Rabin, S. AI Game Programming Wisdom 4. pp. 347-358.

Buckland, M. 2005. Programming Game AI by Example. USA: Wordware Publishing Inc. 


There's a lot more to come as well!

Tuesday 25 November 2008

Notes on an Eco-System

These notes were taken from the Wikipedia and BBC Bitesize websites.

An eco-system is composed of living elements sharing the environment with non-living elements.

 

Small scale (micro) eco-system, e.g. a pond

Medium scale (meso) eco-system, e.g. a forest

Large scale (biome) eco-system, e.g. a tropical rainforest

Sunlight is the main source of energy: it allows plants to live, which allows herbivores to live, which allows carnivores to live (the food chain).

 

The term eco-system was coined in 1930 by Roy Clapham to denote the physical and biological components of an environment considered in relation to each other as a unit.

 

Biomes are defined based on factors such as plant structures (such as trees, shrubs, and grasses), leaf types (such as broadleaf and needleleaf), plant spacing (forest, woodland, savanna), and climate.

 

Influencing Factors:

  • Elevation
  • Humidity
  • Drainage
  • Salinity of water
  • Characteristics of water bodies
  • Climate
  • Human influences such as grazing and hydric regimes

 

Introducing new elements, biotic (living) or abiotic (non-living), into an eco-system tends to have a disruptive effect. Sometimes this can lead to ecological collapse or "trophic cascading" and the death of many species belonging to the ecosystem in question.

 

Under this deterministic vision, the abstract notion of ecological health attempts to measure the robustness and recovery capacity for an ecosystem; i.e. how far the ecosystem is away from its steady state.

 

Often, however, ecosystems have the ability to rebound from a disruptive agent. The difference between collapse and a gentle rebound is determined by two factors: the toxicity of the introduced element and the resiliency of the original ecosystem.

 

Ecosystems are primarily governed by stochastic (chance) events.

An ecosystem results from the sum of myriad individual responses of organisms to stimuli from non-living and living elements in the environment. The more species an ecosystem contains, the greater the number of stimuli.

 

Mathematically it can be demonstrated that greater numbers of different interacting factors tend to dampen fluctuations in each of the individual factors. Given the great diversity among organisms on Earth, most of the time ecosystems have only changed very gradually, as some species disappear while others move in. Locally, sub-populations continuously go extinct, to be replaced later through dispersal of other sub-populations.

 

Stochastists do recognize that certain intrinsic regulating mechanisms occur in nature. Feedback and response mechanisms at the species level regulate population levels, most notably through territorial behaviour. Andrewartha and Birch suggest that territorial behaviour tends to keep populations at levels where food supply is not a limiting factor. Hence, stochastists see territorial behaviour as a regulatory mechanism at the species level but not at the ecosystem level.

 

 

An ecosystem is an environment containing a community of interdependent plants and animals. Food chains link animals to the plants/animals they eat and the animals that eat them.

 

Ecosystems are made up of both non-living (abiotic) and living (biotic) factors.

  • Abiotic factors are the elements of an ecosystem that are non-living. Nevertheless, they still have an effect on the ecosystem. Water, temperature, relief (height above sea level), soil type, fire, and nutrients are all examples of abiotic factors.
  • Biotic factors are the living elements of an ecosystem, i.e. plants and animals. All biotic factors require energy to survive. These living organisms form a community within an ecosystem.

 

The community within an ecosystem is linked together by food chains. Biotic factors become linked in a food chain when they eat one another. The start or bottom of a food chain is made up of producers, such as plants and algae. 

These producers are at the bottom as they do not eat other biotic factors for their energy. Instead of taking energy from food, producers get their energy by converting carbon dioxide and water into food using sunlight (photosynthesis).

Consumers eat other organisms to get their energy.

There are four types of consumer:

1. Herbivores are organisms that eat plant matter (producers) to gain energy.

2. Carnivores are organisms that eat meat to gain energy.

3. Omnivores are organisms that eat both plant (producer) and animal (consumer) matter to gain energy.

4. Decomposers are organisms that feed on the remains of dead plant and animal matter. They help to speed up the process of decay. They also assist in recycling nutrients back to producers in nutrient cycles.

 

In the same way as energy passes through the food system, so do toxins. A toxin is a poisonous substance. When it enters an organism it will be stored in the tissues of that organism.

  • Producers absorb toxins and store them.

  • The toxin will pass to a herbivore when it consumes the contaminated producer.

  • The toxin will pass to a carnivore when it consumes the contaminated herbivore.

  • The toxin will pass to a higher carnivore when it consumes the contaminated carnivore.

 

The more contaminated organisms a carnivore consumes, the more toxins it will amass. This process is called the bioaccumulation of toxins and can have undesirable effects on organisms near the top of a food chain.
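The chain described above can be sketched as a toy calculation (the function and all the numbers are entirely illustrative, not taken from the notes):

```python
def bioaccumulate(intake_per_producer, meals_at_each_level):
    """Toy bioaccumulation sketch: each consumer ingests the full toxin
    load of every contaminated organism it eats, so the amount stored
    grows at each step up the food chain.  Numbers are illustrative."""
    stored = intake_per_producer          # toxin stored by one producer
    levels = [stored]
    for meals in meals_at_each_level:     # herbivore, carnivore, top carnivore
        stored = stored * meals           # eat `meals` organisms from below
        levels.append(stored)
    return levels
```

So a top carnivore eating a few contaminated carnivores, each of which ate several contaminated herbivores, ends up storing far more toxin than any producer held.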

Sunday 23 November 2008

Notes from the article "Beyond A*: IDA* and Fringe Search" by Robert Kirk DeLisle

Beyond A*: IDA* and Fringe Search

“Graph search techniques are ubiquitous in game programming”. Whatever the genre, the basis of a game is inevitably formed by methods of graph search. The most popular genre of the moment, the FPS, is normally very dependent on pathfinding methods that allow the NPCs to move about in the game universe to perform various actions. The same applies to 2D/2.5D games that involve crossing a terrain or navigating a maze.

            Typical problems encountered within the pathfinding universe normally relate to trees (the start of the tree is considered to be the root). The root is then expanded, producing a number of new nodes (child nodes). The normal process followed in 2D pathfinding is most apparent when each child node represents a movement direction. As the path to the goal is extended, each of the child nodes is further expanded.

            When problems are formulated in this sense, a graph traversal problem with starting and goal states inside the graph, we open the door to a number of algorithms.

Of all the available pathfinding algorithms, A* has emerged the most popular within game AI.

A* grew out of the breadth-first search. In this search, all child nodes of the root node are expanded and explored before the algorithm progresses to the next level in the tree. If the goal node is not on the current tree level, the next level is accessed and all of its children are expanded and evaluated. “Open” and “Closed” lists were then added to this algorithm as a modification by Dijkstra. This modification provided two fundamental capabilities:

  • The cost of the path to the current node is stored with that node. The “Open” list can then be sorted by this cost, which allows for a “best first” search strategy. This comes in particularly handy when the cost between two nodes is not always the same, e.g. traversing swampland compared to dry land, as it biases the best path away from costly routes.
  • All of the explored and evaluated nodes are stored in a sort of catalogue, which stops the algorithm re-expanding nodes it has already dealt with. This drastically improved the breadth-first search, but further improvements came with the introduction of heuristics, which allowed an “informed search” strategy.

 

The cost of any individual node up to this stage is viewed as the cost from the start node to the current node and is normally referred to as g(). We can significantly improve an uninformed search of this sort if we include an estimate of the remaining cost between the current node and the goal node. This is the heuristic calculation, h(), which gives us another way to make a good estimate of the total cost of the path and strongly biases the search toward the goal. The total overall cost of any individual node is f() = g() + h(). h() should always be admissible, i.e. an underestimate of the cost to the goal from that node. If h() over-estimates, promising paths could be missed by the search or merely delayed, making the calculation more expensive. A* and Dijkstra's algorithm both follow the same general procedure; however, the associated cost in A* also takes into consideration the estimated cost to the goal (the heuristic cost).
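The f() = g() + h() search with Open and Closed lists can be sketched as a minimal A* (my own illustration, not the article's code; the callback names `neighbours`, `cost` and `h` are assumptions):

```python
import heapq

def a_star(start, goal, neighbours, cost, h):
    """Minimal A* sketch.  `neighbours(n)` yields adjacent nodes,
    `cost(a, b)` is the edge cost and `h(n)` is an admissible
    heuristic (an underestimate of the remaining cost to `goal`)."""
    open_list = [(h(start), start)]   # "Open" list, sorted by f() = g() + h()
    g = {start: 0}                    # cost of the best path found so far
    parent = {start: None}
    closed = set()                    # "Closed" list: already-expanded nodes
    while open_list:
        _, current = heapq.heappop(open_list)
        if current == goal:           # walk the parent links back to start
            path = []
            while current is not None:
                path.append(current)
                current = parent[current]
            return path[::-1]
        if current in closed:
            continue                  # stale duplicate entry, skip it
        closed.add(current)
        for n in neighbours(current):
            new_g = g[current] + cost(current, n)
            if n not in g or new_g < g[n]:
                g[n] = new_g
                parent[n] = current
                heapq.heappush(open_list, (new_g + h(n), n))
    return None                       # the goal is unreachable
```

On a unit-cost 2D grid with a Manhattan-distance h(), this finds a shortest path; the heap keeps the Open list in best-first order without re-sorting it.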

It isn’t really a surprise that the fundamental weakness of A* lies in its management of the two lists of explored and unexplored nodes. The open list must always remain sorted in order of cost, i.e. the top node has the lowest estimated cost to the goal. A* can place quite a high cost on the CPU as it constantly has to poll the two lists to see whether a node has been evaluated yet. Many optimisations have been made to A* to speed it up, but if the search space is huge the application can still suffer a serious loss of performance, and possibly even stop functioning, as the cost of maintaining the two lists grows with the size of the search space. Finding a path in a complex 3D environment can easily hinder A* with situations that a simple 2D pathfind would never produce.

 

A good example of the complexity of demanding situations is the Rubik’s Cube. Finding the quickest solution to a Rubik’s Cube using A* can easily exceed the available memory after just a couple of minutes. This is because, in a 3x3x3 cube, any individual node can have as many as 18 child nodes to search. Placing restrictions on the manipulations of the cube (for example, not turning the same side twice) would only limit a node’s children to 13 or so. After 8 turns there are already over 1 billion possible combinations, so it is wise to look for other methods that find a solution in a more timely and efficient manner.

 

Iterative Deepening A* (IDA*)

 

IDA* is an extension of A*. Because it does away with the two lists entirely, the algorithm can evaluate the same node several times. This problem can be catered for by carefully structuring how the nodes are evaluated and expanded, i.e. fixing the expansion order and stopping backtracking.

 

It can sometimes also be self-accommodating, as nodes expanded earlier have a lower value of g() than nodes expanded later, and a node should have the same value of h() no matter when it is evaluated. A maximum cost threshold is defined, and if a node exceeds this value it is not explored. If the goal has not been reached after expanding all nodes that fall under this value, the threshold is increased. The search must then be restarted from the original node, since no history is kept without the lists, and every node falling under the new threshold must be expanded again. This might seem counter-productive, but it costs less to re-expand a node than to store all expanded nodes in the lists and keep them maintained. “In addition, the frontier nodes, those at the edge of the search that were not explored before, will always be greater in number than the number of expanded nodes below the threshold.” Therefore the cost of re-evaluating a node is small compared to the cost of expanding the new frontier. The ultimate goal is the lowest possible overhead in memory, CPU time and the time for the search itself.
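The threshold-and-restart idea can be sketched as follows (again my own illustration under the same assumed callbacks, not the article's code):

```python
def ida_star(start, goal, neighbours, cost, h):
    """Minimal IDA* sketch: a depth-first search bounded by an f() cost
    threshold, restarted with a larger threshold until the goal is found.
    No open/closed lists are kept, so nodes may be re-expanded."""
    def search(path, g, threshold):
        node = path[-1]
        f = g + h(node)
        if f > threshold:
            return f                      # report the smallest f() overshoot
        if node == goal:
            return list(path)
        minimum = float('inf')
        for n in neighbours(node):
            if n in path:                 # stop trivial backtracking
                continue
            path.append(n)
            result = search(path, g + cost(node, n), threshold)
            path.pop()
            if isinstance(result, list):
                return result
            minimum = min(minimum, result)
        return minimum

    threshold = h(start)
    while True:
        result = search([start], 0, threshold)
        if isinstance(result, list):
            return result                 # found a path
        if result == float('inf'):
            return None                   # the goal is unreachable
        threshold = result                # raise the threshold and restart
```

Each restart raises the threshold only to the smallest f() that overshot, so the search re-expands cheap interior nodes rather than storing and sorting them.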

 

The Fringe Search Algorithm

In between the two algorithms mentioned above sits the Fringe Search algorithm. Like IDA*, it expands nodes under the guidance of a cost threshold, but in this case the frontier nodes are not lost. They are instead kept in two new lists called now and later.

The current node atop the now list is investigated. If the f() value exceeds the threshold it is moved to the later list. If the f() is lower, the child nodes are expanded and the current node is discarded. The child nodes are then placed on top of the now list.

The nodes are expanded much like the depth-first fashion of IDA*, but this algorithm keeps the lists in a semi-sorted state. Again like IDA*, if the goal isn’t found under the current threshold value, the threshold is increased. The later list then becomes the now list and the search continues using it. There is no sorting cost associated with maintaining the two lists, and the extra memory needed is less than A* uses on its own, as there is no need to store all the previously explored nodes. Nor is there the speed loss that IDA* suffers when it has to repeat searches from iteration to iteration.
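The now/later mechanics might be sketched like this (my own illustration; the tuple layout and the `neighbours`, `cost` and `h` callbacks are assumptions):

```python
def fringe_search(start, goal, neighbours, cost, h):
    """Minimal Fringe Search sketch: nodes are expanded under a cost
    threshold as in IDA*, but past-threshold frontier nodes are kept on
    a `later` list instead of being thrown away and re-generated."""
    threshold = h(start)
    now = [(start, 0, (start,))]              # (node, g, path so far)
    while now:
        later = []
        while now:
            node, g, path = now.pop(0)        # take the head of `now`
            if node == goal:
                return list(path)
            if g + h(node) > threshold:
                later.append((node, g, path)) # defer, don't discard
                continue
            children = [(n, g + cost(node, n), path + (n,))
                        for n in neighbours(node) if n not in path]
            now = children + now              # children go on top of `now`
        if not later:
            return None                       # the goal is unreachable
        threshold = min(g + h(node) for node, g, _ in later)
        now = later                           # `later` becomes the new `now`
    return None
```

Expansion is depth-first like IDA*, but when the threshold rises the deferred frontier is resumed directly instead of being rebuilt from the root.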


Conclusion

There is an abundance of pathfinding algorithms available today. The biggest considerations when picking one are the memory constraints and the time constraints (normally one has to be sacrificed for the other). A* is the most popular choice because of the degree to which it can be specialised and optimised for each application. IDA* and Fringe Search are very useful modifications of the traditional A* family of algorithms and could prove better than the usual approaches to pathfinding.


Notes from the article "Designing a Realistic and Unified Agent Sensing Model" by Steve Rabin and Michael Delp

This entry focuses on developing a better agent sensing model, more specifically, the agent's vision.

Designing a Realistic and Unified Agent-Sensing Model - notes

With increased visual realism in modern games, gamers expect to see game agents that can sense the game world with greater fidelity and subtlety. Traditionally, this has been done in a very simplistic manner through a combination of view distance, view cones and line-of-sight testing. A similar approach is taken with agent hearing: a simple test against some pre-determined cut-off distance to check whether a sound can be heard or not. Although these methods are simple and cheap ways to create agent sensing, they are rather transparent and can produce rather shallow gameplay. As an example, if the player is somewhere in front of the agent, the agent performs a discrete distance check to see if the player can be seen. This creates a blind zone beyond the maximum distance the agent can see. Many players are aware of this flaw and exploit it to their advantage by momentarily appearing within the agent's sight range, then running away, effectively luring the agent away from its position.

            This article enhances this basic sense/perception model by applying a handful of clever additions that make the agent's perception appear much more life-like and realistic.

Basic Vision Model

The core vision model used for agents in many modern games comprises three main techniques and calculations. These techniques, usually calculated in this order for efficiency, are: view distance, view cones and line of sight.

{Insert Picture}

 

The view distance calculation is a rather simple distance check. It is more efficient to compare squared distances rather than actual distances, which avoids a square root calculation. For example, if an agent standing at position (0,0,0) can see 50 metres away and the player is at (25,30,0), we can take the dot product of the agent-to-player vector with itself (its squared length) and compare this against the view distance squared. The squared length of the vector is 25^2 + 30^2 + 0^2 = 1525; compared against the squared view distance of 2500, the player can be seen by the agent as 2500 > 1525. Squared distances suffice because we are only after the relative distance.
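The squared-distance check works out as follows (a minimal sketch; the function name and tuple layout are my own):

```python
def within_view_distance(agent_pos, target_pos, view_distance):
    """Compare squared distance against squared view distance, avoiding
    a square root.  Positions are (x, y, z) tuples."""
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    dz = target_pos[2] - agent_pos[2]
    # dot product of the agent-to-target vector with itself = squared length
    return dx * dx + dy * dy + dz * dz <= view_distance * view_distance
```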

            After this, the view cone check can be done. The dot product is taken between the agent's normalised forward vector and the normalised vector that goes from the agent to the player. If the result is greater than 0, the player is within a 180-degree view cone of the agent; if the result is greater than 0.5, the player is within the agent's 120-degree view cone (cos 60 = 0.5). If only the 180-degree cone is needed, we can optimise by not normalising the vectors (which potentially eliminates two square root calculations).
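The cone test could look like this (a sketch; names are illustrative, and the cone angle is passed in rather than hard-coded):

```python
import math

def within_view_cone(agent_pos, agent_forward, target_pos, cone_degrees):
    """Dot product of the agent's normalised forward vector and the
    normalised agent-to-target vector, compared against cos(half-angle).
    For a 120-degree cone the comparison value is cos(60) = 0.5."""
    to_target = [t - a for a, t in zip(agent_pos, target_pos)]
    length = math.sqrt(sum(c * c for c in to_target))
    if length == 0:
        return True                      # target is on top of the agent
    to_target = [c / length for c in to_target]
    dot = sum(f * c for f, c in zip(agent_forward, to_target))
    return dot >= math.cos(math.radians(cone_degrees / 2))
```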

            The line-of-sight test is usually the final check to perform, and is the most demanding of the three. A ray is fired from the agent to the player's location. If the ray intersects an object before it reaches the player, the agent cannot see the player. If bounding boxes are used for the geometry and other in-game objects, this test can be optimised slightly.

It is these three core methods, outlined above, that lay the foundation for the agent’s vision model. These techniques will be expanded upon and improved to provide a better, more realistic model.

Augmenting the Vision Model Toolbox with Ellipses

The methods mentioned previously do not model realistic human or animal vision very well. View Cones in particular have several flaws.

  • Potentially the agent will not be able to see any other agents or entities that are right next to it.
  • The visual acuity is at its highest at the centre of the vision cone and degrades as the distance increases. Far vision is overestimated and near vision is underestimated.
  • To avoid giving an agent a very good far vision, designers and developers tend to make the view distance unrealistically short.

 

One way some developers have tried to get around this is by using multiple vision cones in an attempt to model something similar to human vision. The problem with this approach is that it can create large holes, or blind spots, in the agent's vision.

{insert Picture}

On the left, the thin cone represents the centre of focus which reaches far into the distance. The wider cone gives a much broader view but has a short range. The circle around the agent detects any entities adjacent or behind the agent (this is used to mimic the fact that humans often have the ability to sense when someone is behind them). Note the blind spots produced at the intersection points of the two cones.

 

A simple way to solve this is to represent the vision model using an ellipse. As can be seen in the picture, the ellipse offers a solution with the “degradation of visual acuity with distance without leaving holes in the vision”. The ellipse is started behind the agent (again to model a human's ability to sense people behind them).

Ellipse Implementation

To be able to produce an accurate vision model using an ellipse, it is very important that we understand its components.

{insert picture}

Major axis length is 2a, minor axis length is 2b. F1 and F2 (focal points) are at +/- c from the ellipse centre and c^2 = a^2 – b^2.

The middle figure displays an important fact about the ellipse: the distances from the two focal points to any point on the perimeter of the ellipse sum to 2a. To determine whether something is within the ellipse, the focal point positions must be found.

For human vision, one end of the ellipse is placed at the agent's eye and the view angle specified. A triangle is then formed from the view angle at the agent's eye and the ends of the minor axis at the centre of the ellipse. Given that theta is half the view angle and a is half the view distance, we can find the equation for c in terms of theta and a.

{insert equation}

            To find out whether an entity can be seen by the agent, we simply take the entity's distances from each of the focal points, add them together and check that the sum is less than the maximum viewing distance of 2a. All that is needed is two distance checks per entity in question per agent. Note that squared distances can't be used here, as the two distances have to be added together. This works for both 3D and 2D; if height becomes important in the game, 3D will be used.
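Putting the ellipse test together (a 2D sketch; the relation b = a·tan(theta) is my own reading of the missing equation, derived from the triangle described above, so treat it as an assumption, and it only holds for half-angles under 45 degrees):

```python
import math

def ellipse_can_see(eye, forward, view_angle_deg, view_distance, target):
    """Elliptical vision test (2D sketch).  The eye sits at one end of
    the major axis; a is half the view distance, theta is half the view
    angle, and (assumed) b = a * tan(theta), c = sqrt(a^2 - b^2).
    An entity is visible when the sum of its distances to the two focal
    points is at most 2a."""
    a = view_distance / 2.0
    theta = math.radians(view_angle_deg / 2.0)
    b = a * math.tan(theta)
    c = math.sqrt(max(a * a - b * b, 0.0))
    # Focal points lie on the major axis, +/- c from the ellipse centre,
    # which itself sits a units in front of the eye along `forward`.
    fx, fy = forward                      # assumed already normalised
    centre = (eye[0] + fx * a, eye[1] + fy * a)
    f1 = (centre[0] - fx * c, centre[1] - fy * c)
    f2 = (centre[0] + fx * c, centre[1] + fy * c)
    d1 = math.dist(target, f1)
    d2 = math.dist(target, f2)
    # Squared distances can't be used here: the two distances are summed.
    return d1 + d2 <= 2 * a
```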

            Using this ellipse to model the vision is easy to calculate and not really much more expensive than the view cone method.

Wednesday 19 November 2008

Notes from the Paper "Interaction With Groups Of Autonomous Characters" by Craig Reynolds

For my project I am going to have to find the best method of constructing a large group of autonomous agents able to respond to the user's interaction while maintaining a decent frame rate. The agents will need some sort of mental model that allows them to actively select which behavioural goal to pursue, and that controls the agents' steering.

Intro

Our world is a very active and populated place, so why aren't games? Games, in contrast, are usually very static or desolate (with a few exceptions). Normally only a few agents move autonomously and a few areas of the game environment move in predefined cycles. My project will have agents that can react to and coordinate their movements with respect to not only each other, but the environment and the player as well.

Three key concepts will be looked at.

  1. Behavioural Models: The programs that serve as the brains of the character.
  2. Spatial data structures: used to calculate locality queries to help with performance.
  3. The techniques used to drive the movement from the behavioural models.

Related Work

Generally, most games employ some sort of autonomous characters. Any in-game agents that are not directly influenced by the player must have some sort of autonomy. The historical trend has been from autonomous agents that just follow pre-built scripts, with little to no ability to react to a dynamic environment, toward agents with some level of reactive ability, and then toward agents that have the ability to learn. Many games now look to create autonomous characters that can react and respond to the user.

Behavioural Models

Autonomous agents need some sort of behavioural model or controlling program to drive them. In the game environment, objects can fall under several categories: static, player controlled, periodic or running a pre-defined animation, or having a degree of autonomy. Autonomous agents are driven by a controlling program that makes use of a behavioural model. The agent's behaviours and actions are mapped to the environment by the controlling program. The agent has two environments: the external environment, the game universe it resides in, and the internal environment, its memory and other mental/cognitive processes. The level of autonomy an agent has can vary widely.

A very simple physical representation of the agent can be used: a point mass that has a velocity and a local frame of reference. The local system can be updated every frame, making sure the mass and velocity are always aligned correctly. A finite steering force can be applied to move the agent, and the velocity can be capped to represent appropriate levels of friction or drag.
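A point-mass agent of this kind might be sketched as follows (my own illustration of the idea, with mass taken as 1 and all names assumed):

```python
import math

class SteeringAgent:
    """Point-mass agent sketch: position and velocity are updated each
    frame by a truncated steering force, with speed capped to stand in
    for friction/drag.  Vectors are (x, y) tuples; mass is taken as 1."""
    def __init__(self, pos, max_force=1.0, max_speed=5.0):
        self.pos = pos
        self.vel = (0.0, 0.0)
        self.max_force = max_force
        self.max_speed = max_speed

    @staticmethod
    def _truncate(v, limit):
        length = math.hypot(v[0], v[1])
        if length > limit:
            return (v[0] / length * limit, v[1] / length * limit)
        return v

    def update(self, steering_force, dt):
        # Cap the steering force, integrate velocity, cap the speed,
        # then integrate position.
        force = self._truncate(steering_force, self.max_force)
        self.vel = self._truncate(
            (self.vel[0] + force[0] * dt, self.vel[1] + force[1] * dt),
            self.max_speed)
        self.pos = (self.pos[0] + self.vel[0] * dt,
                    self.pos[1] + self.vel[1] * dt)
```

A behavioural model would then drive the character purely by choosing the steering force each frame.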

“The position, velocity, orientation and visual appearance of the character is driven by the behavioural model, primarily through control of the steering force”

An autonomous agent should have multiple, distinct behavioural states. It should exercise active control in each state and take appropriate actions based on its perception of its own local and immediate environment, tempered by its currently active behavioural state. E.g. a jet low on fuel responds differently when patrolling than when in a dogfight. Changes in an agent's states are triggered by internal and external environmental conditions, e.g. spotting an enemy switches the agent from wander to engage.

Panic within startled agents could be made contagious, so a normally calm agent would run away if several of its neighbours begin to run off. The urge to wander away could also be contagious, so each agent would be sensitive to the percentage of other agents that have wandered off. When a panicked agent runs, it runs a small distance before returning to its original spot (unless it is being chased). An annoyance value could be kept for each time an agent gets panicked: the higher the value, the further the agent runs. It would then decay while the agent is calm.
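One possible sketch of the contagious panic and decaying annoyance idea (entirely illustrative; the data layout and the 30% threshold are my own assumptions, not from the paper):

```python
def update_panic(agents, panic_fraction=0.3):
    """Contagious-panic sketch: a calm agent panics when at least
    `panic_fraction` of its neighbours are already panicking; a panicked
    agent's annoyance grows (so it runs further next time) and the value
    decays while the agent is calm.  `agents` is a list of dicts with
    'panicked', 'annoyance' and 'neighbours' keys."""
    for agent in agents:
        neighbours = agent['neighbours']
        if neighbours and not agent['panicked']:
            panicking = sum(1 for n in neighbours if n['panicked'])
            if panicking / len(neighbours) >= panic_fraction:
                agent['panicked'] = True          # panic spreads
        if agent['panicked']:
            agent['annoyance'] += 1               # each scare adds up
        else:
            agent['annoyance'] = max(0, agent['annoyance'] - 1)  # decay
```

The run distance for a panicked agent would then be scaled by its annoyance value.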

Reacting to the User

The autonomous agents should be able to react to the user in two different ways: discrete and continuous. The agents may want to keep their distance from the player, simply moving out of the way when the player is far away and approaching slowly; if the player is close and moving fast, the agent will run off.

Brief notes from the paper "Intelligence Without Representation" by Rodney Brooks

Intelligence Without Representation

The starting goal of AI was to enable a machine to replicate human-level intelligence.

Once people started to realise the magnitude and difficulty of this task, hopes began to diminish. Over the following two and a half decades there was very little progress in producing isolated aspects of intelligence.

Requirements for “creatures” or agents.

  • The agent must be able to cope and respond in a timely fashion in a dynamic environment.
  • Any changes in the properties of the game world that the agent inhabits should not lead to the collapse of the agent's behaviours; the agent should be able to gradually change its behaviour to match the environment.
  • The agent should be able to maintain multiple goals and be able to switch between these goals depending on its current circumstances.
  • The agent must have a purpose in the world.

Tuesday 18 November 2008

Notes from the paper "Behavioural Modeling in Commercial Games" by David E. Diller et al

Introduction

The computer games industry is now more and more concerned with developing sophisticated in-game characters.

 

Computer games developers aren't the only people looking to create immersive simulated worlds; training application developers (such as the military) also have a great interest in this area.

 

Developers want very realistic and robust behaviours that they can apply to their game agents. “As advanced high resolution graphics become commonplace, game developers are increasingly relying on “game AI” (i.e., behaviours of synthetic entities) to distinguish their game from competitors”.

 

The developers are very interested in creating entities that are more adaptive to new and unusual situations, less predictable and therefore harder to play against.

 

Game developers are mostly interested in the “illusion of intelligence” rather than making truly intelligent agents, i.e. the behaviours only have to make the agent appear intelligent. Paul Tozour, AI programmer for Deus Ex 2, says regarding games:

            “The whole point is to entertain the audience, so no matter what you do, you need to make sure the AI makes the game more fun. If a game’s AI doesn’t make the game a better experience, any notions of “intelligence” are irrelevant.”

 

Behaviour Generation – Components of behaviour in games

Although the game universe that the agents operate in is obviously much simpler than real life, the agents have to be able to successfully display capabilities such as: sensing their immediate surroundings (or the entire universe), reasoning about spatial layouts, planning and then executing appropriate actions, as well as communicating with other in-game agents or players. To do all this the agents will need to be able to perform a very wide-ranging set of functions.

 

Sensation and Perception

Sensory mechanisms that give game agents “sight” and/or “hearing” can range from simple to extremely complex. There is a huge difference when we compare the game universe that the player sees to that of the game agents. The game agent’s world is rather impoverished: the universe is normally stripped down and abstracted for navigation purposes. Collisions are detected by radiating some sort of check out from the agent’s current position. The sensory mechanisms normally only take into consideration objects that can actually affect the agent.

 

In the game Halo, an NPC’s ability to see the player is constrained: the NPC can only see the player if the player can see the NPC. The reason for this was that players often felt cheated if they were killed by enemies that they could not see.
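A rule like Halo’s could be implemented as a symmetric visibility check. The sketch below is my own guess at how this might look (the field-of-view cone test and all names are illustrative, not taken from Halo): the NPC is only allowed to “see” the player when the player could also see the NPC.

```python
# Hypothetical sketch of a symmetric visibility rule: the NPC only
# perceives the player if the player could also perceive the NPC,
# so the player is never killed from a blind spot.
import math

def in_view_cone(observer_pos, observer_dir, target_pos, fov_deg=120):
    # True if the target falls inside the observer's field-of-view cone.
    dx = target_pos[0] - observer_pos[0]
    dy = target_pos[1] - observer_pos[1]
    angle = math.degrees(math.atan2(dy, dx))
    # Smallest signed difference between the two headings.
    diff = (angle - observer_dir + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

def npc_sees_player(npc_pos, npc_dir, player_pos, player_dir):
    # Both cone tests must pass before the NPC may react.
    return (in_view_cone(npc_pos, npc_dir, player_pos)
            and in_view_cone(player_pos, player_dir, npc_pos))

# NPC behind the player, both facing +x: the player cannot see the NPC,
# so the NPC is not allowed to react either.
print(npc_sees_player((0, 0), 0, (5, 0), 0))  # False
```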

 

Decision Making

The most common representation for modelling decision making in game agents is the finite state machine (FSM). An agent’s behaviours are modelled as a finite set of states, with the transitions between them forming a directed graph. The character can only be in a single state at a time, and transitions are driven by events that happen in the game. FSMs are cheap to run, simple to use and easy to understand. Several extensions are available to improve on the simple FSM: fuzzy FSMs, hierarchical FSMs and probabilistic FSMs.
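A minimal FSM of the kind described above might look like this in Python (the guard states and events are my own hypothetical example): one current state, and a transition table keyed on (state, event) pairs.

```python
# Minimal finite state machine sketch: a single current state,
# transitions driven by game events. States/events are illustrative.

class FSM:
    def __init__(self, start, transitions):
        # transitions: {(state, event): next_state}
        self.state = start
        self.transitions = transitions

    def handle(self, event):
        # Only move if a transition is defined for (state, event);
        # otherwise stay in the current state.
        self.state = self.transitions.get((self.state, event), self.state)

guard = FSM("patrol", {
    ("patrol", "player_spotted"): "attack",
    ("attack", "player_lost"): "search",
    ("search", "player_spotted"): "attack",
    ("search", "timeout"): "patrol",
})
guard.handle("player_spotted")
print(guard.state)  # "attack"
```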

 

Smart terrains/environments are also seeing a good deal of use in games. The objects within a smart terrain contain all the information an agent needs to decide what to do with the object.
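The smart-object idea can be sketched as follows (my own illustration, with made-up names): the object advertises the behaviours it offers, so the agent itself can stay very simple.

```python
# Sketch of a "smart object": the object carries the knowledge of what
# an agent can do with it. All names here are illustrative.

class SmartObject:
    def __init__(self, name, actions):
        # actions: maps an agent need to the behaviour this object offers.
        self.name = name
        self.actions = actions

class SimpleAgent:
    def __init__(self, need):
        self.need = need

    def use(self, obj):
        # The agent just asks the object how to satisfy its need.
        return obj.actions.get(self.need, "ignore")

fridge = SmartObject("fridge", {"hunger": "open_and_eat"})
hungry = SimpleAgent("hunger")
print(hungry.use(fridge))  # "open_and_eat"
```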

 

Several games have recently adopted goal-directed reasoning techniques for character behaviour. Characters using this technique are given a set of goals and must choose for themselves which goal to pursue (how they pursue it is usually hard-coded into the game).

 

Smart Environments/Terrains

Models for agent behaviour are usually constructed from the point of view of an agent living in an environment of inactive objects. Some games do this the other way around, placing very simple agents in a complex world and using smart environments to produce interesting behaviours.

 

An agent’s current primary goal is visible to the player, who can then use this information to predict what the agent is likely to do next.

 

Conclusions

Games companies are increasingly developing and making use of intelligent virtual agents to help distinguish their games from the rest, so we are going to continue to see vast improvements in game AI. There is already a movement away from traditional simple FSM and scripting techniques towards more interesting and robust techniques that produce less predictable behaviour.

Notes from the paper "Playing Smart – Artificial Intelligence in Computer Games" by Eike F. Anderson

Almost all modern computer games utilise realistic, high-quality 3D animated graphics and 3D sound effects. These two aspects work very well together to give the illusion of realism. However, this impression of reality can be shattered if the behaviour of the Non-Player Characters (NPCs) is unnatural or just doesn’t “feel right”.

 

The AI in most computer games isn’t really the same as academic AI. It is more a mix of AI-related techniques mainly concerned with giving a believable appearance of intelligence. The AI doesn’t have to be incredibly complex, as very little is required to fool the human brain; a complex AI would actually be hidden and therefore hard for the player to spot. “The concept of “less is more” can therefore be applied to AI in computer games.” The biggest requirement for creating a captivating illusion of intelligence in games is managing and controlling perception, i.e. organising and evaluating data from an agent’s environment. This would mostly be sensory data but would also include communication between multiple agents within a game world.

 

“The decision cycle of those NPCs constantly executes three steps [van Lent et al 1999]:

1. perceive (accept information about the environment – sensor information)

2. think (evaluate perceived information & plan according actions)

3. act (execute the planned actions)”
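The quoted perceive–think–act cycle translates directly into a loop. This sketch is my own (the sensor, planner and actuator hooks are hypothetical placeholders):

```python
# The three-step NPC decision cycle sketched as a loop with
# pluggable sense/think/act hooks. Names are illustrative.

def decision_cycle(sense, think, act, ticks):
    for _ in range(ticks):
        percepts = sense()      # 1. perceive: gather sensor information
        plan = think(percepts)  # 2. think: evaluate percepts & plan actions
        act(plan)               # 3. act: execute the planned actions

log = []
decision_cycle(
    sense=lambda: {"enemy_near": True},
    think=lambda p: "flee" if p["enemy_near"] else "wander",
    act=lambda plan: log.append(plan),
    ticks=3,
)
print(log)  # ['flee', 'flee', 'flee']
```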

 

At first glance this appears to be a very simple approach that may be inappropriate for creating captivating and entertaining gameplay. However, most computer games do not really need NPCs that are extremely capable, possibly even more so than the players themselves, as games are meant to be fun to play. If a game is too challenging, causing the player to constantly lose, it will lose its attraction and will therefore not be played.

 

AI was used in early games to produce believable adversaries to challenge the player. Depending on the genre of the game (RPG, arcade, RTS), different techniques and methods were used to create the AI system, but the goal was always the same: create a believably lifelike adversary to give the player a challenging but fun and rewarding gameplay experience.

 

As games expanded and grew with the constantly evolving computers that ran them, incidentals (background agents that did not contribute to the main story line) started to make an appearance to help enrich the game universe: game agents that could move about in the background, living out their ‘lives’.

 

High-quality graphics no longer make a computer game stand out. This has led game developers to increase the complexity and believability of agents within the game universe. To seem lifelike and intelligent, an agent needs to exhibit natural-looking behaviour.

 

Since game agents have to work in real time, developers have to exclude many AI techniques so that the illusion is not spoilt by agents reacting slowly. One of the main issues affecting the real-time requirements of AI is that the AI has to be processed alongside the game’s graphics, physics, sound, input, networking and so on. Game AI was generally not viewed as hugely important and did not receive much attention, so not much processor time was reserved for it. With the advent of graphics accelerators in the mid-90s, AI started to receive more attention as greater amounts of graphics processing were moved onto dedicated graphics hardware.

The game agents in most modern computer games face the problems of:

1.      Path finding or path planning

2.      Decision making

3.      Steering or motion control

It is a combination of relatively simple methods applied to these problems that creates the illusion of artificial intelligence in computer games.
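As an example of how simple these methods can be, here is a sketch of the third problem, steering. This “seek” behaviour (my own minimal illustration) moves an agent toward a target at a capped speed each update:

```python
# Minimal "seek" steering sketch: step toward the target at a capped
# speed, never overshooting. Names and numbers are illustrative.
import math

def seek(pos, target, max_speed):
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= max_speed:
        return target  # close enough: snap to the target
    # Otherwise move max_speed units along the normalised direction.
    return (pos[0] + dx / dist * max_speed,
            pos[1] + dy / dist * max_speed)

pos = (0.0, 0.0)
for _ in range(4):
    pos = seek(pos, (10.0, 0.0), 3.0)
print(pos)  # (10.0, 0.0): reached after four 3-unit steps
```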

 

Both games and the AI techniques used in games have come a long way over the past two and a half decades. Usually the most favoured choices for AI techniques are the older, more established ones. These techniques have actually varied very little over time.

 “However over the past decade more and more novel ideas and methods for games AI have filtered into the game development process [Sweetser 2003] “

 

Agents – intelligent agents are NPCs that have the ability to make decisions and are built from a combination of AI techniques.

Monday 17 November 2008

Proposal Presentation Speech

Intro

 

Hi, my name is David Higgins and I’m going to be investigating whether it is possible for a combination of A.I. and A-Life techniques to not only be used to create an enduring eco-system, but also to calculate the boundaries that allow this system to remain in a state of equilibrium.

 

Issues

 

In a game situation, we could want an eco-system that we know is stable. On the other hand, we may want a system that, for whatever game related reason, is not stable.

 

I’m going to apply selected techniques to see if a believable eco-system, populated by animals, can be created, and then find out the limits that keep the system stable; outside those limits, it will die.

To give a basic example: keeping it simple, we could say that with between ‘X’ and ‘Y’ amounts of some variable the system will die out in a week. If that’s what the game requires, we can simply put these numbers in.

 

The techniques used will have to allow the game agents to successfully navigate the environment and fulfil their needs. I will therefore need to find out whether the techniques of fuzzy state machines, flocking, path finding, and chasing and evading can create a realistic eco-system, and what the most effective way to apply them would be.
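To show the flavour of one of these techniques, here is a sketch of grid-based chasing and evading (my own illustration, not from any particular source): the chaser steps toward its prey on each axis, and the evader mirrors that logic to step away.

```python
# Grid-based chase and evade sketch: one cell per axis per update.
# Animal names and positions are illustrative.

def chase(pos, target):
    # Move one cell along each axis toward the target.
    step = lambda a, b: a + (b > a) - (b < a)
    return (step(pos[0], target[0]), step(pos[1], target[1]))

def evade(pos, threat):
    # Mirror of chase: step one cell away from the threat on each axis.
    step = lambda a, b: a - (b > a) + (b < a)
    return (step(pos[0], threat[0]), step(pos[1], threat[1]))

wolf, rabbit = (0, 0), (4, 3)
wolf = chase(wolf, rabbit)    # wolf closes in: (1, 1)
rabbit = evade(rabbit, wolf)  # rabbit retreats: (5, 4)
print(wolf, rabbit)
```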

 

Boundaries for the system will also have to be found, and these could range from something simple, such as the amount of available food, to the more complex, such as how far the agents can see or how strong they are.

 

Since there will be a great many variables in the system, some having a bigger impact than others, I’m going to see if the boundaries can be applied to a “God” AI that will be able to alter a select few of these variables to help keep the system in check (if that’s what the game requires).

 

Why it’s Important

 

As advanced high-resolution graphics become commonplace, game developers are increasingly relying on game AI to distinguish their games from the rest. With advances in processing power and dedicated hardware, there are more resources available that can be used for AI.

 

The illusion of realism that game developers work so hard to create can be easily smashed by poor or rigid A.I.

If, for example, the player in an RPG can somehow introduce a disease that would ravage the environment, we could simply tell the developers the limits that allow the system to die out over a period of time, making it much more realistic.

 

How to solve

 

To answer the research question, an extensive literature review (which is currently ongoing) into AI and A-Life will be needed. Since there are many elements involved in a real-life eco-system, a brief investigation into this will also be done so that a real-life system can be stripped down to only the most important elements, allowing the construction of a basic virtual eco-system.

The system would start off small and its boundaries found. More and more elements would then be added, slowly but surely building it up into larger and larger systems.

 

Why it’s Interesting

 

Most games don’t employ a truly persistent environment; that is, the changes and influences made by the player are only temporary. This approach would allow me to create either a truly persistent or a non-persistent environment.

This approach could also allow the application to be used as a package, which could then be built into a game as an eco-system.

 

Significance

 

The significance of answering the research question would be to show that it is possible to greatly enhance the impressiveness of a game by creating a believable, lifelike eco-system over which you have great control.

WorkSheet 4

The final worksheet.
No comments on this sheet, it was sent off for comments but was never returned.

Introduction

What is the topic and aim of the project?

This project will investigate Artificial Intelligence (A.I.) together with several aspects of Artificial Life (A-Life), and the techniques that can be combined to create an effective, lifelike A.I. system.

Issues

What issues do you want to address?

Several issues will have to be addressed to be able to effectively fulfil the project aim and answer the research question.
Due to time constraints, it was decided to create an eco-system populated by animals. Animals were chosen over humans as it would be much easier to model their basic instincts and behaviours.

The techniques utilised will need to allow the agents to successfully navigate through the game universe in order to fulfil their needs, such as feeding, drinking, hunting and evading, exploring, mating, etc.

The A.I. technique will need to be able to represent the agents’ basic cognitive abilities and therefore must be able to determine the most appropriate course of action given an agent’s current situation, e.g. an agent that needs to feed should hunt down / locate a food source instead of “socialising and exploring” with other agents of the same species.
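The priority rule described here could be sketched as a simple threshold-based action selector (a minimal illustration with made-up needs and numbers, not the final technique): survival needs override social behaviour once they become pressing.

```python
# Sketch of need-driven action selection: survival needs take priority
# over socialising once they cross a threshold. Values are illustrative.

def choose_action(needs):
    # needs: dict of need name -> intensity in 0..1.
    if needs.get("hunger", 0) > 0.6:
        return "locate_food"
    if needs.get("thirst", 0) > 0.6:
        return "locate_water"
    # No pressing survival need: free to socialise and explore.
    return "socialise_and_explore"

print(choose_action({"hunger": 0.8, "thirst": 0.3}))  # "locate_food"
```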

These methods mentioned above need to be combined as efficiently and effectively as possible to see if the self-sustaining aspect of the eco-system can “emerge”.

If the system can become self-sustaining, boundaries that allow the system to stay in equilibrium will then have to be found and applied to the game universe.

These boundaries could then be applied to a "God" A.I. This A.I. technique would have knowledge of the entire universe and would be able to monitor the state of the eco-system. This would then allow any user or developer to receive an advance warning if the system starts to fall, or has fallen, out of balance. If desired, this “God” A.I. would be able to adjust the balance of the eco-system to help maintain equilibrium. This would be done indirectly so that it would not appear to be cheating by spawning agents out of nowhere. Instead, it would be able to adjust a few specific variables that have the greatest effect on the game universe, such as the rate at which vegetation grows, how often agents need to feed or how often they breed. This would allow the user to experiment and “play” within these boundaries without destroying the balance.
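A first sketch of this “God” A.I. might look like the following (the variable names, thresholds and multipliers are all my own placeholder assumptions): it watches a population count and nudges an indirect variable, here vegetation growth, rather than spawning agents directly.

```python
# Sketch of the "God" A.I.: monitor the eco-system and indirectly
# adjust a key variable. Thresholds and factors are illustrative.

def god_ai_adjust(population, target, growth_rate):
    # Population drifting low: grow vegetation faster so food becomes
    # plentiful, and warn the user/developer.
    if population < target * 0.8:
        return min(growth_rate * 1.5, 2.0), "warning: population low"
    # Population drifting high: slow vegetation growth instead.
    if population > target * 1.2:
        return max(growth_rate * 0.75, 0.1), "warning: population high"
    return growth_rate, "in equilibrium"

rate, status = god_ai_adjust(population=30, target=50, growth_rate=1.0)
print(rate, status)  # 1.5 warning: population low
```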

Research Question

What is your current research question?

“Is it possible for a combination of A.I. and A-Life techniques to not only be used to create an enduring eco-system, but also to calculate the boundaries that allow this system to remain in a state of equilibrium?”

Addressing the Question

How do you envisage yourself carrying out the project – a short exposition of the project.

 

To be able to address the question effectively, a great amount of research will have to be conducted into the many existing A.I. and A-Life techniques that would be capable of accurately mimicking animal behaviour and managing an eco-system. This research has already begun and, as such, the following techniques have been selected on the basis of relevance to the question stated above: Fuzzy Logic; Flocking; Path Finding; Chasing and Evading; Potential Function Based Movement. An in-depth and extensive literature review of each of these techniques will then have to be conducted to find the most suitable. This review will consist of looking at the general theory behind each of the techniques, the pros and cons in relation to the research question, and the specific ways that they can be, and have been, applied to game situations.

After the literature review has been concluded, the most relevant way to advance the investigation into the techniques listed above would be the creation of an application demonstrating the selected techniques in action. Since the application is going to model animal behaviour, it was decided that the most appropriate simulation to create would be that of a constant eco-system. This constant eco-system should be able to function and remain in a state of balance with or without user interaction. The agents within the eco-system will be almost fully autonomous and will go about their ‘daily lives’ just like real life animals. It is essential that the eco-system be able to reach a state of equilibrium so that:

  • Firstly, it can be observed whether the selected techniques can perform such a task.
  • Secondly, the boundaries can be established that will allow the eco-system to remain in this state of harmony. (Since a great number of variables will be used in each of the techniques, the variables with the strongest effect on the overall outcome of each technique will be identified, and these will be altered to establish the boundaries.)
  • And finally, it can be seen whether the selected techniques are capable of keeping the eco-system in a state of equilibrium after some form of ‘natural’ disaster and/or user interaction has upset the balance.

To be able to create this demonstration program, an appropriate engine must be sought out and reviewed.

 

The game universe that the eco-system will be demonstrated in will be made up of a terrain with randomly located lakes positioned throughout it. Around and near these lakes will be vegetation. This vegetation will give the herbivores a source of food, and the carnivores a hunting ground to prowl when hungry.

 

Methods must then be chosen to be able to employ the selected techniques. The best way to select the methods would be through meticulous analysis of the literature review.

 

Progress

What have you managed to do so far and how has this influenced your vision?

 

General background research into A.I. has been conducted. This research is still continuing and there is still much more to be done. This general research has shown common problems encountered in the field of A.I. along with a few suggestions on how to get around them. Methods and techniques that previous commercial games have implemented along with the problems, successes and failures that these games have encountered have also been identified. 

Research into low-level A-Life techniques such as Flocking, Chasing and Evading, Fuzzy Logic, Potential Functions and Path Finding has also begun. There are copious amounts of information available on each of these techniques; however, the part that is proving most difficult is finding information that is relevant to this project. There are no freely available code examples to examine, so all of these techniques are going to have to be coded from scratch.
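Since everything will be coded from scratch, here is a first sketch of one flocking rule, the cohesion rule from Reynolds’ boids model (the weight and positions are my own illustrative values): each agent steers a fraction of the way toward the centre of its flockmates.

```python
# Cohesion rule sketch from the boids flocking model: a boid is nudged
# toward the average position of the others. Values are illustrative.

def cohesion_step(positions, index, weight=0.1):
    others = [p for i, p in enumerate(positions) if i != index]
    # Centre of the rest of the flock.
    cx = sum(p[0] for p in others) / len(others)
    cy = sum(p[1] for p in others) / len(others)
    x, y = positions[index]
    # Move a fraction `weight` of the way toward that centre.
    return (x + (cx - x) * weight, y + (cy - y) * weight)

flock = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
print(cohesion_step(flock, 0))  # (0.5, 0.5): nudged toward (5, 5)
```

The full boids model would add separation and alignment rules in the same style, summing the three steering contributions each update.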