Chapter 3: Agent Objects


3.1 Introduction

Agents are software components that communicate with their peers by exchanging messages in a communication language [31]. While agents can be as simple as subroutines, usually they are bigger entities with some sort of persistent control and autonomy. What characterizes agents is their ability to communicate and cooperate with other agents.

Agents are at the heart of this software system, thus they lend their name to the program itself. The agent metaphor is employed at two levels: At a higher level, the servers Placer, Router, Broker and Database can be seen as large agents that communicate and cooperate over a network. At a lower level, inside the Router and Placer servers, small relatively simple agents work together to accomplish complex tasks.

These small agents are responsible for all the reasoning done by the Router and Placer servers, the large agents. The design philosophy is that competence should emerge out of the collective behaviour of a large number of relatively simple agents. These small agents are implemented as agent objects: the class Agent holds the basic inference routines, and derived classes add the particular knowledge needed for each application.

Before continuing with an explanation of the mechanisms of agent objects, it would be interesting to highlight the basic structures of cognitive systems. These structures are presented in more depth in Newell's book Unified Theories of Cognition [31], where the author reviews the foundation concepts of cognitive science and makes a case for unified theories by describing a candidate: an architecture for general cognition called Soar. As Guha and Lenat [34] define it, there are two paradigms for "software agents" today and one of them says that competence emerges from a large number of relatively simple agents integrated by some cleverly engineered architecture. In their opinion the architecture of choice for this paradigm is Soar.

3.2 Search and problem spaces

A system displays intelligent behaviour when it uses its knowledge to attain its goals. This processing basically takes the form of a search.

Search, in this case, is not another method or cognitive mechanism, but a fundamental process for intelligent behaviour [31]. It is not one method among many that might be used to attain ends but the most fundamental process of all.

Newell [31] makes two considerations about the special role of search. One he called the existential predicament of intelligent systems: "When attempting to act, an intelligent system is uncertain. Indeed, that is of the essence of having a problem - it is not known what to do next". The system must then search for a solution and that search tends to become combinatorial because new errors are committed before old ones are detected and resolved. A search will occur, whatever method is used to solve the problem, and the more problematic the situation the more extensive the search will be.

The second consideration he makes is called the court of last resort: "If an intelligent agent wishes to attain some end - to find some object X say - it must formulate this task in some way. It must arrange its activities in some way rationally related to finding X. It will do so - indeed it can only do so - on the basis of available knowledge. Even to seek more knowledge before acting is already to choose a particular way to formulate the task. The more problematical a situation, the less knowledge is available that says how to find X - that's what it means to be problematical". The formulation of the task that makes the least demands on specific knowledge is then:

"Formulation of last resort: If it's not known how to obtain X, then create a space to contain X and search that space for X".

A space, in this case, is the set of all the possible solutions for a problem. This formulation can always be used. A space can always be found that contains the desired solution, assuming that a solution does exist. The less knowledge that is available, the larger this space has to be, and the more difficult and expensive will be the search.

This formulation, and the corresponding method for working with it, is usually called generate and test. Newell writes that "All of the methods used in artificial intelligence are at bottom search methods, built up as further specifications on generate and test. Means-ends analysis, hill climbing, progressive deepening, constraint propagation - all are search methods of one kind or another. And all build on generate and test".
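
To make the idea concrete, the following is a minimal generate-and-test sketch; the space and the test are deliberately trivial and all names are illustrative:

  // A minimal generate-and-test sketch. The "space" is simply the integers
  // 0..99 and the test looks for a number whose square is 49; every more
  // elaborate search method specialises these two steps.
  #include <iostream>
  #include <optional>
  #include <vector>

  std::vector<int> generate_space() {
      std::vector<int> space;
      for (int i = 0; i < 100; ++i) space.push_back(i);   // create a space to contain X
      return space;
  }

  bool test(int candidate) { return candidate * candidate == 49; }   // is this X?

  int main() {
      std::optional<int> solution;
      for (int candidate : generate_space()) {   // search that space for X
          if (test(candidate)) { solution = candidate; break; }
      }
      if (solution) std::cout << "Found X = " << *solution << "\n";
      return 0;
  }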

It can be said that an intelligent system is always operating within a problem space. This space is created by an intelligent agent to search for a solution to whatever problem it is currently attending to; it is the agent's attempt to bound the problem so that it becomes workable. The agent adopts a problem space to solve a problem, and inside this problem space it can set up sub-spaces. Inside these spaces, the agent is located in some state and it applies a set of operators to find new states. The agent undertakes this search process until its goals are fulfilled.

3.2.1 The blocks world

As an example, consider the blocks world. In this world there is a robot arm and some blocks arranged on top of a table. The robot has a camera and is able to recognise each block and locate its position on the table. Each block is marked by a letter. The goal of the robotic system is to arrange the blocks in a certain way chosen by an external agent. As figure 3.1 shows, the entire problem can be seen as the blocks world problem space; inside this space each arrangement of the blocks on the table is represented as a state. To change the blocks' arrangement the robot can move one block at a time; these movements are represented by operators. The search proceeds through the states of the blocks world problem space, using operators to change from one state to another, until a desired state (the goal state) is reached.
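
As an illustration only, such a problem space might be represented roughly as follows; the types and names are hypothetical and do not come from the thesis implementation:

  #include <map>
  #include <string>

  // A state: for each block, the thing it currently rests on
  // ("Table" or the name of another block).
  using State = std::map<std::string, std::string>;

  // An operator: move one clear block onto a destination.
  struct MoveOp {
      std::string block;
      std::string destination;
  };

  // Applying an operator produces a new state; the search moves from state
  // to state until the goal configuration is reached.
  State apply(const State& s, const MoveOp& op) {
      State next = s;
      next[op.block] = op.destination;
      return next;
  }

  // The goal state of the example: A on B, B on C, C on the table.
  bool is_goal(const State& s) {
      return s.at("A") == "B" && s.at("B") == "C" && s.at("C") == "Table";
  }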

3.3 Problem search versus embedded knowledge

There are two kinds of searches going on in intelligent systems. One is the problem search, which is the search of the problem space just described. The other is the knowledge search, which is the search in the memory of the system for knowledge to guide the problem search. In general, intelligent systems engage in both knowledge searches and problem searches. This leads to a fundamental trade-off for all intelligent systems, the preparation vs. deliberation trade-off.

When forced to respond to some stimulus, a system can deliberate - engage in activities to analyse the situation and the possible responses. This leads to a search for an appropriate response in some space. Alternatively, the system can have various responses, or aspects of responses, already prepared and stored. To use such preparations the system must access memory, retrieve them, and adapt them as appropriate to each case. In general, each specific situation calls for some mix of deliberation and preparation; deliberation demands search, and preparation demands embedded knowledge.

Based on how much a system relies on search or on embedded knowledge, Newell proposes the graph in figure 3.2, which depicts a space with deliberation and preparedness as dimensions. Particular classes of systems can then be located at particular places on the graph:

It is common in artificial intelligence (AI) and cognitive science to talk about human intelligence and the intelligence of systems like Hitech in contrasting terms. Hitech is a brute-force searcher that seems to operate in an entirely different way from human intelligence. Figure 3.2 suggests otherwise. Certainly, different types of intelligent systems occupy different regions in the preparation-deliberation space, but systems like Hitech are to be analysed in the same way as expert systems and humans are. They occupy different regions, but the analysis is the same.

3.4 The problem-space computational model

Agent objects follow a structure similar to the problem-space computational model proposed by Newell [33]. Soar and agent objects are two possible implementations of the problem-space computational model. Both create problem spaces to search for a solution. Inside these spaces they have states, and they apply operators to find new states during the search process. They perform searches until they reach their goal. Soar is bigger and more sophisticated than agent objects, but the latter is better suited to an architecture where many simple agents work together.

3.4.1 Basics

The knowledge an agent uses to search a problem space can be divided into two types: task knowledge and search control knowledge [33]. Task knowledge consists of the initial state, the desired state (or some means to detect it) and the operators. Using just this knowledge a solution can be found by exhaustively searching the whole problem space until the goal state is found. This can be very inefficient. Search control knowledge specifies which operator to take from a given state, directing the search towards the desired goal. If a system has appropriate search control knowledge it will know which operator to take at each step, so it can reach the goal state without any search at all. If a system does not have enough search control knowledge, it will acquire additional knowledge through search to determine which operators to take. The blend of these two kinds of knowledge affects the efficiency of problem solving, but the correctness of the solution should depend only upon the task knowledge. In this way, task knowledge can be used to get an application up and running, and search control knowledge can be added later, gradually, to enhance performance.

Figure 3.3 shows the flowchart of a problem space. Before a problem space can begin work, the initial state and knowledge of the goal must be available. These are set by Formulate task. Once the problem space, goal and initial state are known, Select operator chooses an operator to apply to the current state and Apply operator applies it to the current state to produce a new one. Terminate task then checks whether the new state is the goal state or whether success is no longer possible. If it returns true, execution is halted, otherwise control goes back to Select operator.
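
As an outline, the loop of figure 3.3 might be sketched as follows; the function names mirror the boxes of the flowchart, and the signatures are assumptions rather than the actual implementation:

  struct Task     { /* goal description, initial state, available operators ... */ };
  struct State    { /* the current situation in the problem space ... */ };
  struct Operator { /* a transformation from one state to another ... */ };

  State    formulate_task(const Task& task);                    // set problem space, goal, initial state
  Operator select_operator(const Task& task, const State& s);   // propose and choose an operator
  State    apply_operator(const State& s, const Operator& op);  // produce a new state
  bool     terminate_task(const Task& task, const State& s);    // goal reached, or success impossible?

  State solve(const Task& task) {
      State current = formulate_task(task);
      while (!terminate_task(task, current)) {
          Operator op = select_operator(task, current);   // may need search control knowledge
          current = apply_operator(current, op);          // task knowledge
      }
      return current;
  }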

A problem space must have knowledge to implement the functions in figure 3.3. For example, it should know how to propose and choose operators. When a problem space does not have the knowledge to implement one of these functions, an impasse occurs: no further problem solving can be undertaken in this space until knowledge is generated to resolve the impasse. There are four possible types of impasse:

Impasses are solved by formulating a subgoal to acquire the missing knowledge. The subgoal is set up as a task to be solved by another problem space, or it can be delegated to another agent or agents (Soar uses only the first option). The system uses Formulate task (figure 3.3) to select and initialise the new problem space. The original problem space, where the impasse occurred, is responsible for supplying the knowledge to implement Formulate task. If this knowledge is unavailable, a new impasse occurs and a new subspace is created to search for this knowledge.
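
For illustration, the bookkeeping of an impasse might look roughly like the following sketch, assuming a simple goal-context stack; all names are hypothetical:

  #include <stack>
  #include <string>

  struct GoalContext {
      std::string goal;           // what this space is trying to achieve
      std::string problem_space;  // the space chosen to search for it
      // the state and operator slots would also live here
  };

  // When a space lacks the knowledge to continue, its impasse becomes the
  // goal of a new subspace pushed on top of it.
  void on_impasse(std::stack<GoalContext>& goals, const std::string& missing_knowledge) {
      GoalContext subgoal;
      subgoal.goal          = "resolve: " + missing_knowledge;   // e.g. "resolve: operator tie"
      subgoal.problem_space = "Selection";                       // chosen by Formulate task
      goals.push(subgoal);                                       // problem solving continues there
  }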

Impasses can occur in any problem space, forming a goal/subgoal hierarchy with spaces and subspaces in one or multiple agents. The topmost space represents the agent's primary goal.

3.4.2 A blocks world example

Figure 3.4 shows how problem spaces can be used to solve a problem in the blocks world. The robot arm is trying to arrange three blocks in a pile: A on top of B on top of C. In the figure, the squares represent the states of the problem space, the arrows represent the application of an operator and, above the arrows, are the names of the operators being applied. The top problem space is the Blocks world; its goal is the global goal. When processing begins, in the initial state S1, the operator Move C to Table is the only one proposed by Select operator (it is the only possible legal move) and it is then chosen. Apply operator then applies this operator to S1 to produce S2. Terminate task then decides that the goal has not been reached and the system goes back to Select operator. Now two operators are proposed for S2: Move B to Table and Move B to C. As the system does not have any knowledge to decide between the two, there is an operator tie impasse.

A subgoal is formulated to acquire knowledge to break the impasse. The system creates a new subspace, called Selection, to achieve the subgoal. Selection knows how to do a lookahead search to find which of two or more tied operators works: it evaluates each one to see whether it will lead to the goal state. The operator Move B to C is tried first (either operator could have been tried first): the system proposes and selects the operator Evaluate: Move B to C. As the Selection space has no directly available knowledge about how to apply this operator, an operator no-change impasse arises. A new subgoal is set up to break this new impasse, and the Search problem space is created. The Search space knows how to evaluate operators: it creates a copy of the Blocks world space, applies the relevant operator, in this case Move B to C, and continues problem solving until the result of applying the operator is known - in this case, until it knows whether the goal state can be reached or not. After applying Move B to C to S2' to produce the state S3', the operator Move A to B is proposed and applied (it is the only legal move) and the goal state is produced.

As the lookahead search shows that Move B to C leads to the desired state, the Selection space indicates that this is the best operator to choose in the context of the original problem. Back in the Blocks world space, Move B to C is selected over Move B to Table and the impasse is resolved. As the goal state has not yet been reached in the Blocks world problem space, the operator Move A to B is proposed and applied (again the only possible legal move). Terminate task detects that the state S4 is the goal state and execution is halted.

3.4.3 Selecting values

The knowledge to implement the functions of the problem-space computational model, figure 3.3, is expressed in the form of production rules. To create or change problem spaces, states or operators, these rules propose values and/or express preferences for selecting values among a list of proposed ones. Preferences are knowledge about the desirability of selecting a proposed value. To make a choice based on preference, the system applies knowledge to propose choices, then knowledge to produce preferences that order the choices. Once all available knowledge has been applied, the choice that was ranked above all others is chosen. There are nine possible kinds of preferences:
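
However the full set of preference kinds is defined, the decision procedure itself can be sketched roughly as follows; this is a deliberately simplified illustration using only three preference kinds, and the names are assumptions:

  #include <optional>
  #include <string>
  #include <vector>

  enum class Preference { ACCEPTABLE, BEST, REJECT };

  struct Proposal {
      std::string value;        // e.g. the name of a proposed operator
      Preference  preference;
  };

  // Apply the collected preferences and return the value ranked above all others.
  std::optional<std::string> decide(const std::vector<Proposal>& proposals) {
      std::optional<std::string> acceptable;
      int acceptable_count = 0;
      for (const auto& p : proposals) {
          if (p.preference == Preference::REJECT) continue;   // ruled out
          if (p.preference == Preference::BEST) return p.value;
          acceptable = p.value;
          ++acceptable_count;
      }
      if (acceptable_count == 1) return acceptable;    // a unique acceptable choice
      return std::nullopt;                             // tie or nothing proposed: an impasse
  }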

3.5 Distributed reasoning

The great majority of AI programs and models use centralized processes. Could distributed parallelism lend its flexibility and computational power to AI, or does intelligence require a central place where everything comes together?

Philosophers like Descartes believed in a central place or focal point in the brain where all the senses would come together. For some, that would be the point of interaction between mind and brain, the point where the ghost touches the machine. This concept of a place where conscious experience takes place, the Cartesian Theatre, would suggest a centralized model for intelligence, one that is in tune with our common sense.

But this particular region in the brain, the Cartesian Theatre, has not been found yet. Indeed, studies on the visual cortex have not found, so far, one particular region in the brain where all the information needed for visual awareness appears to come together [40].

A distributed intelligence model would not only solve some important "implementation" problems, like speed, but would also fit better with results from recent mind studies [41][42]. In Consciousness Explained, Dennett [43] proposes such a model, the Multiple Drafts model. It asserts that all varieties of perception - indeed all varieties of thought or mental activity - are accomplished in the brain by parallel multitrack processes of interpretation and elaboration of sensory inputs. Information entering the brain is continually being edited.

3.5.1 The hive mind

In nature, the human brain is not the only example of a distributed reasoning system; simpler systems exist, and a look at one of them, a swarm of bees, can be instructive.

When bees need to relocate a colony, they have a search problem to solve, and they use a very interesting distributed mechanism [37]. They form a swarm and pour themselves out into the open. During these events the queen bee is not in command; she merely follows the flow of events. Some scout bees are sent ahead of the swarm to check possible hive locations, and they report back by dancing near the swarm's surface. The more enthusiastically a scout dances, the more other bees are compelled to visit the reported site. The bees inspect those sites whose scouts' dances they liked most.

When each of these bees returns from its inspection, it supports the site by joining the scouts dancing for that site. That induces more followers to check out the leading sites and, when they return, to join the performance of their choice. Few bees, apart from the scouts, visit more than one site. Gradually one large finale comes to dominate the dance-off. The biggest crowd wins.

Kelly [37] writes: "It's an election hall of idiots, for idiots, and by idiots, and it works marvellously". The swarm, like an ant colony, behaves more like an individual than a group, but the bees are probably unaware of the swarm. They have a set of simple individual behaviours that add up to very complex group behaviours. The whole is far smarter than its parts.

3.5.2 Defining behaviour systems

In searching for a new site a swarm is acting as a behaviour-oriented system. A behaviour approach starts from the viewpoint of behaviour as the fundamental unit of analysis. A behaviour is a regularity in the interaction dynamics between an agent and its environment [45]. For example, it may be observed that an agent maintains a certain distance from a wall. As long as this regularity holds, observers may say that there is an obstacle avoidance behaviour.

To realize a behaviour, there must be some sort of mechanism in the agent. This mechanism should be implemented using different components and a control program. The observed behaviours are due to the interaction between the operation of the mechanism and the environment the agent is experiencing. A behaviour system is then defined as a collection of components responsible for realising a particular behaviour.

Using this model, small robots can be built that show quite interesting behaviours while using modest hardware and software resources. Among these robots there is a group of small, six-legged ones called insect-like robots.

3.5.3 Insect-like robots

Genghis is a cockroach-like robot the size of a football, built by Rodney Brooks at MIT (the Massachusetts Institute of Technology) [37]. Genghis has six legs but no central brain. Its 12 motors and 21 sensors are distributed in a network without a centralized controller. Yet the interaction of these "muscles" and sensors achieves complex, lifelike behaviour. Each of the robot's legs works independently of the others; each has its own microprocessor to control its actions. Other microprocessors coordinate communication between the legs. The walking process is a group activity involving all the legs. Entomologists say that this is the way real cockroaches cope - they have neurons in their legs to do the thinking.

Walking in Genghis emerges out of the collective behaviour of its legs. Two motors in each leg lift, or not, depending on what the other legs around them are doing. If the motors activate in the correct order, walking happens. Walking is not governed by any particular processor; there is no smart central controller. Brooks called this "bottom-up control" [38][39]. If one leg is snipped off, the robot shifts gaits with the other five without losing a stride - an immediate self-reorganization.

Genghis's legs have a few simple behaviours, and each leg independently knows what to do under various circumstances. For instance, two basic behaviours can be thought of as "If I am a leg and I'm up, put myself down" and "If I am a leg and another leg just went forward, I should go back a little". These processes exist independently, run at all times and fire whenever their sensory preconditions are true. To create walking, then, there just needs to be a sequence of leg lifts. As soon as a leg is raised it automatically swings itself forward, and also down. But the act of swinging forward triggers all the other legs to move back a little. Since those legs are touching the floor, Genghis moves forward.
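
For illustration, these two behaviours could be written as simple condition/action rules; the names are hypothetical, and Genghis's real controller is of course not this C++ code:

  struct Leg {
      bool up = false;
      bool just_swung_forward = false;

      void put_down()           { up = false; }
      void move_back_a_little() { /* drive the motor backwards slightly */ }
  };

  // "If I am a leg and I'm up, put myself down."
  void behaviour_put_down(Leg& me) {
      if (me.up) me.put_down();
  }

  // "If I am a leg and another leg just went forward, I should go back a little."
  void behaviour_step_back(Leg& me, const Leg& other) {
      if (other.just_swung_forward) me.move_back_a_little();
  }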

Once Genghis can walk over a flat surface, other behaviours can be added to improve its walking, such as climbing over a small obstacle. These new behaviours are added on top of the existing ones. The behaviours are organized following the subsumption architecture [46], shown in figure 3.6. The subsumption architecture divides the control architecture into task-achieving modules, or behaviours. Instead of dividing the problem into sequential functional modules, the problem is sliced into layers of behaviours (fig. 3.6), each layer forming a competence level of a control system [47]. The main idea is that layers corresponding to different levels of competence can be built and added on top of each other, each new layer adding a new level of overall competence to the system.

The behaviours in a lower layer are unaware of any behaviour belonging to a higher layer. When a behaviour in a higher layer wishes to take control, it can subsume the role of the lower levels, inhibiting them (the inhibition line in figure 3.6). New behaviours overpower others, and thus get expressed, only in those situations where their action will improve performance or initiate a newly added response; otherwise the old behaviours do business as usual, which means they compete to get expressed. This system is easily extensible, as new behaviours just add functionality to an already working system.
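
A minimal sketch of this layered arbitration, assuming a simple "highest active layer wins" scheme (the names are hypothetical):

  #include <vector>

  struct Command { /* motor outputs, etc. */ };

  class Layer {
  public:
      virtual ~Layer() = default;
      virtual bool wants_control() const = 0;   // does this competence apply right now?
      virtual Command act() = 0;
  };

  // Layers are ordered from the lowest competence (e.g. walking) to the highest
  // (e.g. climbing over an obstacle). The highest layer that wants control
  // subsumes, i.e. inhibits, every layer below it.
  Command arbitrate(const std::vector<Layer*>& layers_low_to_high) {
      for (auto it = layers_low_to_high.rbegin(); it != layers_low_to_high.rend(); ++it)
          if ((*it)->wants_control())
              return (*it)->act();
      return Command{};                          // no behaviour applies
  }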

Genghis is an example of how an artificial behaviour system can work; some of its ideas will be explored in the implementation of the distributed behaviour of agent objects.

3.6 Implementation of Agent Objects

The agent objects' implementation details have not been discussed up to now; the problem space has been viewed as a knowledge-level system. States of the problem space were described according to their knowledge contents, and operators according to how they change the content of a state. No particular representation was used for the knowledge.

Figure 3.6 shows the architecture of a C++ object derived from the class Agent; in this case the object is controlling a robot in a blocks world. The Goals list contains the current hierarchy of problem spaces, organized as a goal context stack. Each goal context contains a goal, the problem space being used to search for that goal, the state slot of the problem space, and the operator currently being applied. The Preference list contains values proposed by the rules with their respective preferences. Internal variables are any kind of variables or objects held by a particular derived agent. The in and out triangles represent accesses to other objects or variables outside the agent object.
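
The structure just described can be summed up as a class skeleton. The following is a hedged reconstruction for illustration only; the member names and types are assumptions, not the actual thesis code:

  #include <list>

  class Slot;        // a list whose first element is an identifier, followed by values
  class Preference;  // a proposed value together with its desirability

  class Agent {
  public:
      virtual ~Agent() = default;
      void run();                          // repeat decision cycles (see section 3.6.1)
  protected:
      virtual void expert() = 0;           // derived classes define their production rules here
      std::list<Slot*>       goals;        // the goal context stack: problem space, state, operator
      std::list<Preference*> preferences;  // values proposed by the rules, with their preferences
      // the internal variables of a derived agent live in the derived class itself
  };

  class BlocksWorldAgent : public Agent {
  protected:
      void expert() override {
          // production rules testing the Goals list, the internal variables and
          // the outside world go here (see the RULE example below)
      }
  };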

Task and search control knowledge are encoded as production rules in permanent memory. These rules test the state of the Goals list, the internal variables and the outside world; when they fire, they can act on the internal variables or on the outside world, or produce preferences for changing the elements of the Goals list. The production conditions are C++ if statements; they can contain any kind of expression allowed by C++, including function calls. Matching routines are not supplied by the class Agent, since there are no facilities to match templates against working memory elements, as there are in Soar or OPS5 [43]. In the rules' condition sections, objects derived from the class Agent have to perform the comparisons themselves or rely on the object being tested to supply some form of matching method. For instance, list objects have methods to match templates against their contents.

Objects on the Goals list are represented as slots. A slot is a list where the first element is the slot's identifier and the others are the slot's values. All slots have at least an identifier, and they can have any number of values. Each element in the Goals list is a list representing one goal and a problem space. The last list represents the top goal:

 ( ( (NAME GOAL_11)
     (PROBLEM_SPACE ( (NAME BLOCKS) ... ))
     (STATE ( (NAME FIRST) (TABLE OK) ... ))
     (OPERATOR ( (NAME MOVETO) (POS_X 5) (POS_Y 7) ... ))
   )

   ( (NAME GOAL_10) ...
   )
   ...
 )

When there is an impasse, a new goal is automatically created in the Goals list with data about the impasse. The Goals list should not be directly modified by the rules' actions; rules should instead propose values or add preferences to the Preference list. The results of these preference judgements then determine the changes to the Goals list. However, enforcing this prohibition in C++ would be very difficult and costly; if users want, they can override this rule.

The class Agent holds the basic inference routines, but the derived classes add the knowledge, in the form of rules, specific to a particular application. They do that using the virtual method expert(). Derived classes redefine this method and define their rules in it. The class Agent then uses this method to apply the rules; because it is a virtual method, the class needs no prior knowledge about the rules themselves. The following is an example of a simple rule:

 RULE( "Cont*propose*operator*createColumns",
   isGoal(CREATE_COLUMNS, G1) &&
   isState(G1, CREATE_COLUMNS_1)
 ) {
   SET_SLOT( G1, OPERATOR, 
             new_LIST(new_LIST(NAME, CREATE), new_LIST(POS, 3, 2)),
             ACCEPTABLE);
 }
This rule tests whether there is a goal called CREATE_COLUMNS and whether this goal has a state called CREATE_COLUMNS_1. If so, it proposes a new value for the operator slot of the goal; this operator is named CREATE and has a position slot named POS with the two values 3 and 2. The preference for this value is ACCEPTABLE. The rule is named Cont*propose*operator*createColumns; this name identifies the rule when the debug option is in use. Rule names follow an optional code shown in table 3.1 (where PSCM stands for Problem Space Computational Model).

Table 3.1: Rule name code

A rule's name is built from four parts: [context][PSCM function][PSCM type][name(s)]

  context        the object that owns the rule
  PSCM function  proposal, comparison, selection, refinement, evaluation or testing
  PSCM type      goal, problem-space, state or operator
  name(s)        the name of the PSCM object the production is about, or some other
                 descriptive term for the object being augmented

3.6.1 Operation

Agents operate by repeatedly running decision cycles, as illustrated in figure 3.7. In each decision cycle an agent object decides how to change the Goals list, either by changing a problem space, state or operator, or by creating a new goal in response to an impasse. A decision cycle has two parts: an elaboration phase and a decision phase. The elaboration phase consists of a number of elaboration cycles. In each elaboration cycle the condition parts of all the rules are tested, and any rule whose condition is true fires immediately. When a rule fires it can change internal variables or something outside, propose a new value for the Preference list, or add preferences for a proposed value. After all rules have been tested another elaboration cycle begins, because the changes made by the first wave of firing rules can trigger other rules to fire as well. This goes on until no rule fires during a cycle: the system has reached quiescence.

The decision phase begins after quiescence. The agent computes all preferences for the values in the Preference list and decides how to change the Goals list. If the preferences do not specify what to do, an impasse occurs and a new goal is set up to try to resolve it. This new subgoal can use a new problem space to try to resolve the impasse, or pass the problem on to be solved by another agent or agents.
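
The cycle can be summed up in code. The following is a hedged sketch, not the class Agent's actual routines; the helper functions are hypothetical stand-ins for the inference machinery described above:

  bool elaboration_cycle();        // test every rule, fire those whose conditions hold; true if any fired
  bool decide_from_preferences();  // weigh the Preference list and change the Goals list; false on impasse
  void create_impasse_subgoal();   // set up a new goal/problem space, or delegate to another agent
  bool top_goal_reached();

  void run_agent() {
      while (!top_goal_reached()) {
          // Elaboration phase: elaboration cycles repeat until quiescence,
          // because each wave of firings may enable further rules.
          while (elaboration_cycle()) { }
          // Decision phase: only after quiescence is the Goals list changed.
          if (!decide_from_preferences())
              create_impasse_subgoal();
      }
  }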

Agent objects and Soar are different from other common cognitive architectures or AI shells in that they don't make arbitrary decisions about what to do next. There are no built-in conflict resolution mechanisms or other schemes to resolve deadlocks when the knowledge is insufficient or conflicting. Instead, decisions are made through the application of task and search control knowledge. The system's behaviour is controlled entirely by the knowledge stored in an agent's rules, supplied by the system's programmers, not by built-in assumptions. When knowledge is insufficient, the system searches the problem space to generate more knowledge about how to proceed, so that any decision taken is not arbitrary but is based on the characteristics of the task being solved.

3.6.2 Distributed behaviour

Agent objects are well suited to distributed processing: they are small in comparison to Soar or other cognitive systems, and they are C++ objects, which means they can use C libraries to communicate over a network and can be embedded in distributed applications. Another advantage of C++ is that its object-oriented design helps isolate and encapsulate software, which is very useful when creating independent agents.

The class Agent could use many different schemes of distributed reasoning, but a model based on behaviours has been implemented. The agent objects have a "personality" and an aim in life. Their personality is determined by the set of behaviours they can perform, similar to the insect-like autonomous robots, discussed in section 3.5.3.

Changes in behaviour can be dictated by an object's perception of changes in its environment; this is similar to the mechanisms present in the interaction of individuals, such as bees. Or they can be directly commanded by another agent, similar to the closer interactions (inside individuals) present in organs or cells, where substances such as hormones are intentionally produced by one cell to change the way another group of cells behaves.

Environmentally triggered behaviours are implemented using the rules. They test an external input point and, from it, determine which responses are appropriate. Using insect-like robots as an example, this is the way the walking behaviour is implemented: legs test external inputs to detect whether they are touching the ground or whether the other legs around them are moving. Now suppose that a camera is added to this insect-like robot, and that this camera is able to recognise images of rubbish. The idea is to have the robot roaming around until it stops on a piece of rubbish; at that point the two front legs should grab the rubbish and put it on top of the robot. The front legs cannot recognise rubbish, since this is the job of the computer attached to the camera. The way to change their behaviour is for the camera to act directly on them and completely change their set of behaviours.

The same result could be achieved with the same mechanism as before but, since a whole new set of behaviours will be active, there is a more efficient way of doing it. Rules can be arranged in groups, and these groups can be activated and deactivated. Rules in inactive groups are not tested, which improves performance. The virtual expert() function for an agent with two groups of rules would be:

void expert() {
  GROUP (WALKING)
    RULE( ... )
    RULE( ... )
    // ... all rules concerning the walking behaviour
  ENDGROUP;
  GROUP (GRABBING)
    RULE( ... )
    RULE( ... )
    // ... all rules concerning the grabbing-rubbish behaviour
  ENDGROUP;
}
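
For illustration, switching groups from another agent might look like the following sketch; activateGroup and deactivateGroup are assumed helpers, not necessarily the actual class Agent interface:

  enum Group { WALKING, GRABBING };   // group identifiers, as in the expert() example above

  struct LegAgent {
      void activateGroup(Group g);
      void deactivateGroup(Group g);
  };

  // When the camera recognises rubbish it commands the front-leg agents directly,
  // swapping their whole set of active behaviours.
  void on_rubbish_detected(LegAgent& front_leg) {
      front_leg.deactivateGroup(WALKING);   // walking rules are no longer tested
      front_leg.activateGroup(GRABBING);    // grabbing rules now drive the leg
  }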

3.7 Why use the class Agent reasoning model?

Why use this model of reasoning? The literature on placement and routing systems can be divided into two groups of applications: one uses intensive search algorithms, such as Lee's algorithm [48], simulated annealing [49] or genetic algorithms [8]; the other uses expert systems [1][2] or other knowledge-based approaches [50]. If these two groups are placed on the graph of figure 3.2, they fall at two points far apart. The agent object approach, when added to the graph, lies halfway along the line connecting the two groups.

The advantage of agent objects is that they are flexible enough to "slide" along the line connecting the two other groups. Because this domain is search intensive, it is impossible to have rules that account for every step of the design process. Agent objects allow this intensive search to take place, sliding closer to the search-group solutions, but, whenever knowledge is available, they allow embedded knowledge to reduce the search, sliding closer to the expert-group solutions.

Another advantage is that the quality of a solution can be tailored to the amount and kind of resources available. Quality improves whenever one can afford more searching or more knowledge about an application is available. A lack of either can be compensated for by more of the other.

