Agents are at the heart of this software system, thus they lend their name to the program itself. The agent metaphor is employed at two levels: At a higher level, the servers Placer, Router, Broker and Database can be seen as large agents that communicate and cooperate over a network. At a lower level, inside the Router and Placer servers, small relatively simple agents work together to accomplish complex tasks.
These small agents are responsible for all the reasoning done by the Router and Placer servers, the large agents. The design philosophy is that competence should emerge out of the collective behaviour of a large number of relatively simple agents. These small agents are implemented as agent objects: the class Agent holds the basic inference routines, and the derived classes add the particular knowledge needed for a particular application.
Before continuing with an explanation of the mechanisms of agent objects, it is worth highlighting the basic structures of cognitive systems. These structures are presented in more depth in Newell's book Unified Theories of Cognition [31], where the author reviews the foundational concepts of cognitive science and makes a case for unified theories by describing a candidate: an architecture for general cognition called Soar. As Guha and Lenat [34] put it, there are two paradigms for "software agents" today, one of which holds that competence emerges from a large number of relatively simple agents integrated by some cleverly engineered architecture. In their opinion, the architecture of choice for this paradigm is Soar.
Search, in this case, is not just another method or cognitive mechanism, but a fundamental process for intelligent behaviour [31]. It is not one method among many that might be used to attain ends but the most fundamental process of all.
Newell [31] makes two observations about the special role of search. The first he calls the existential predicament of intelligent systems: "When attempting to act, an intelligent system is uncertain. Indeed, that is of the essence of having a problem - it is not known what to do next". The system must then search for a solution, and that search tends to become combinatorial because new errors are committed before old ones are detected and resolved. A search will occur whatever method is used to solve the problem, and the more problematic the situation, the more extensive the search will be.
The second observation he makes is called the court of last resort: "If an intelligent agent wishes to attain some end - to find some object X say - it must formulate this task in some way. It must arrange its activities in some way rationally related to finding X. It will do so - indeed it can only do so - on the basis of available knowledge. Even to seek more knowledge before acting is already to choose a particular way to formulate the task. The more problematical a situation, the less knowledge is available that says how to find X - that's what it means to be problematical". The formulation of the task that makes the least demands on specific knowledge is then:
"Formulation of last resort: If it's not known how to obtain X, then create a space to contain X and search that space for X".
A space, in this case, is the set of all possible solutions to a problem. This formulation can always be used: a space can always be found that contains the desired solution, assuming that a solution does exist. The less knowledge is available, the larger this space has to be, and the more difficult and expensive the search will be.
This formulation, and the corresponding method for working with it, is usually called generate and test. Newell writes that "All of the methods used in artificial intelligence are at bottom search methods, built up as further specifications on generate and test. Means-ends analysis, hill climbing, progressive deepening, constraint propagation - all are search methods of one kind or another. And all build on generate and test".
It can be said that an intelligent system is always operating within a problem space. This space is created by an intelligent agent to search for a solution to whatever problem it is currently attending to; it is the agent's attempt to bound the problem so that it becomes workable. The agent adopts a problem space to solve a problem, and inside this problem space it can set up sub-spaces. Inside these spaces, the agent is located in some state and applies a set of operators to reach new states. The agent carries on this search process until its goals are fulfilled.
When forced to respond to some stimulus, a system can deliberate - engage in activities to analyse the situation and the possible responses. This leads to a search for an appropriate response in some space. Alternatively, the system can have various responses, or aspects of responses, already prepared and stored. To use such preparations the system must access memory, retrieve them, and adapt them as appropriate to each case. In general, each specific situation calls for some mix of deliberation and preparation. Deliberation demands search; preparation demands embedded knowledge.
Based on how much a system relies on search or on embedded knowledge, Newell proposes the graph in figure 3.2, which depicts a space with deliberation and preparedness as dimensions. Particular classes of systems can then be located at particular places on this graph.
Figure 3.3 shows the flowchart of a problem space. Before a problem space can begin work, the initial state and knowledge of the goal must be available. These are set by Formulate task. Once the problem space, goal and initial state are known, Select operator chooses an operator to apply to the current state and Apply operator applies it to the current state to produce a new one. Terminate task then checks whether the new state is the goal state or whether success is impossible. If it returns true, execution is halted; otherwise control goes back to Select operator.
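In C++, the language of the agent objects described later, this cycle can be sketched as follows. This is a minimal illustration on a toy domain (integer states, a goal of reaching 3), not the actual class Agent code; all names here are assumptions:

#include <iostream>
#include <vector>

// Toy domain: states are integers, the goal is to reach 3 and the
// only operator adds 1. A real problem space carries domain knowledge.
using State = int;
using Operator = int;

State formulateTask() { return 0; }                          // Formulate task
std::vector<Operator> proposeOperators(State) { return {1}; }
Operator selectOperator(const std::vector<Operator>& ops)    // Select operator
{ return ops.front(); }
State applyOperator(State s, Operator op) { return s + op; } // Apply operator
bool terminateTask(State s) { return s == 3; }               // Terminate task

int main() {
    State current = formulateTask();
    while (!terminateTask(current)) {
        Operator op = selectOperator(proposeOperators(current));
        current = applyOperator(current, op);
        std::cout << "state: " << current << '\n';
    }
}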
A problem space must have knowledge to implement the functions in figure 3.3; for example, it must know how to propose and choose operators. When a problem space does not have the knowledge to implement one of these functions, an impasse occurs: no further problem solving can be undertaken in this space until knowledge is generated to resolve the impasse. There are four possible types of impasse (the same four as in Soar): a tie, when several objects are proposed and there is no knowledge to choose among them; a conflict, when the available knowledge about which object to choose is contradictory; a constraint-failure, when incompatible requirements are imposed on the same choice; and a no-change, when nothing new is proposed at all.
When an impasse occurs, a new subspace is set up to search for the missing knowledge, using Formulate task, figure 3.3, to select and initialise the new problem space. The original problem space, where the impasse occurred, is responsible for supplying the knowledge to implement Formulate task. If this knowledge is unavailable, a new impasse occurs and a new subspace is created to search for this knowledge. Impasses can occur in any problem space, forming a goal/subgoal hierarchy with spaces and subspaces in one or multiple agents. The topmost space represents the agent's primary goal.
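This goal/subgoal hierarchy can be pictured as a stack of goal contexts, where detecting an impasse pushes a new context. The sketch below illustrates the idea only; it is not the class Agent implementation, and all names are assumptions:

#include <stack>
#include <string>

// The four impasse types; None marks a context created normally.
enum class Impasse { None, Tie, Conflict, ConstraintFailure, NoChange };

// One entry of the goal/subgoal hierarchy.
struct GoalContext {
    std::string goal;          // what this space is searching for
    std::string problemSpace;  // the space adopted for that goal
    Impasse reason;            // why this context was created
};

// On an impasse, push a subgoal: a new space will search for the
// knowledge the space below is missing.
void pushSubgoal(std::stack<GoalContext>& goals, Impasse why) {
    goals.push({"resolve-impasse", "selection", why});
}

int main() {
    std::stack<GoalContext> goals;
    goals.push({"primary-goal", "blocks-world", Impasse::None}); // topmost space
    pushSubgoal(goals, Impasse::Tie);  // e.g. two operators tied
}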
In the blocks world example, the top problem space is Blocks world and its goal is the global goal. When processing begins, in the initial state S1, the operator Move C to Table is the only one proposed by Select operator (it is the only possible legal move) and it is then chosen. Apply operator then applies this operator to S1 to produce S2. Terminate task then decides that the goal has not been reached and the system goes back to Select operator. Now two operators are proposed for S2: Move B to Table and Move B to C. As the system doesn't have any knowledge to decide between the two, there is an operator tie impasse.
A subgoal is formulated to acquire knowledge to break the impasse. The system creates a new subspace, called Selection, to achieve the subgoal. Selection knows how to do a lookahead search to find which of two or more tied operators works: it evaluates each one to see which will lead to the goal state. The operator Move B to C is tried first (any operator could have been the first); the system proposes and selects the operator Evaluate: Move B to C. As the Selection space has no directly available knowledge about how to apply this operator, an operator no-change impasse arises. A new subgoal is set up to break this new impasse, and the Search problem space is created. The Search space knows how to evaluate operators: it creates a copy of the Blocks world space, applies the relevant operator, in this case Move B to C, and continues the problem solving until the result of applying the operator is known, in this case, until it knows whether the goal state can be reached. After applying Move B to C to S2' to produce the state S3', the operator Move A to B is proposed and applied (it is the only legal move) and the goal state is produced.
As the lookahead search shows that Move B to C leads to the desired state, the Selection space indicates that this is the best operator to choose in the context of the original problem. Select operator then chooses Move B to C over Move B to Table and the impasse is resolved. As the goal state has not yet been reached in the Blocks world problem space, the operator Move A to B is proposed and applied (again the only possible legal move). Terminate task detects that the state S4 is the goal state and execution is halted.
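The essence of what the Selection and Search spaces do is easy to sketch: apply each tied operator to a copy of the state and keep searching until it is known whether the goal can be reached. The code below is an illustrative reconstruction on a toy state, not the thesis code; all names are assumptions:

#include <iostream>
#include <vector>

// Toy stand-in for a copied Blocks world state: an integer with a
// goal test and a move generator. Moves add 1 or 5; the goal is 3,
// so only chains of "add 1" can reach it.
struct State {
    int value = 0;
    bool isGoal() const { return value == 3; }
    std::vector<int> legalMoves() const { return {1, 5}; }
    State apply(int move) const { return State{value + move}; }
};

// Lookahead: apply the operator to a copy of the state and continue
// the problem solving until it is known whether the goal is reachable.
bool leadsToGoal(const State& s, int move, int depth) {
    State next = s.apply(move);
    if (next.isGoal()) return true;
    if (depth == 0) return false;
    for (int m : next.legalMoves())
        if (leadsToGoal(next, m, depth - 1)) return true;
    return false;
}

int main() {
    State s;  // the two proposed operators are tied
    for (int m : s.legalMoves())
        std::cout << "operator +" << m << ": "
                  << (leadsToGoal(s, m, 3) ? "leads" : "does not lead")
                  << " to the goal\n";
}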
Philosophers like Descartes believed in a central place or focal point in the brain where all the senses would come together. For some, that would be the point of interaction between mind and brain, the point where the ghost touches the machine. This concept of a place where conscious experience takes place, the Cartesian Theatre, would suggest a centralized model for intelligence, one that is in tune with our common sense.
But this particular region in the brain, the Cartesian Theatre, has not been found yet. Indeed, studies on the visual cortex have not found, so far, one particular region in the brain where all the information needed for visual awareness appears to come together [40].
A distributed model of intelligence would not only solve some important "implementation" problems, such as speed, but would also fit better with results from recent studies of the mind [41][42]. In Consciousness Explained, Dennett [43] proposes such a model, the Multiple Drafts model. It asserts that all varieties of perception - indeed all varieties of thought or mental activity - are accomplished in the brain by parallel multitrack processes of interpretation and elaboration of sensory inputs. Information entering the brain is continually being edited.
When bees need to relocate a colony, they have a search problem to solve, and they use a very interesting distributed mechanism to solve it [37]. They form a swarm and pour themselves out into the open. During these events the queen bee is not in command; she merely follows the flow of events. Some scout bees are sent ahead of the swarm to check possible hive locations, and they report back by dancing near the swarm's surface. The more enthusiastically a scout dances, the more other bees will be compelled to visit the reported site. The bees will inspect those sites whose scouts' dances they liked most.
When each of these bees returns from its inspection, it supports the site by joining the scout that is dancing for it. That induces more followers to check out the leading sites and, when they return, to join in the performance of their choice. Few bees, apart from the scouts, visit more than one site. Gradually one large finale comes to dominate the dance-off: the biggest crowd wins.
Kelly [37] writes: "It's an election hall of idiots, for idiots, and by idiots, and it works marvellously". The swarm, like an ant colony, behaves more like an individual than a group, yet the bees are probably unaware of the swarm. They have a set of simple individual behaviours that add up to very complex group behaviour. The whole is far smarter than its parts.
To realise a behaviour, there must be some sort of mechanism in the agent, implemented using various components and a control program. The observed behaviours are due to the interaction between the operation of this mechanism and the environment the agent is experiencing. A behaviour system is then defined as a collection of components responsible for realising a particular behaviour.
Using this model, small robots can be built that show quite interesting behaviours while using few hardware and software resources. Among these robots is a group of small six-legged ones called insect-like robots; Genghis, built by Brooks, is one of them.
Walking in Genghis emerges out of the collective behaviour of its legs. The two motors in each leg lift, or not, depending on what the other legs around them are doing. If the motors activate in the correct order, walking happens. Walking is not governed by any particular processor; there is no smart central controller. Brooks called this "bottom-up control" [38][39]. If you snip off one leg, it shifts gait with the other five without losing a stride: an immediate self-reorganization.
Genghis's legs have a few simple behaviours, and each leg independently knows what to do under various circumstances. For instance, two basic behaviours can be thought of as "If I am a leg and I'm up, put myself down" and "If I am a leg and another leg just went forward, I should go back a little". These processes exist independently, run at all times, and fire whenever their sensory preconditions are true. To create walking, then, all that is needed is a sequence of leg lifts. As soon as a leg is raised it automatically swings itself forward, and then down. The act of swinging forward triggers all the other legs to move back a little; since those legs are touching the floor, Genghis moves forward.
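As a hedged illustration of how such behaviours could be coded (hypothetical names and a much simplified leg model, not Brooks's actual implementation):

#include <array>

// Much simplified model of one leg of an insect-like robot.
struct Leg {
    bool raised = false;
    bool justSwungForward = false;
    int position = 0;   // along the body axis
};

using Legs = std::array<Leg, 6>;

// "If I am a leg and I'm up, put myself down."
void rulePutDown(Leg& leg) {
    if (leg.raised) leg.raised = false;
}

// "If I am a leg and another leg just went forward, I should go
// back a little." With the legs on the ground, this moves the body.
void ruleGoBack(Legs& legs, int self) {
    for (int other = 0; other < 6; ++other)
        if (other != self && legs[other].justSwungForward)
            legs[self].position -= 1;
}

int main() {
    Legs legs{};
    legs[2].justSwungForward = true;   // leg 2 just swung forward
    for (int i = 0; i < 6; ++i) {      // every rule runs at all times
        rulePutDown(legs[i]);
        ruleGoBack(legs, i);
    }
}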
Once Genghis can walk over a flat surface, other behaviours can be added to improve its walk, such as climbing over a small obstacle. These new behaviours are added on top of the existing ones, organized following the subsumption architecture [46], shown in figure 3.6. The subsumption architecture divides the control architecture into task-achieving modules or behaviours. Instead of dividing the problem into sequential functional modules, it slices the problem into layers of behaviours (fig. 3.6), each layer forming a competence level of the control system [47]. The main idea is that layers corresponding to different levels of competence can be built and added on top of each other, each new layer adding a new level of overall competence to the system.
The behaviours in a lower layer are unaware of any behaviour belonging to a higher layer. When a behaviour in a higher layer wishes to take control, it can subsume the role of the lower levels, inhibiting them (inhibition line in figure 3.6). New behaviours overpower others, and thus get expressed, only in those situations where their action will improve performance or initiate a newly added response; otherwise the old behaviours do business as usual, which means competing to get expressed. The system is easily extensible, as new behaviours simply add functionality to an already working system.
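A minimal sketch of that arbitration (an illustration of the idea, not the implementation of [46]; all names are assumptions): each layer either produces a command or stays silent, and a higher layer, when it speaks, inhibits the output of the layer below:

#include <iostream>
#include <optional>
#include <string>

// Layer 0: basic competence, always has something to do.
std::optional<std::string> walkLayer() { return "walk"; }

// Layer 1: acts only when its action improves performance.
std::optional<std::string> climbLayer(bool obstacleAhead) {
    if (obstacleAhead) return "climb over obstacle";
    return std::nullopt;   // stay silent: business as usual below
}

// The higher layer subsumes (inhibits) the lower one when it speaks.
std::string arbitrate(bool obstacleAhead) {
    if (auto c = climbLayer(obstacleAhead)) return *c;  // inhibition line
    return *walkLayer();
}

int main() {
    std::cout << arbitrate(false) << '\n';  // walk
    std::cout << arbitrate(true)  << '\n';  // climb over obstacle
}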
Genghis is an example of how an artificial behaviour system can work; some of its ideas will be explored in the implementation of the distributed behaviour of agent objects.
Figure 3.7 shows the architecture of a C++ object derived from the class Agent; in this case the object is controlling a robot in a blocks world. The Goals list contains the current hierarchy of problem spaces, organized as a goal context stack. Each goal context contains a goal, the problem space being used to search for that goal, the state slot of the problem space, and the operator currently being applied. The Preference list contains values proposed by the rules together with their respective preferences. Internal variables are any kind of variables or objects held by a particular derived agent. The in and out triangles represent accesses to objects or variables outside the agent object.
Task and search control knowledge is encoded as production rules in permanent memory. These rules test the state of the Goals list, the internal variables and the outside world; when fired, they can act on the internal variables or on the outside world, or produce preferences for changing elements of the Goals list. The production conditions are C++ if statements and can contain any kind of statement allowed by C++, including function calls. Matching routines are not supplied by the class Agent, since there are no facilities to match templates against working memory elements as in Soar or OPS5 [43]. In the rules' condition section, objects derived from the class Agent have to perform the comparisons themselves or rely on the object being tested to supply some form of matching method. For instance, list objects have methods to match templates against their contents.
Objects on the Goals list are represented as slots. A slot is a list where the first element is the slot's identifier and the others are slot values. All slots have, at least, an identifier and they can have any number of values. Each element in the Goals list is a list representing one goal and a problem space. The last list represents the top goal:
( ( (NAME GOAL_11)
    (PROBLEM_SPACE ( (NAME BLOCKS) ... ) )
    (STATE ( (NAME FIRST) (TABLE OK) ... ) )
    (OPERATOR ( (NAME MOVETO) (POSX 5) (POS_Y 7) ... ) ) )
  ( (NAME GOAL_10) ... )
  ... )
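In C++ such a structure can be pictured as a recursive type: an identifier followed by values, where each value is either an atom or a nested slot. This is an illustrative reconstruction, not the LIST class actually used by the agent objects:

#include <string>
#include <vector>

// A slot: an identifier followed by any number of values, each
// value being either an atom or a nested slot.
struct Slot {
    std::string id;
    std::vector<std::string> atoms;  // atomic values
    std::vector<Slot> slots;         // nested slot values
};

// (OPERATOR ( (NAME MOVETO) (POSX 5) (POS_Y 7) ) )
const Slot op{"OPERATOR", {}, {
    {"NAME",  {"MOVETO"}, {}},
    {"POSX",  {"5"},      {}},
    {"POS_Y", {"7"},      {}},
}};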
When there is an impasse, a new goal is automatically created in the Goals list with data about the impasse. The Goals list should not be directly modified by the rules' actions; rules should instead propose values or add preferences to the Preference list. The result of these preference judgements then determines the changes to the Goals list. However, enforcing this prohibition in C++ would be very difficult and costly, so users can override this rule if they wish.
The class Agent holds the basic inference routines; the derived classes add the knowledge, in the form of rules, specific to a particular application. They do that through the virtual method expert(). Derived classes redefine this method and define their rules in it. The class Agent then uses the method to apply the rules; because it is a virtual method, the class needs no prior knowledge about the rules themselves. The following is an example of a simple rule:
RULE( "Cont*propose*operator*createColumns", isGoal(CREATE_COLUMNS, G1) && isState(G1, CREATE_COLUMNS_1) ) { SET_SLOT( G1, OPERATOR, new_LIST(new_LIST(NAME, CREATE), new_LIST(POS, 3, 2)), ACCEPTABLE); }This rule just tests if there is a goal called
CREATE_COLUMNS
and if this goal
has a state called CREATE_COLUMNS_1
. If yes it proposes a new value for the
operator slot of the goal, this operator is named CREATE
and has a position
slot named POS
with two values 3
and 2
. The
preference for this value is ACCEPTABLE
. The rule is named
Cont*propose*operator*createColumns
, it will identify the rule if the debug
option is in use. The rule's names follow an optional code showed in table
3.1 (in this table Rules' names code PSCM stands for Problem Space
Computational Model).
[context] | [PSCM function] | [PSCM type] | [name(s)]
---|---|---|---
The object that owns the rule. | proposal, comparison, selection, refinement, evaluation or testing | goal, problem-space, state or operator | The name of the PSCM object the production is about, or some other descriptive term for the object being augmented.
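Putting the pieces together, a derived class looks like the following sketch. The RULE macro, the helpers and the rule itself are taken from the example above; the class name and its surroundings are illustrative assumptions:

// Sketch of a derived agent: only expert() must be redefined.
class BlocksRobot : public Agent {
    virtual void expert() {
        RULE( "Cont*propose*operator*createColumns",
              isGoal(CREATE_COLUMNS, G1) && isState(G1, CREATE_COLUMNS_1) )
        {
            SET_SLOT( G1, OPERATOR,
                      new_LIST(new_LIST(NAME, CREATE), new_LIST(POS, 3, 2)),
                      ACCEPTABLE );
        }
        // ... one RULE block for each piece of task and search
        // control knowledge this agent should have ...
    }
};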
Agent objects and Soar differ from other common cognitive architectures and AI shells in that they do not make arbitrary decisions about what to do next. There are no built-in conflict resolution mechanisms or other schemes to resolve deadlocks when the knowledge is insufficient or conflicting. Instead, decisions are made through the application of task and search control knowledge. The system's behaviour is controlled entirely by the knowledge stored in an agent's rules, supplied by the system's programmers, not by built-in assumptions. When knowledge is insufficient, the system searches the problem space to generate more knowledge about how to proceed, so that any decision taken is not arbitrary but based on the characteristics of the task being solved.
The class Agent could use many different schemes of distributed reasoning, but a model based on behaviours has been implemented. Agent objects have a "personality" and an aim in life. Their personality is determined by the set of behaviours they can perform, similar to the insect-like autonomous robots discussed in section 3.5.3.
Changes in behaviour can be dictated by an object's perception of changes in its environment; this is similar to the mechanisms present in the interaction of individuals, such as bees. Or they can be directly commanded by another agent, similar to the closer interactions (inside individuals) present in organs or cells, where substances such as hormones are intentionally produced by one cell to change the way a group of other cells behaves.
Environmentally triggered behaviours are implemented using the rules. They test an external input point and from it determine which responses are appropriate. Using the insect-like robots as an example, this is the way the walking behaviour is implemented: legs test external inputs to detect whether they are touching the ground or whether the other legs around them are moving. Now suppose a camera able to recognise images of rubbish is added to this insect-like robot. The idea is to have the robot roam around until it stops on a piece of rubbish, at which point the two front legs should grab the rubbish and put it on top of the robot. The front legs cannot recognise rubbish, since that is the job of the computer attached to the camera. The way to change their behaviour is for the camera to act directly on them and completely change their set of behaviours.
The same result could be achieved by the mechanism described above but, as a whole new set of behaviours will become active, there is a more efficient way of doing it. Rules can be arranged in groups, and these groups can be activated and deactivated. Rules in inactive groups are not tested, which improves performance. The virtual expert() function for an agent with two groups of rules would be:
void expert() {
    GROUP (WALKING)
        RULE( ... )
        RULE( ... )
        // all rules concerning the walking behaviour
    ENDGROUP;

    GROUP (GRABBING)
        RULE( ... )
        RULE( ... )
        // all rules concerning the grabbing rubbish behaviour
    ENDGROUP;
}
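Handing control from one group to the other is then a matter of one rule. The sketch below assumes hypothetical activateGroup/deactivateGroup calls; these names are assumptions, not part of the documented interface:

// Hypothetical rule: when the camera reports rubbish, deactivate
// the walking rules and activate the grabbing rules.
RULE( "Camera*detect*rubbish", cameraSeesRubbish() )
{
    deactivateGroup(WALKING);
    activateGroup(GRABBING);
}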
The advantage of agent objects is that they are flexible enough to "slide" along the line connecting the two other groups. Because this is a search intensive domain, it is impossible to have rules that account for each step of the design process. Agent objects allow this intensive search to take place, thus sliding closer to the Search Group solutions; but whenever knowledge is available, they allow embedded knowledge to reduce the search, thus sliding closer to the Expert Group solutions.
Another advantage is that the quality of a solution can be tuned to the amount and kind of resources available. Quality improves whenever more searching can be afforded or more knowledge is available about an application. A lack of either of the two can be compensated for by more of the other.