Attempting to instruct artificial entities, be they robots or agents in a virtual
environment, requires representing information about the actions, environment,
and agents, and efficiently processing that data to interpret and execute
the instructions. In this paper we describe the integration of two subsystems that
form a framework for instructing agents to perform complex behaviors in complex
environments while adhering to dictated constraints. Instructions may indicate
what behaviors an agent should perform, but they can also impose constraints
on how those behaviors should be performed. A constraint might completely prohibit an
action (e.g. Don't run). Other constraints may impact the timing or priority of
actions (e.g. Do your homework before playing video games). There could also be
spatial constraints (e.g. Stay out of Room 12). Finally, constraints may include
a variety of factors that form an overall context (e.g. Do not go into a classroom
when a class is in session).
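To make these categories concrete, one possible representation is a tagged record per constraint. This sketch is purely illustrative and not the paper's actual data model; the names `ConstraintKind`, `Constraint`, and the example instances are assumptions introduced here:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class ConstraintKind(Enum):
    PROHIBITION = auto()  # completely prohibits an action, e.g. "Don't run"
    ORDERING = auto()     # timing or priority, e.g. "homework before video games"
    SPATIAL = auto()      # restricts locations, e.g. "Stay out of Room 12"
    CONTEXTUAL = auto()   # applies only in a given context, e.g. the classroom rule

@dataclass
class Constraint:
    kind: ConstraintKind
    action: str                      # the constrained action or location
    condition: Optional[str] = None  # context under which the constraint applies

# Example instances mirroring the constraints described above
no_running = Constraint(ConstraintKind.PROHIBITION, "run")
classroom_rule = Constraint(ConstraintKind.CONTEXTUAL, "enter classroom",
                            condition="class in session")
```

The point of the sketch is only that unconditional prohibitions and context-dependent constraints can share one structure, with the optional condition distinguishing them.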
Instructed behaviors can also come in a variety of forms. There may be simple
imperatives containing a single verb (e.g. Pick up the pencil) or complex
multi-step behaviors involving a sequence of actions or actions that should be
performed in parallel (e.g. While programming, drink coffee). Other directives
may require the agents to plan (e.g. Search the building for weapons). Finally,
instructions may include standing orders that dictate how the agents should
handle scenarios they may encounter (e.g. If you see a weapon, pick it up and
move it to Room 13). All of these behaviors can take place in a rich environment
with many objects of many different types. Furthermore, these environments
are dynamic. Other characters are operating within the environment, moving
objects and changing their states and properties. Some instructions, such as Do
not go into a classroom when a class is in session, certainly address this dynamic