Artificial Intelligence Questions
Intelligent Agent
What is an Agent?
Agents and Environments
Rational Agents
PEAS Examples:
Agent Types
Table Driven Agents
Reflex agents
Goal-based agents
Utility-based agents
Learning agents
Environment Types
What is an Agent?
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Agents and Environments
An agent perceives its environment through sensors.
The complete set of inputs at a given time is called a percept.
The current percept, or a sequence of percepts, can influence the actions of an agent.
The agent can change the environment through actuators or effectors.
An operation involving an effector is called an action.
Actions can be grouped into action sequences.
Agents
Human agent:
• eyes, ears, and other organs for sensors;
• hands, legs, mouth, and other body parts for actuators
Robotic agent:
• cameras and infrared range finders for sensors;
• various motors for actuators
• Example: the AIBO entertainment robot from SONY
Software agent:
• keystrokes, file contents, received network packets as sensors
• displays on the screen, files, sent network packets as actuators
Agent Example
A simple example of an agent in a physical environment is a thermostat for a heater.
The thermostat receives input from a sensor, which is embedded in the environment, to detect the temperature.
Two states are possible:
• temperature too cold
• temperature OK
Each state has an associated action:
• too cold: turn the heating on
• temperature OK: turn the heating off
The first action has the effect of raising the room temperature, but this is not guaranteed. If cold air continuously comes into the room, the added heat may not have the desired effect of raising the room temperature.
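As an illustration (not part of the original example), the thermostat's two condition-action rules can be written directly in Python; the 18-degree threshold and the action strings are assumptions:

```python
# Minimal sketch of the thermostat as a two-state agent.
# The 18-degree threshold and action strings are illustrative
# assumptions, not part of the original example.

TOO_COLD_THRESHOLD = 18.0  # degrees Celsius (assumed)

def thermostat_agent(temperature: float) -> str:
    """Map the sensed temperature to one of the two actions."""
    if temperature < TOO_COLD_THRESHOLD:  # state: temperature too cold
        return "turn heating on"
    else:                                 # state: temperature OK
        return "turn heating off"

# Example percepts and the resulting actions:
for temp in (15.0, 21.5):
    print(temp, "->", thermostat_agent(temp))
```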
Agent function and program
Agent function
An agent is completely specified by the agent function
f : P* → A
which maps percept sequences to actions.
Agent program
The agent program runs on the physical architecture to produce f.
AGENT = ARCHITECTURE + PROGRAM
Vacuum-cleaner world
Percepts: location and state of the environment, e.g., [A, Dirty], [B, Clean]
Actions: Left, Right, Suck, NoOp
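A small Python sketch of this world, assuming the two-square layout above; for brevity the table covers only single percepts, not full percept sequences:

```python
# Sketch of the vacuum-cleaner agent function for single percepts.
# The full agent function maps entire percept *sequences* to actions;
# this table covers only the latest percept, for illustration.

vacuum_table = {
    ("A", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
}

def vacuum_agent(percept):
    """percept is a (location, status) pair such as ('A', 'Dirty')."""
    return vacuum_table[percept]

print(vacuum_agent(("A", "Dirty")))  # -> Suck
print(vacuum_agent(("B", "Clean")))  # -> Left
```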
Agent Performance
How do you define success? We need a performance measure.
How successfully does the agent perform? E.g., 90% or 30%?
E.g., reward the agent with one point for each clean square at each time step (and possibly penalize for costs and noise).
Success can be measured in various ways:
It can be measured in terms of the speed or efficiency of the agent.
It can be measured by the accuracy or the quality of the solutions achieved by the agent.
It can also be measured by power usage, money, etc.
Rational agents
An intelligent agent must sense, must act, and must be autonomous (to some extent). It must also be rational.
AI is about building rational agents.
A rational agent always does the right thing, based on what it can perceive and the actions it can perform.
Rationality depends on four things:
1. The performance measure of success
2. The agent's prior knowledge of the environment
3. The actions the agent can perform
4. The agent's percept sequence to date
Rational agents (cont.)
The performance measure is a criterion for the success of an agent's behavior.
E.g., the performance measure of a vacuum-cleaner agent could be the amount of dirt cleaned up, the amount of time taken, the amount of electricity consumed, the amount of noise generated, etc.
As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave (a clean floor vs. the amount of dirt cleaned).
Learning
Does a rational agent depend only on the current percept?
No, the past percept sequence should also be used; this is called learning.
After experiencing an episode, the agent should adjust its behavior to perform better at the same job next time.
Autonomy
A rational agent should be autonomous: it should learn what it can to compensate for partial or incorrect prior knowledge.
If an agent just relies on the prior knowledge of its designer rather than its own percepts, then the agent lacks autonomy.
E.g., a clock:
No input (percepts)
Runs only on its own algorithm (prior knowledge)
No learning, no experience, etc.
Omniscience
Rationality is not the same as omniscience.
A rational agent is not omniscient. It does not know the actual outcome of its actions, and it may not know certain aspects of its environment.
Rationality requires the agent to learn as much as possible.
The rational agent has to select the best action to the best of its knowledge, depending on its percept sequence, its background knowledge and its feasible actions.
A rational agent chooses whichever action maximizes the expected value of the performance measure given the percept sequence to date.
Exercise
What are the salient features of an agent?
Task Environment
PEAS
Performance measure
Environment
Actuators
Sensors
PEAS for an automated taxi driver
Example: Agent = taxi driver
Performance measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering wheel, accelerator, brake, signal, horn
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
PEAS for a Medical Diagnosis System
Example: Agent = Medical diagnosis system
Performance measure: Healthy patient, minimize costs and lawsuits
Environment: Patient, hospital, staff
Actuators: Screen display (questions, tests, diagnoses, treatments, referrals)
Sensors: Keyboard (entry of symptoms, findings, patient's answers)
PEAS for a Part-Picking Robot
Example: Agent = Part-picking robot
Performance measure: Percentage of parts in correct bins
Environment: Conveyor belt with parts, bins
Actuators: Jointed arm and hand
Sensors: Camera, joint angle sensors
PEAS for a Robot Soccer Player
Performance measure: ?
Environment: ?
Actuators: ?
Sensors: ?
PEAS for an Interactive Mathematics Tutor Agent
Performance measure: ?
Environment: ?
Actuators: ?
Sensors: ?
PEAS for an Internet Book Shopping Agent
Performance measure: ?
Environment: ?
Actuators: ?
Sensors: ?
Exercise
Write a PEAS description for an ATM system.
Interacting Agents
Collision Avoidance Agent (CAA)
• Goals: Avoid running into obstacles
• Percepts?
• Sensors?
• Effectors?
• Actions?
• Environment: Freeway
Lane Keeping Agent (LKA)
• Goals: Stay in current lane
• Percepts?
• Sensors?
• Effectors?
• Actions?
• Environment: Freeway
Interacting Agents (answers)
Collision Avoidance Agent (CAA)
• Goals: Avoid running into obstacles
• Percepts: Obstacle distance, velocity, trajectory
• Sensors: Vision, proximity sensing
• Effectors: Steering wheel, accelerator, brakes, horn, headlights
• Actions: Steer, speed up, brake, blow horn, signal (headlights)
• Environment: Freeway
Lane Keeping Agent (LKA)
• Goals: Stay in current lane
• Percepts: Lane center, lane boundaries
• Sensors: Vision
• Effectors: Steering Wheel, Accelerator, Brakes
• Actions: Steer, speed up, brake
• Environment: Freeway
Conflict Resolution by Action Selection Agents
Override: CAA overrides LKA
Arbitrate: if Obstacle is Close then CAA else LKA
Compromise: Choose action that satisfies both agents
Any combination of the above
Challenges: Doing the right thing
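A hedged Python sketch of action selection between the two agents: the agent stubs, the action strings, and the obstacle_is_close flag are illustrative assumptions, not from the slides:

```python
# Illustrative sketch of arbitrate-style conflict resolution
# between the CAA and the LKA. All details are assumptions.

def caa_action(percept):
    return "brake"                     # stub: CAA proposes avoiding the obstacle

def lka_action(percept):
    return "steer to lane center"      # stub: LKA proposes lane keeping

def select_action(percept, obstacle_is_close: bool) -> str:
    # Arbitrate: if the obstacle is close, the CAA wins; otherwise the LKA.
    # (Override would mean the CAA always wins whenever it proposes anything.)
    if obstacle_is_close:
        return caa_action(percept)
    return lka_action(percept)

print(select_action(percept=None, obstacle_is_close=True))   # -> brake
print(select_action(percept=None, obstacle_is_close=False))  # -> steer to lane center
```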
Agent programs
Input for the agent program: only the current percept.
Input for the agent function: the entire percept sequence (the agent must remember all of it).
One way to implement the agent program is as a lookup table over the agent function, as sketched below.
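A minimal sketch of that lookup-table idea, in the spirit of the classic TABLE-DRIVEN-AGENT pseudocode; the two-entry example table is an assumption for illustration:

```python
# Sketch of a table-driven agent: the lookup key is the entire
# percept sequence seen so far, which is why the table explodes
# in size for any realistic environment.

def make_table_driven_agent(table):
    percepts = []                      # the remembered percept sequence

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    return program

# Illustrative two-step table for the vacuum world (an assumption):
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))   # -> Suck
print(agent(("A", "Clean")))   # -> Right
```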
Agent types
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
Learning agents
Simple reflex agents
The agent works by finding a rule whose condition matches the current situation, as defined by the percept, and then doing the action associated with that rule.
The agent has no memory.
A simple reflex agent should have condition-action pairs defining all the condition-action rules necessary to interact in an environment.
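A sketch of that rule-matching loop, assuming rules are (condition, action) pairs whose conditions are predicates over the current percept only; the example rules reuse the vacuum world and are assumptions:

```python
# Simple reflex agent: scan the condition-action rules and perform
# the action of the first rule whose condition matches the percept.
# The example rules are illustrative assumptions.

rules = [
    (lambda p: p[1] == "Dirty", "Suck"),
    (lambda p: p[0] == "A", "Right"),
    (lambda p: p[0] == "B", "Left"),
]

def simple_reflex_agent(percept):
    for condition, action in rules:
        if condition(percept):      # first matching rule wins
            return action
    return "NoOp"                   # no rule matched

print(simple_reflex_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_agent(("B", "Clean")))  # -> Left
```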
Problems with the table-driven approach
The table is too big to generate and to store (e.g., for the taxi driver)
It takes a long time to build the table
Not adaptive to changes in the environment; the entire table must be updated if changes occur
Model-based reflex agents
The agent should keep track of the part of the world it can't see now. Thus the agent should maintain some sort of internal state that depends on the percept history.
Updating the internal state information requires two kinds of knowledge to be encoded in the agent program:
Information about how the world evolves independently of the agent
Information about how the agent's own actions affect the world
Thus a model-based (state-based) agent works as follows:
1. Information comes from the sensors (percepts).
2. The percept is integrated into the state.
3. The condition-action rules are evaluated against the state.
4. Based on this, the agent chooses an action.
5. The action is executed.
6. The state is updated with the action.
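The numbered loop above can be sketched in Python as follows; update_state stands in for the two kinds of world knowledge, and every concrete detail is an illustrative assumption:

```python
# Sketch of a model-based reflex agent. The state update uses both
# the new percept and the last action, standing in for the world
# model described above. Details are illustrative assumptions.

def make_model_based_agent(rules, update_state, initial_state):
    state = {"world": initial_state, "last_action": None}

    def program(percept):
        # Steps 1-2: integrate the percept (and last action) into the state
        state["world"] = update_state(state["world"],
                                      state["last_action"], percept)
        # Steps 3-4: evaluate the rules against the state, choose an action
        for condition, action in rules:
            if condition(state["world"]):
                state["last_action"] = action   # steps 5-6: execute, record
                return action
        state["last_action"] = "NoOp"
        return "NoOp"

    return program
```

The closure keeps the internal state between calls; that persistent state is exactly what distinguishes this agent from a simple reflex agent.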
Goal-based agents
The current state of the world is not always enough to decide what to do.
The agent needs goals to decide which situations are good.
Example: a car at a junction needs knowledge of its destination.
World model (as in the model-based agent) + Goals
A goal is a description of a desirable situation.
The goal-based agent acts by reasoning about which actions achieve the goal.
Goal-based agents work as follows:
1. Information comes from the sensors (percepts).
2. The percepts change the agent's current state of the world.
3. Based on the state of the world, knowledge (memory), and goals/intentions, the agent chooses actions and performs them through the effectors.
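A hedged sketch of goal-based selection: simulate each available action with the world model and pick one whose predicted outcome satisfies the goal. The predict function and the tiny number-line example are assumptions:

```python
# Goal-based agent sketch: choose an action by reasoning about
# which action's predicted outcome achieves the goal.
# predict() and the action set are illustrative assumptions.

def goal_based_agent(state, actions, predict, goal_test):
    for action in actions:
        if goal_test(predict(state, action)):  # would this achieve the goal?
            return action
    return "NoOp"  # no single action reaches the goal (would need search)

# Tiny worked example: reach position 3 on a number line (an assumption).
actions = ["left", "right"]
predict = lambda s, a: s + (1 if a == "right" else -1)
goal_test = lambda s: s == 3
print(goal_based_agent(2, actions, predict, goal_test))  # -> right
```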
Utility-based agents
What if there are many paths to the goal?
Utility measures which states are preferable to other states.
It maps a state to a real number (its utility, or "happiness").
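When several actions lead toward the goal, the utility function orders the outcomes; a small sketch reusing the assumed predict function above, with the utility function itself also an illustrative assumption:

```python
# Utility-based selection sketch: map each predicted outcome state
# to a real number and take the action with the highest utility.
# utility(), predict() and the actions are illustrative assumptions.

def utility_based_agent(state, actions, predict, utility):
    return max(actions, key=lambda a: utility(predict(state, a)))

# Example: prefer states closer to position 3 (an assumption).
actions = ["left", "right"]
predict = lambda s, a: s + (1 if a == "right" else -1)
utility = lambda s: -abs(s - 3)   # higher is better
print(utility_based_agent(0, actions, predict, utility))  # -> right
```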
Learning Agent
Learning allows an agent to operate in initially unknown environments.
The learning element modifies the performance element:
The learning element is responsible for making improvements.
The performance element is responsible for selecting external actions (it is what we had defined as the entire agent before).
The learning element uses feedback from the critic on how the agent is doing, and determines how the performance element should be modified to do better in the future.
The problem generator is responsible for suggesting actions that will lead to new and informative experiences.
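A structural sketch of the four components in Python; every method body is a placeholder assumption, and only the flow of the critic's feedback into the learning element matches the description above:

```python
# Skeleton of a learning agent's four components. All concrete
# behavior is stubbed; only the information flow matches the text.

class LearningAgent:
    def __init__(self):
        self.rules = []                  # performance element's knowledge

    def performance_element(self, percept):
        """Select an external action (the 'entire agent' of before)."""
        return self.rules[0][1] if self.rules else "NoOp"

    def critic(self, percept):
        """Judge how well the agent is doing (stubbed feedback)."""
        return 0.0

    def learning_element(self, feedback):
        """Use the critic's feedback to improve the performance element."""
        pass  # e.g., adjust or add condition-action rules

    def problem_generator(self):
        """Suggest exploratory actions that yield informative experience."""
        return "try something new"

    def step(self, percept):
        self.learning_element(self.critic(percept))
        return self.performance_element(percept)
```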
Environment types
Fully observable vs. partially observable
Deterministic vs. stochastic
Episodic vs. sequential
Static vs. Dynamic
Discrete vs. Continuous
Single agent vs. multi-agent
Fully observable vs. partially observable
A task environment is fully observable if the sensors detect all aspects that are relevant to the choice of action, and the agent can obtain complete, timely and accurate information about the state of the environment.
An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.
Example: a local dirt sensor of the vacuum cleaner cannot tell whether other squares are clean or not.
Deterministic vs. stochastic
The environment is deterministic if the next state of the environment is completely determined by the current state and the action executed by the agent.
If the environment has an element of uncertainty, then the environment is stochastic.
Question: is an ATM system deterministic or stochastic?
Episodic vs. sequential
In an episodic environment, the agent's experience is divided into a number of discrete episodes, with no link between the agent's performance in different episodes.
This environment is simpler to design, since only the current episode needs to be considered.
Example: the part-picking robot.
In sequential environments, the current decision could affect all future decisions.
Examples: chess and taxi driving.
Exercise: classify the environment of classroom teaching.
Static vs. Dynamic
A static environment does not change from one state to the next while the agent is considering its course of action. The only changes to the environment are those caused by the agent itself.
A static environment does not change while the agent is thinking: the passage of time as the agent deliberates is irrelevant, and the agent doesn't need to observe the world during deliberation.
Example: a crossword puzzle is static.
A dynamic environment changes over time, independent of the actions of the agent; thus if an agent does not respond in a timely manner, this counts as a choice to do nothing.
Example: taxi driving is dynamic.
The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does.
Example: chess, when played with a clock, is semi-dynamic.
Discrete vs. Continuous
A discrete environment has a limited number of distinct, clearly defined states, percepts and actions; in other words, one where you have finitely many action choices and finitely many things you can sense.
Examples:
Chess has a finite number of discrete states, and has a discrete set of percepts and actions.
Taxi driving has continuous states and actions.
Single agent vs. multi-agent
An agent operating by itself in an environment is a single agent.
Examples: a crossword puzzle is single-agent, while chess is two-agent.
Question: does an agent A have to treat an object B as an agent, or can B be treated as a stochastically behaving object?
The test is whether B's behavior is best described as maximizing a performance measure whose value depends on agent A's behavior.
Examples: chess is a competitive multi-agent environment, while taxi driving is a partially cooperative multi-agent environment.
Summary
In conclusion, AI is a truly fascinating field. It deals with exciting but hard problems. A goal of AI is to build intelligent agents that act so as to optimize performance.
An agent perceives and acts in an environment, has an architecture, and is implemented by an agent program.
An ideal (rational) agent always chooses the action which maximizes its expected performance, given its percept sequence so far.
An autonomous agent uses its own experience along with the built-in knowledge of the environment provided by the designer.
Summary (cont.)
Environment types: environments can be fully or partially observable, deterministic or stochastic, episodic or sequential, static or dynamic, discrete or continuous, and single-agent or multi-agent.
PEAS: Performance measure, Environment, Actuators, Sensors.
Agent types:
Table-driven agents use a percept-action table in memory to find the next action.
Reflex agents respond immediately to percepts.
Goal-based agents act in order to achieve their goals.
Utility-based agents maximize their own utility function.
Learning agents improve their performance over time.