An intelligent agent perceives its environment through sensors and acts rationally on that environment through effectors.
Just as a human has ears, eyes, and other organs as sensors, and hands, legs, and other body parts as effectors, a robot agent has cameras and microphones as sensors and motors as effectors.
Agents interact with the environment through sensors and effectors.
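This sense-act cycle can be sketched as a simple loop. The `Environment` and `Agent` classes below are illustrative names, not a standard API; the one-square "dirty/clean" world is a made-up example.

```python
# Minimal sketch of the agent/environment interaction loop:
# the agent perceives via sensors and acts via effectors.
class Environment:
    def __init__(self):
        self.state = "dirty"

    def percept(self):          # what the sensors report
        return self.state

    def apply(self, action):    # the effect of the effectors
        if action == "clean":
            self.state = "clean"

class Agent:
    def act(self, percept):
        return "clean" if percept == "dirty" else "noop"

env, agent = Environment(), Agent()
for _ in range(2):              # two perceive-act cycles
    env.apply(agent.act(env.percept()))

print(env.state)  # -> clean
```

The loop makes the division of labor explicit: the environment owns the state, and the agent only ever sees it through percepts.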
Properties of an agent
- Interacts with other agents and with the environment
- Reactive to the environment
- Pro-active (goal-directed)
A rational agent maximizes its performance measure, given the percept sequence and its built-in and acquired knowledge. The performance measure expresses how well the task has been achieved.
An omniscient agent knows the actual outcome of its actions and can act accordingly.
A system is autonomous to the extent that its behavior is determined by its own knowledge. Autonomy is related to adaptability: a truly autonomous intelligent agent should be able to operate in a wide variety of environments.
Agent = Architecture + Program
- Simple reflex agents
- Agents that keep track of the world
- Goal-based agents
- Utility-based agents
Simple reflex agents
A simple reflex agent works by finding a rule whose condition matches the current situation (as defined by the percept) and then performing the action associated with that rule.
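A rule table makes this concrete. The two-square vacuum world below (locations A and B, each dirty or clean) is a hypothetical example; note the action depends only on the current percept.

```python
# Simple reflex agent: a table of condition -> action rules.
# The vacuum-world locations and actions here are illustrative.
RULES = {
    ("A", "dirty"): "suck",
    ("A", "clean"): "right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "left",
}

def simple_reflex_agent(percept):
    # Match the current percept against the rule table;
    # no memory of past percepts is kept.
    location, status = percept
    return RULES[(location, status)]

print(simple_reflex_agent(("A", "dirty")))  # -> suck
```

Because the rules key only on the current percept, the agent cannot distinguish two situations that produce the same percept; that limitation motivates the state-keeping agents described next.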
Agents that keep track of the world
The agent may need to maintain some internal state to remember the past, as contained in earlier percepts.
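A minimal sketch of such an agent, reusing the hypothetical two-square vacuum world: the internal `cleaned` set is state built up from past percepts and used to choose the next action.

```python
# Agent that keeps track of the world: internal state remembers
# which squares it has already cleaned (illustrative example).
class StatefulVacuumAgent:
    def __init__(self):
        self.cleaned = set()   # internal state from earlier percepts

    def act(self, percept):
        location, status = percept
        if status == "dirty":
            self.cleaned.add(location)
            return "suck"
        # Use memory: head for a square not yet known to be clean.
        return "right" if "B" not in self.cleaned else "left"

agent = StatefulVacuumAgent()
print(agent.act(("A", "dirty")))  # -> suck
print(agent.act(("A", "clean")))  # -> right
```

Unlike the pure reflex agent, two identical percepts can now yield different actions, because the stored history differs.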
Goal-based agents
The goal-based agent is flexible with respect to reaching different destinations: simply by specifying a new destination, we can get the goal-based agent to come up with new behavior.
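That flexibility can be sketched with a small route planner: the behavior (the plan) is derived from the goal, so changing the destination changes the plan with no change to the agent's rules. The road map and breadth-first search here are an illustrative choice, not a prescribed method.

```python
from collections import deque

# Goal-based agent sketch: plan a route to whatever goal is given.
# The road map is hypothetical.
ROADS = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def plan_route(start, goal):
    # Breadth-first search from start to goal.
    frontier, parents = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:   # reconstruct the path
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nxt in ROADS[node]:
            if nxt not in parents:
                parents[nxt] = node
                frontier.append(nxt)
    return None                        # goal unreachable

print(plan_route("A", "D"))  # -> ['A', 'B', 'D']
```

Calling `plan_route("A", "C")` instead yields a different plan from the same agent, which is exactly the flexibility the goal-based design buys.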
Utility-based agents
When there are multiple possible ways to achieve a goal, how does the agent decide which one is best? One alternative may be quicker, safer, more reliable, or cheaper than another; a utility function captures this distinction between a happier and a less happy state.
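A minimal sketch of that choice, assuming two made-up routes to the same destination: each alternative is scored by a utility function, and the agent picks the maximum. The weights are purely illustrative.

```python
# Utility-based choice among alternatives that all reach the goal.
# Routes and weights are hypothetical.
routes = [
    {"name": "highway", "time": 30, "risk": 0.3},
    {"name": "back roads", "time": 45, "risk": 0.05},
]

def utility(route):
    # Happier states are faster and safer; the trade-off
    # between time and risk is encoded in the weights.
    return -route["time"] - 100 * route["risk"]

best = max(routes, key=utility)
print(best["name"])  # -> back roads
```

With these weights the slower but safer route scores higher; shifting the weights would flip the decision, which is how the utility function encodes the agent's preferences.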
Environment properties
- Accessible vs. Inaccessible.
- Deterministic vs. Non-Deterministic.
- Episodic vs. Nonepisodic.
- Static vs. Dynamic.
- Discrete vs. Continuous.
Accessible vs. Inaccessible
An environment is accessible if the agent's sensors detect all aspects that are relevant to the choice of action. If the sensors do not detect all relevant aspects of the environment, the environment is inaccessible to that agent.
Deterministic vs. Nondeterministic
If the next state of the environment is entirely determined by the current state and the actions selected by the agent, the environment is deterministic. If the next state cannot be determined from the current state and the agent's actions, the environment is nondeterministic.
Episodic vs. Nonepisodic
In an episodic environment, the agent's experience is divided into "episodes." Each episode consists of the agent perceiving and then acting. Episodic environments are much easier because the agent does not need to think ahead. Nonepisodic environments are more complex for an agent: the agent needs to think ahead, because current actions affect what comes later.
Static vs. Dynamic
If the environment does not change while an agent is deliberating, the environment is static for that agent. Static environments are easy to deal with because the agent need not keep observing the world, or worry about the passage of time, while it is deciding on an action. If the environment can change while the agent is deliberating, the environment is dynamic for that agent. Dynamic environments are difficult to deal with because the agent must keep observing the current state of the world while acting, and the passage of time matters.
Discrete vs. Continuous
If there are a limited number of distinct, clearly defined percepts and actions, the environment is discrete. Chess is discrete: there is a fixed number of possible moves on each turn. Discrete environments are easy for an agent to deal with because the percepts and actions are clear and well defined.
Driving a self-driving car is continuous: the speed and location of the car and the other vehicles sweep through a range of continuous values. Continuous environments are complex to deal with because they do not give the agent cleanly delimited percepts and actions, so the agent needs to keep track of the world.