Introduction to Artificial Intelligence
· Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and act like humans.
· It involves the development of algorithms and computer programs that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making and language translation.
· The focus of artificial intelligence is on understanding human behavior and performance.
· This can be done by creating computers with human-like intelligence and capabilities.
· These include natural language processing, facial analysis and robotics.
Applications of Artificial Intelligence
1. Artificial intelligence in E-Commerce
· Artificial intelligence helps to make appropriate suggestions and recommendations based on the user's search history and viewing preferences.
· There are also AI chatbots that are used to provide customer support instantly and help to reduce complaints and queries to a great extent.
2. Artificial intelligence in Education
· It helps the faculty as well as the students by making course recommendations, analyzing data and supporting decisions about the student, etc.
· Automated messages to students and parents regarding vacations and test results are sent by artificial intelligence these days.
3. Healthcare
· Artificial intelligence uses the medical history and current condition of a particular person to predict future diseases.
· Artificial intelligence is also used to find the currently vacant beds in the hospitals of a city, which saves time for patients who are in emergency conditions.
4. Automobiles
· AI is used to detect the traffic on the street and provide the best route, out of all the available routes, to the driver.
· It uses sensors, GPS technology and control signals to guide the vehicle along the best path.
5. Surveillance
· Artificial intelligence is also used in the field of surveillance by recognizing faces and objects at a distance.
· Event-recognition capabilities are then applied to these faces and objects. This helps the military to protect their areas and prevent any attack in real time.
Turing Test
· The Turing Test was introduced by Alan Turing in 1950.
· The Turing Test is used to check whether a machine can think like a human or not.
· The Turing Test requires three terminals, each of which is physically separated from the other two.
· One terminal is operated by a computer, while the other two are operated by humans.
· During the test, one of the humans functions as the questioner, while the second human and the computer function as respondents.
Example conversation:
· Interrogator: Are you a computer?
· Player A (Computer): No
· Player B (Human): No
· Interrogator: Convert the decimal 45952 into binary.
· Player A: Long pause and gives the wrong answer.
· Player B: Long pause and gives the wrong answer.
In this game, if the interrogator is not able to identify which is a machine and which is a human, then the computer passes the test successfully, and the machine is said to be intelligent and able to think like a human.
Rational Agent Approach to AI:- The rational agent considers all possibilities and chooses to perform the most efficient action.
For example – it chooses the shortest path with low cost for high efficiency. PEAS stands for Performance measure, Environment, Actuators and Sensors.
1. Performance Measure:- The performance measure is the unit used to define the success of an agent. Performance varies between agents based on their different percepts.
2. Environment:- The environment is the surrounding of an agent at every instant. It keeps changing with time if the agent is set in motion.
There are five types of environments:
· Fully observable & Partially observable
· Episodic & Sequential
· Static & Dynamic
· Discrete & Continuous
· Deterministic & Stochastic
3. Actuator:- An actuator is the part of the agent that delivers the output of an action to the environment.
4. Sensor:- Sensors are the receptive parts of an agent that take in input from the environment. A PEAS description for a sample agent is sketched below.
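As an illustration (not from the original notes; the entries are common textbook examples for a self-driving taxi agent), a PEAS description can be written down directly, for instance as a small Python dictionary:

# Hypothetical PEAS description for a self-driving taxi agent (illustrative only).
peas_taxi = {
    "Performance measure": ["safety", "speed", "legal driving", "passenger comfort"],
    "Environment": ["roads", "traffic", "pedestrians", "customers"],
    "Actuators": ["steering wheel", "accelerator", "brake", "horn"],
    "Sensors": ["cameras", "GPS", "speedometer", "odometer"],
}

for component, examples in peas_taxi.items():
    print(f"{component}: {', '.join(examples)}")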
Intelligent Agent & Its Structure
1. Agents:
An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking and acting.
2. Basic Terminology:
· Sensor: A sensor is a device which detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.
· Actuators: Actuators are the components of a machine that convert energy into motion. The actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
· Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins and a display screen.
3. Intelligent Agent
An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.
Following are the main four rules for an AI agent:
· Rule 1:- An AI agent must have the ability to perceive the environment.
· Rule 2:- The observation must be used to make decisions.
· Rule 3:- Decisions should result in an action.
· Rule 4:- The action taken by an agent must be a rational action.
4. Structure of AI Agent
To understand the structure of intelligent agents, we should be familiar with architecture and agent programs.
· Architecture is the machinery that the agent executes on. It is a device with sensors and actuators, for example – a robotic car, a camera or a PC.
· The Agent function is used to map a percept sequence to an action.
· An Agent Program is an implementation of an agent function.
Agent = Architecture + Agent Program
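To make the formula concrete, here is a minimal Python sketch (the percept and action names are invented for illustration) of an agent program that maps percepts to actions; the architecture would supply the real percepts:

# Minimal sketch of an agent program: maps percepts to actions.
# The percept/action names here are illustrative, not from any specific system.

def agent_program(percept):
    """Agent function implemented as a simple lookup table."""
    rules = {
        "dirty": "clean",
        "clean": "move",
        "obstacle": "turn",
    }
    return rules.get(percept, "wait")  # default action if the percept is unknown

# The "architecture" would feed real sensor readings in; here we simulate them.
for percept in ["dirty", "clean", "obstacle", "unknown"]:
    print(percept, "->", agent_program(percept))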
5. Types of AI Agents:
1. Simple Reflex Agent:-
· The simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percepts and ignore the rest of the percept history.
· These agents only succeed in a fully observable environment.
· The simple reflex agent does not consider any part of the percept history during its decision and action process.
· The simple reflex agent works on the condition-action rule, which means it maps the current state to an action. An example is a room cleaner agent: it works only if there is dirt in the room.
· For example – a thermostat turns the heater on or off based on the current temperature (see the sketch below).
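A minimal sketch of the thermostat as a simple reflex agent (the temperature thresholds are illustrative assumptions), acting on the current percept alone:

# Simple reflex agent: the decision depends only on the current percept.
# Threshold values are illustrative assumptions.

def thermostat_agent(current_temperature_c):
    """Condition-action rules mapping the current temperature to an action."""
    if current_temperature_c < 18:
        return "turn heater on"
    elif current_temperature_c > 22:
        return "turn heater off"
    return "do nothing"

for temp in [15, 20, 25]:
    print(temp, "°C ->", thermostat_agent(temp))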
2. Model-Based Reflex Agent:-
· The model-based agent can work in a partially observable environment and track the situation.
· It maintains an internal state that keeps track of the percept history.
· It uses "a model of the world" to handle more complex environments.
· It considers updating its internal state based on how its actions change the environment.
· For example – a robot vacuum which remembers where it has already cleaned, to avoid redundant cleaning (see the sketch below).
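A rough sketch (the grid positions and actions are illustrative assumptions) of a model-based robot vacuum that keeps an internal record of cells it has already cleaned:

# Model-based reflex agent: keeps internal state (cells already cleaned).
# The grid positions and action names are illustrative assumptions.

class VacuumAgent:
    def __init__(self):
        self.cleaned = set()  # internal model: which cells have been cleaned

    def act(self, position, is_dirty):
        if is_dirty:
            self.cleaned.add(position)
            return "clean"
        if position in self.cleaned:
            return "move to next cell"  # avoid redundant cleaning
        return "inspect cell"

agent = VacuumAgent()
print(agent.act((0, 0), is_dirty=True))   # clean
print(agent.act((0, 0), is_dirty=False))  # move to next cell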
3. Goal-Based Agent:-
· The knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
· The agent needs to know its goal, which describes desirable situations.
· Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
· They choose an action so that they can achieve the goal.
· For example – a GPS navigation system plans the best route to a destination.
4. Utility-Based Agent:-
· These agents are similar to the goal-based agent but provide an extra component of utility measurement, which makes them different by providing a measure of success at a given state.
· Utility-based agents act based not only on goals but also on the best way to achieve the goal.
· The utility-based agent is useful when there are multiple possible alternatives, and the agent has to choose the best action to perform.
· For example – an investment advisor chooses investment strategies based on risk and return to maximize profit (see the sketch below).
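A small sketch (the candidate strategies and the utility formula are invented for illustration) of how a utility-based agent ranks alternatives by a numeric utility rather than a simple goal test:

# Utility-based agent: scores each alternative and picks the one with the highest utility.
# The candidate strategies and the utility formula are illustrative assumptions.

strategies = [
    {"name": "bonds",    "expected_return": 0.04, "risk": 0.1},
    {"name": "stocks",   "expected_return": 0.09, "risk": 0.5},
    {"name": "balanced", "expected_return": 0.06, "risk": 0.3},
]

def utility(strategy, risk_aversion=0.1):
    """Higher return is better; risk reduces utility in proportion to risk aversion."""
    return strategy["expected_return"] - risk_aversion * strategy["risk"]

best = max(strategies, key=utility)
print("Chosen strategy:", best["name"])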
5. Learning Agent:-
· A learning agent in AI is a type of agent which can learn from its past experiences, i.e. it has learning capabilities.
· It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
· Learning agents are able to learn, analyze their performance and look for new ways to improve that performance.
· For example – a recommendation system learns user preferences to provide better recommendations over time.
UNIT- 2
PROBLEM CHARACTERISTICS
Heuristics cannot be generalized,
as they are domain specific. Production systems provide ideal techniques for
representing such heuristics in the form of IF-THEN rules. Most problems requiring
simulation of intelligence use heuristic search extensively. Some heuristics
are used to define the control structure that guides the search process, as
seen in the example described above. But heuristics can also be encoded in the
rules to represent the domain knowledge. Since most AI problems make use of
knowledge and guided search through the knowledge, AI can be described as the
study of techniques for solving exponentially hard problems in polynomial time by
exploiting knowledge about the problem domain.
To use the heuristic search for problem solving, we suggest analysis of
the problem for the following considerations:
· Decomposability of the problem into a set of independent smaller sub-problems
· Possibility of undoing solution steps, if they are found to be unwise
· Predictability of the problem universe
· Possibility of obtaining an obvious solution to a problem without comparison of all other possible solutions
· Type of the solution: whether it is a state or a path to the goal state
· Role of knowledge in problem solving
· Nature of the solution process: with or without interacting with the user
The general classes of engineering
problems such as planning, classification, diagnosis, monitoring and design are
generally knowledge intensive and use a large amount of heuristics. Depending
on the type of problem, the knowledge representation schemes and control
strategies for search are to be adopted. Combining heuristics with the two basic search strategies has been discussed above. There are a number of other general-purpose search techniques which are essentially heuristics-based. Their efficiency primarily depends on how they exploit the domain-specific knowledge to prune undesirable paths. Such search methods are called ‘weak methods’, since the progress of the search depends heavily on the way the domain knowledge is exploited. A few such search techniques, which form the centre of many AI systems, are briefly presented in the following sections.
Problem Decomposition
A large problem can often be solved by breaking it into smaller problems, each of which we can solve by using a small collection of specific rules. Using this technique of problem decomposition, we can solve very large problems easily. This can be considered intelligent behavior.
Can Solution Steps be Ignored?
Suppose we are trying to prove a mathematical theorem: we first proceed on the assumption that proving a lemma will be useful. Later we realize that it is not at all useful, and we start with another approach to prove the theorem. Here we simply ignore the first attempt. Now consider solving the 8-puzzle: we make a wrong move and realize the mistake. Here, the control strategy must keep track of all the moves, so that we can backtrack to the initial state and start with some new move. Finally, consider the problem of playing chess. Here, once we make a move, we can never recover from that step. These situations are illustrated by the three important classes of problems mentioned below:
1. Ignorable, in which solution
steps can be ignored. Eg: Theorem Proving
2. Recoverable, in which solution
steps can be undone. Eg: 8-Puzzle
3. Irrecoverable, in which solution
steps cannot be undone. Eg: Chess
Is the Problem Universe Predictable?
Consider the 8-Puzzle problem.
Every time we make a move, we know exactly what will happen. This means that it
is possible to plan an entire sequence of moves and be confident what the
resulting state will be. We can backtrack to earlier moves if they prove
unwise.
Suppose we want to play Bridge. We
need to plan before the first play, but we cannot play with certainty. So, the
outcome of this game is very uncertain. In case of 8-Puzzle, the outcome is
very certain. To solve uncertain outcome problems, we follow the process of
plan revision as the plan is carried out and the necessary feedback is provided.
The disadvantage is that the planning in this case is often very expensive.
Is Good Solution Absolute or Relative?
Consider the problem of answering
questions based on a database of simple facts such as the following:
1. Siva was a man.
2. Siva was a worker in a company.
3. Siva was born in 1905.
4. All men are mortal.
5. All workers in the company died when there was an accident in 1952.
6. No mortal lives longer than 100 years.
Suppose we ask a question: ‘Is Siva
alive?’
By representing these facts in a
formal language, such as predicate logic, and then using formal inference
methods we can derive an answer to this question easily. There are two ways to
answer the question shown below:
Method I:
1. Siva was a man.
2. Siva was born in 1905.
3. All men are mortal.
4. Now it is 2008, so Siva’s age is
103 years.
5. No mortal lives longer than 100
years.
Method II:
1. Siva is a worker in the company.
2. All workers in the company died
in 1952.
Answer: So Siva is not alive. This is the answer obtained from both of the above methods.
We are interested in answering the question; it does not matter which path we follow. If we follow one path successfully to the correct answer, then there is no reason to go back and check whether another path also leads to the solution.
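For illustration only (the encoding is an assumption; the facts and the reference year 2008 are taken from the text above), here is a short Python sketch that derives the answer in the style of Method I:

# Fact-based reasoning sketch for the question "Is Siva alive?" (Method I style).
# The facts and the reference year come from the notes; the encoding is illustrative.

facts = {
    "siva_is_man": True,
    "siva_birth_year": 1905,
    "all_men_are_mortal": True,
    "max_mortal_lifespan": 100,
    "current_year": 2008,
}

def is_siva_alive(f):
    if f["siva_is_man"] and f["all_men_are_mortal"]:
        age = f["current_year"] - f["siva_birth_year"]
        if age > f["max_mortal_lifespan"]:
            return False  # no mortal lives longer than 100 years
    return None  # cannot conclude from these facts alone

print("Is Siva alive?", is_siva_alive(facts))  # False -> Siva is not alive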
Production System
A production system or production rule system is a computer program, typically used to provide some form of artificial intelligence, which consists primarily of a set of rules about behavior but also includes the mechanisms necessary to follow those rules as the system responds to states of the world.
Components:-
The major components of a production system in artificial intelligence are:
I. Global Database:- The global database is the central data structure used by the production system in artificial intelligence.
II. Set of Production Rules:- The production rules operate on the global database. Each rule usually has a precondition that is either satisfied or not by the global database. If the precondition is satisfied, the rule can be applied. The application of the rule changes the database.
III. A Control System:- The control system chooses which applicable rule should be applied and ceases computation when a termination condition on the database is satisfied. If multiple rules are eligible to fire at the same time, the control system resolves the conflict.
Production System in Artificial Intelligence: Example
Problem Statement:-
We have two jugs of capacity 5 L and 3 L (litres) and a tap with an endless supply of water. The objective is to obtain exactly 4 litres in the 5-litre jug in the minimum number of steps possible.
Production Rules:
1. Fill the 5-litre jug from the tap
2. Empty the 5-litre jug
3. Fill the 3-litre jug from the tap
4. Empty the 3-litre jug
5. Empty the 3-litre jug into the 5-litre jug
6. Empty the 5-litre jug into the 3-litre jug
7. Pour water from the 3-litre jug into the 5-litre jug until the 5-litre jug is full
8. Pour water from the 5-litre jug into the 3-litre jug until the 3-litre jug is full, without emptying the 5-litre jug completely
Solution:
1, 8, 4, 6, 1, 8 or 3, 5, 3, 7, 2, 5, 3, 5
(The numbers refer to the production rules above, applied in order.)
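The following is a minimal sketch (not part of the original notes; the state encoding is an assumption) of how a control system could search over these production rules with breadth-first search to reach 4 litres in the 5-litre jug:

# Water jug problem as a production system searched with BFS.
# State = (amount in 5-litre jug, amount in 3-litre jug); rule numbers match the list above.
from collections import deque

def successors(state):
    five, three = state
    yield 1, (5, three)                      # 1. fill the 5-litre jug
    yield 2, (0, three)                      # 2. empty the 5-litre jug
    yield 3, (five, 3)                       # 3. fill the 3-litre jug
    yield 4, (five, 0)                       # 4. empty the 3-litre jug
    if five + three <= 5:
        yield 5, (five + three, 0)           # 5. empty 3-litre jug into 5-litre jug
    if five + three <= 3:
        yield 6, (0, five + three)           # 6. empty 5-litre jug into 3-litre jug
    pour = min(three, 5 - five)
    yield 7, (five + pour, three - pour)     # 7. pour 3 into 5 until 5 is full
    pour = min(five, 3 - three)
    yield 8, (five - pour, three + pour)     # 8. pour 5 into 3 until 3 is full

def solve(start=(0, 0), goal_amount=4):
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state[0] == goal_amount:
            return plan
        for rule, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [rule]))
    return None

print(solve())  # e.g. [1, 8, 4, 6, 1, 8]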
Control/Search Strategies
How would you decide which rule to apply while searching for a solution to any problem? There are certain requirements for a good control strategy that you need to keep in mind, such as:
Ø The first requirement for a good control strategy is that it should cause motion.
Ø The second requirement for a good control strategy is that it should be systematic.
Ø Finally, it must be efficient in order to find a good answer.
1. Search Algorithm
Artificial intelligence is the study of building agents that act rationally. Most of the time, these agents perform some kind of search algorithm in the background in order to achieve their tasks.
A Search Problem Consists of:
· A state space – the set of all possible states where you can be.
· A start state – the state from where the search begins.
· A goal state – a function that looks at the current state and returns whether or not it is the goal state.
· The solution to a search problem is a sequence of actions, called the plan, that transforms the start state into the goal state.
· This plan is achieved through a search algorithm.
· Based on the search problem, we can classify the search algorithms into Uninformed Search (Blind Search) and Informed Search (Heuristic Search) algorithms.
· The five components of a node in a search algorithm are (see the sketch after the list):
i) State:- The state in the state space to which the node corresponds.
ii) Parent Node:- The node in the search tree that generated this node.
iii) Action:- The action that was applied to the parent to generate the node.
iv) Path Cost:- The cost of the path from the initial state to the node.
v) Depth:- The number of steps along the path from the initial state.
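A minimal sketch (the class layout is an assumption, not a prescribed implementation) of a search-tree node carrying these five components:

# Search-tree node holding the five components listed above.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # state in the state space this node corresponds to
    parent: Optional["Node"] = None  # node in the search tree that generated this node
    action: Any = None               # action applied to the parent to generate this node
    path_cost: float = 0.0           # cost of the path from the initial state to this node
    depth: int = 0                   # number of steps along the path from the initial state

root = Node(state="A")
child = Node(state="B", parent=root, action="go-right", path_cost=1.0, depth=1)
print(child)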
2. Evaluation
Search strategies are evaluated based on:
· Completeness – does it always find a solution if one exists?
· Time complexity – number of nodes generated.
· Space complexity – maximum number of nodes in memory.
· Optimality – does it always find a least-cost solution?
· Systematicity – does it visit each state at most once?
3. Uninformed Search
· Uninformed search in AI refers to a type of search algorithm that does not use additional information to guide the search process.
· Instead, these algorithms explore the search space in a systematic but blind manner, without considering the cost of reaching the goal or the likelihood of finding a solution.
· Examples of uninformed search algorithms include:
¨ Depth First Search
¨ Breadth First Search
¨ Uniform Cost Search
1. Depth First Search – Depth First Search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node, goes as deep as possible along each branch, and then backtracks. It is implemented using a stack data structure that works on the last-in, first-out (LIFO) principle.
m = maximum depth of the search tree = the number of levels of the search tree.
b = branching factor (average number of successors/children each node has).
Time complexity – equivalent to the number of nodes traversed in DFS: O(b^m)
Space complexity – equivalent to how large the fringe can get: O(b × m)
Completeness – DFS is complete if the search tree is finite, meaning that for a given finite search tree, DFS will come up with a solution if one exists.
Optimality – DFS is not optimal, meaning the number of steps in reaching the solution, or the cost spent in reaching it, may be high. A minimal sketch of DFS is given below.
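An illustrative Python sketch of depth-first search using an explicit stack (the example graph and node labels are made up):

# Depth-first search on a graph using an explicit stack (LIFO).
# The example graph and node labels are illustrative assumptions.

def dfs(graph, start, goal):
    stack = [(start, [start])]          # each entry: (current node, path taken so far)
    visited = set()
    while stack:
        node, path = stack.pop()        # LIFO: expand the most recently added node
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            stack.append((neighbor, path + [neighbor]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["G"], "G": []}
print(dfs(graph, "A", "G"))  # e.g. ['A', 'C', 'E', 'G']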
2. Breadth First Search – Breadth First Search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the tree root and explores all of the neighbor nodes at the present depth prior to moving on to the nodes at the next depth level. It uses the queue data structure, which works on the first-in, first-out (FIFO) principle. It is a complete algorithm, as it returns a solution if a solution exists.
d = the depth of the shallowest solution.
b = branching factor (average number of successors/children each node has).
Time complexity – equivalent to the number of nodes traversed in BFS until the shallowest solution: O(b^d)
Space complexity – equivalent to how large the fringe can get: O(b^d)
Completeness – BFS is complete, meaning that for a given search tree, BFS will come up with a solution if one exists.
Optimality – BFS is optimal as long as the costs of all edges are equal. A minimal sketch of BFS is given below.
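A matching Python sketch of breadth-first search using a FIFO queue (same illustrative graph as in the DFS sketch):

# Breadth-first search on a graph using a FIFO queue.
# The example graph and node labels are illustrative assumptions.
from collections import deque

def bfs(graph, start, goal):
    queue = deque([(start, [start])])   # each entry: (current node, path taken so far)
    visited = {start}
    while queue:
        node, path = queue.popleft()    # FIFO: expand the earliest added node first
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["G"], "G": []}
print(bfs(graph, "A", "G"))  # ['A', 'C', 'E', 'G'] (shallowest path in this graph)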
Hill Climbing
The hill climbing search algorithm is simply a loop that continuously moves in the direction of increasing value. It stops when it reaches a "peak" where no neighbour has a higher value. This algorithm is considered to be one of the simplest procedures for implementing heuristic search. The name comes from the idea that if you are trying to find the top of a hill, you keep moving uphill from wherever you are. This heuristic combines the advantages of both depth-first and breadth-first searches into a single method.
The name hill climbing is derived from simulating the situation of a person climbing a hill. The person tries to move forward in the direction of the top of the hill. His movement stops when he reaches the peak and no neighbouring point has a higher value of the heuristic function. Hill climbing uses knowledge about the local terrain, providing a very useful and effective heuristic for eliminating much of the unproductive search space. It is guided by a local evaluation function.
Hill climbing is a variant of generate-and-test in which feedback from the test procedure decides in which direction the search should proceed. At each point in the search path, the successor node that appears to lead closest to the goal is selected for exploration.
Algorithm
Step 1: Evaluate the starting state. If it is a goal state, then stop and return success.
Step 2: Else, continue with the starting state as the current state.
Step 3: Repeat Step 4 until a solution is found, i.e. until there are no new operators left to be applied to the current state.
Step 4:
a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.
b) Evaluate the new state:
i) If the new state is a goal state, then stop and return success.
ii) If it is better than the current state, then make it the current state and proceed further.
iii) If it is not better than the current state, then continue in the loop until a solution is found.
Step 5: Exit.
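A compact Python sketch of this hill climbing loop (the objective function and neighbour generator are illustrative assumptions):

# Simple hill climbing: keep moving to a better neighbour until none exists.
# The objective function and neighbour step are illustrative assumptions.

def hill_climb(start, objective, neighbours, max_steps=1000):
    current = start
    for _ in range(max_steps):
        best_next = max(neighbours(current), key=objective, default=None)
        if best_next is None or objective(best_next) <= objective(current):
            return current              # reached a peak: no neighbour is better
        current = best_next
    return current

# Example: maximize f(x) = -(x - 3)^2 over integer x, stepping by +/- 1.
objective = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(0, objective, neighbours))  # 3 (the peak of this simple landscape)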
State the advantages and disadvantages of Hill Climbing.
Ans: Advantages
· The hill climbing technique is useful in job shop scheduling, automatic programming, design, vehicle routing and portfolio management.
· It is also helpful for solving pure optimization problems, where the objective is to find the best state according to the objective function.
· It requires far fewer conditions than other search techniques.
Disadvantages
· The question that remains with hill climbing search is whether the hill reached is the highest hill possible.
· Unfortunately, without further extensive exploration, this question cannot be answered.
· This technique works, but because it uses only local information it can be fooled.
· The algorithm does not maintain a search tree, so the current node data structure need only record the state and its objective function value. It assumes that local improvement will lead to global improvement.