Goals and Search in Artificial Intelligence

We will consider a goal to be a set of world states - just those states in which the goal is satisfied.

Actions can be viewed as causing transitions between world states.

Looking for Parking

Going home; need to find street parking
Formulate Goal:
   - Car is parked
Formulate Problem:
  - States: street segments (with or without parking) and the car's current segment
  - Actions: drive between street segments
Find solution:
  - A sequence of street segments, ending at a segment with parking

Search Example

Formulate goal: Be in Bucharest.

Formulate problem: states are cities, and operators are driving between pairs of cities.

Find solution: Find a sequence of cities (e.g., Arad, Sibiu, Fagaras, Bucharest) that leads from the current state to a state meeting the goal condition.
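As a rough sketch of this search in Python (the road links below are assumed for illustration and cover only a few cities), the map can be stored as an adjacency dictionary and searched breadth-first for a route from Arad to Bucharest:

from collections import deque

# Assumed fragment of the Romania road map, for illustration only.
roads = {
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Zerind": ["Arad"],
    "Timisoara": ["Arad"],
    "Rimnicu Vilcea": ["Sibiu"],
    "Bucharest": ["Fagaras"],
}

def find_route(start, goal):
    """Breadth-first search: returns a sequence of cities from start to goal."""
    frontier = deque([[start]])              # each entry is a partial path
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:                 # goal test: are we in Bucharest?
            return path
        for city in roads.get(path[-1], []):
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])
    return None

print(find_route("Arad", "Bucharest"))       # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']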

Problem Formulation :-
A search problem is defined by the following (see the sketch after this list):

1. Initial state (e.g., Arad)
2. Operators (e.g., Arad -> Zerind, Arad -> Sibiu, etc.)
3. Goal test (e.g., at Bucharest)
4. Solution cost (e.g., path cost)
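A minimal sketch of these four components in Python (the class and field names here are illustrative, not a standard API):

class SearchProblem:
    """Bundles the four components: initial state, operators, goal test, cost."""

    def __init__(self, initial_state, operators, goal_test, step_cost):
        self.initial_state = initial_state
        self.operators = operators      # state -> list of (action, next_state)
        self.goal_test = goal_test      # state -> True/False (e.g., at Bucharest?)
        self.step_cost = step_cost      # (state, action, next_state) -> number

    def solution_cost(self, steps):
        """Path cost: sum of the step costs along a list of (action, next_state) steps."""
        total, state = 0, self.initial_state
        for action, next_state in steps:
            total += self.step_cost(state, action, next_state)
            state = next_state
        return total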

Example (2) Vacuum World :-




Example Problem (The 8-Puzzle) :-

States: tile locations

Initial state: one specific tile configuration

Operators: move blank tile left, right, up, or down

Goal: tiles are numbered from one to eight around the square

Path cost: cost of 1 per move (the solution cost is the same as the number of moves, or path length)
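A small sketch of this formulation in Python, assuming the state is a tuple of nine entries read row by row with 0 for the blank, and assuming one particular goal arrangement of the tiles around the square:

# State: a tuple of 9 entries read row by row; 0 marks the blank tile.
# The goal below is one assumed way of placing tiles 1..8 around the square.
GOAL = (1, 2, 3,
        8, 0, 4,
        7, 6, 5)

def operators(state):
    """Yield (action, next_state) pairs for moving the blank left, right, up, or down."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, dr, dc in (("up", -1, 0), ("down", 1, 0), ("left", 0, -1), ("right", 0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            target = r * 3 + c
            tiles = list(state)
            tiles[blank], tiles[target] = tiles[target], tiles[blank]
            yield action, tuple(tiles)   # each move has a path cost of 1

def goal_test(state):
    return state == GOAL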

Single-State Problem and Multiple-States Problem :-

If the world is accessible -> the agent's sensors give it enough information to know exactly which state it is in (so it knows what each of its actions does), and it can calculate exactly which state it will be in after any sequence of actions. This is a Single-State Problem.

If the world is inaccessible -> the agent has only limited access to the world state; it may have no sensors at all. It then knows only that the initial state is one of the set {1,2,3,4,5,6,7,8}. This is a Multiple-States Problem.
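For the multiple-states case, a quick sketch: the agent keeps a set of the states it might be in and applies each action to every member of that set (the transition function below is a stand-in for illustration, not the real vacuum-world model):

def update_belief(possible_states, action, transition):
    """With no sensors, track the set of states the agent could be in
    and apply the action to every member of that set."""
    return {transition(state, action) for state in possible_states}

# Initial belief: the agent only knows it is in one of states 1..8.
belief = {1, 2, 3, 4, 5, 6, 7, 8}

# Stand-in transition (assumption): "Right" moves to the next-numbered state.
belief = update_belief(belief, "Right", lambda s, a: min(s + 1, 8))
print(belief)   # the smaller set of states the agent might now be in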

Think of a graph defined as follows:-

- The nodes denote descriptions of a state of the world, e.g., which blocks are on top of what in a scene, and the links represent actions that change one state into another.
- A path through such a graph (from a start node to a goal node) is a "plan of action" to achieve some desired goal state from some known starting state. It is this type of graph that is of more general interest in AI.


Searching for Solutions :-
(Visualize Search Space as a Tree)

- States are nodes
- Actions are edges
- Initial state is root
- Solution is path from root to goal node
- Edges sometimes have associated costs
- States resulting from applying an operator are its children (sketched below)
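A minimal sketch of such a tree node in Python (field names are illustrative; operators are assumed to return (action, next_state, step_cost) triples):

class Node:
    """A node in the search tree: wraps a state and remembers how it was reached."""

    def __init__(self, state, parent=None, action=None, cost=0):
        self.state = state        # the world state this node denotes
        self.parent = parent      # None for the root (initial state)
        self.action = action      # the edge (action) taken from the parent
        self.cost = cost          # path cost accumulated from the root

    def children(self, operators):
        """States resulting from applying the operators become child nodes."""
        return [Node(next_state, self, action, self.cost + step)
                for action, next_state, step in operators(self.state)]

    def solution(self):
        """Follow parent links back to the root to recover the root-to-goal path."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))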



Directed Graphs :-

A graph is also a set of nodes connected by links, but loops are allowed and a node can have multiple parents.

We have two kinds of graphs to deal with. The first is the directed graph, where the links have direction (one-way streets).
Undirected Graphs :-

Undirected graphs are graphs where the links go both ways. You can think of an undirected graph as shorthand for a graph with directed links going each way between connected nodes.
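As a small sketch in Python, both kinds can be stored as adjacency lists; an undirected link is simply recorded in both directions (the letter-named nodes are illustrative):

# Directed graph: each link goes one way (like a one-way street).
directed = {"A": ["B"], "B": ["C"], "C": []}

# Undirected graph: shorthand for a pair of directed links, one each way.
def add_undirected_link(graph, u, v):
    graph.setdefault(u, []).append(v)
    graph.setdefault(v, []).append(u)

undirected = {}
add_undirected_link(undirected, "A", "B")
add_undirected_link(undirected, "B", "C")
print(undirected)   # {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}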


Searching for solutions (Graphs and trees) :-

The map of all paths within a state-space is a graph of nodes which are connected by links.

Now if we trace out all possible paths through the graph, and terminate paths before they return to nodes already visited on that path, we produce a search tree.

Like graphs, trees have nodes, but they are linked by branches.

The start node is called the root and nodes at the other ends are leaves.

Nodes have generations of descendants.

The aim of search is not to produce a complete physical tree in memory, but rather to explore as little of the virtual tree as possible while looking for root-goal paths.
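A short sketch of tracing out the tree in Python: each path is extended only to nodes it has not already visited, so loops in the graph do not produce infinite branches (the small graph below is illustrative):

def root_goal_paths(graph, node, goal, path=None):
    """Trace paths through the graph, terminating any path before it returns
    to a node already visited on that path; the results are the root-goal
    paths of the corresponding search tree."""
    path = (path or []) + [node]
    if node == goal:
        yield path
        return
    for neighbour in graph.get(node, []):
        if neighbour not in path:          # cut the branch instead of looping
            yield from root_goal_paths(graph, neighbour, goal, path)

# Illustrative graph containing a loop (A -> B -> C -> A).
g = {"A": ["B", "C"], "B": ["C", "A"], "C": ["A", "D"], "D": []}
for p in root_goal_paths(g, "A", "D"):
    print(p)    # ['A', 'B', 'C', 'D'] then ['A', 'C', 'D']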


 
