

Robotics and Artificial Intelligence

Compiled by : Abhijith HN


 

CONTENTS

I.   Introduction    

II.  Robotics

1. Definition

2. History of robotics

3. The mechanics of robots

i) Parts

ii) Mobility

iii) Control systems

4. Classification of robots

5. Applications of robots

6. The future of Robotics

III.  Artificial Intelligence

1. Definition

2. History of AI

3. Approaches to AI

4. Applications of AI

5. The future of AI

IV.  Summary

V. References.

 


INTRODUCTION

            Robotics and artificial intelligence are at the frontiers of technology. These relatively new branches of science have developed at a breathtaking rate. Less than a century ago the word "robot" was unheard of and the idea of "thinking machines" was considered fanciful even in science fiction. Today, the achievements in these fields stand testimony to the power of science.

            Robotics aims at the development and use of machines capable of independent and autonomous action. Robotic machines increasingly resemble humans and have the potential to perform several critical tasks as well as, and in some cases better than, humans. It has always been mankind's dream to play the role of a Creator, and in no other field has this dream come closer to realization. Robots are being used in industries, laboratories, hospitals and homes. With the exponential growth in this exciting field, there is no doubt that we are poised on the brink of a robo-revolution.

            Artificial intelligence (AI) is considered by many as the ultimate challenge for technology. The neurobiological basis of intelligence has long been a mystery. Exciting progress has been made in unraveling this mystery in the last few years, and this has provided an impetus to the nascent field of AI. In fact, the computational approach to AI has provided a useful framework for the study of the functioning of the brain. The field of AI is proving to be a unique platform where technology and neurobiology have come together in a mutually beneficial manner. The development of neural networks and parallel processing has tremendously increased computational power, which can find widespread use. AI has become an integral part of robots and lies at the heart of their "thinking" capacity. Apart from robotics, AI finds use in tasks which require vast computational resources, such as weather forecasting, speech recognition and language processing, and space technology, to mention only a few. Even something seemingly trivial like a game of chess has been an area of application of AI. In fact, chess is a prototype problem requiring intelligent analysis and has been a litmus test for much AI-enabled software.

 

 

 


 

 

Robotics 

 

 


 

DEFINITION OF ROBOTICS AND ROBOTS

            Robotics is the field of science concerned with robots. A robot may be defined as a reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks.

            Webster's dictionary describes a robot as an automatic device that performs functions normally ascribed to humans, or a machine in the form of a human.

HISTORY OF ROBOTS

            The word 'robot' was coined by the Czech playwright Karel Capek (pronounced "chop'ek") from the Czech word for forced labor or serf. The word was introduced in his play R.U.R. (Rossum's Universal Robots), which opened in Prague in January 1921. The play was an enormous success and productions soon opened throughout Europe and the US. R.U.R.'s theme, in part, was the dehumanization of man in a technological civilization. There is some evidence that the word robot was actually coined by Karel's brother Josef, a writer in his own right. In a short letter, Capek writes that he asked Josef what he should call the artificial workers in his new play. Karel suggested 'labori', which he thought too 'bookish'; his brother muttered "then call them robots" and turned back to his work. And so, from a curt response, we have the word robot.

            The term 'robotics' was coined and first used by the Russian-born American scientist and writer Isaac Asimov (born Jan. 2, 1920, died Apr. 6, 1992). Asimov wrote prodigiously on a wide variety of subjects, but was best known for his many works of science fiction. The most famous include I, Robot (1950), The Foundation Trilogy (1951-52), Foundation's Edge (1982), and The Gods Themselves (1972), which won both the Hugo and Nebula awards. The word 'robotics' was first used in "Runaround", a short story published in 1942. I, Robot, a collection of several of these stories, was published in 1950. Asimov also proposed his three "Laws of Robotics", to which he later added a 'zeroth law'.

 

 

 

 

Law Zero:

A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

Law One:

A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher order law.

Law Two:

A robot must obey orders given it by human beings, except where such orders would conflict with a higher order law.

Law Three:

A robot must protect its own existence as long as such protection does not conflict with a higher order law.

Though of fictional origin, the incorporation of these "Laws of Robotics" remains an ultimate goal for those involved in the development of intelligent, autonomous robots.

Possibly the earliest ancestor of today's industrial robot devices is the clepsydra, or water clock, which improved upon the hourglass by employing a siphon principle to automatically recycle itself. Ctesibius of Alexandria, a reputed physicist and inventor of ancient Greece, is said to have constructed one such clock around 250 BC. Weight-driven clocks were used in Europe in the Middle Ages. The spring-driven clock followed, and the 18th century witnessed the introduction of rudimentary forms of automatic machinery in the textile industry.

The Industrial Revolution stimulated the invention of elementary robot mechanisms to perfect the production of power itself. The later 19th and early 20th centuries saw a rapid proliferation of powerful machinery in industrial operations. These at first required a person to position both the work and the machine, and later only the work. Automatic cycle repeating machines (automatic washers), self-measuring and adjusting machines (textile colour-blending equipment), and machines with a degree of self-programming (automatic elevators) soon followed.

            The first modern industrial robots were the Unimates, developed by George Devol and Joe Engelberger in the late 50's and early 60's. The first patents were by Devol for parts transfer machines. Engelberger formed Unimation and was the first to market robots; as a result, he has been called the "father of robotics". Modern industrial arms have increased in capability and performance through controller and language development, improved mechanisms, sensing, and drive systems. In the early to mid 80's the robot industry grew at a rapid pace, primarily due to large investments by the automotive industry. The quick leap into the factory of the future turned into a plunge when the integration and economic viability of these efforts proved disastrous. The robot industry has only recently recovered to mid-80's revenue levels. In the meantime there has been an enormous shakeout in the robot industry. In the US, for example, only one American company, Adept, remains in the production industrial robot arm business. Most of the rest went under, consolidated, or were sold to European and Japanese companies.

In the research community the first automata were probably Grey Walter's machina speculatrix (1940's) and the Johns Hopkins Beast. Teleoperated or remote-controlled devices had been built even earlier, with at least the first radio-controlled vehicles built by Nikola Tesla in the 1890's. Tesla is better known as the inventor of the induction motor, AC power transmission, and numerous other electrical devices, but he had also envisioned smart mechanisms that were as capable as humans. SRI's Shakey navigated highly structured indoor environments in the late 60's, and Moravec's Stanford Cart was the first to attempt natural outdoor scenes in the late 70's. Since then there has been a proliferation of work on autonomous machines that cruise at highway speeds and navigate outdoor terrain in commercial applications.

THE MECHANICS OF ROBOTS

Parts:

A typical robot has a movable physical structure constituting the body, an actuator of some sort which provides for movement of body parts and locomotion, a power source, and a control system that coordinates all these elements.

 

Body:

The body houses the other parts of the robot. It is usually made of several articulated segments, which allow for mobility. Robots involved in handling objects have sophisticated manipulators that attempt to emulate the amazing dexterity of the human hand. Mobile robots have a locomotor apparatus, which may be in the form of wheels, traction belts or articulated limbs. Body parts are usually made of metals, plastics or composite materials.

Movement of body parts: Robots spin wheels and move jointed segments with some sort of actuator. Some robots use electric motors and solenoids as actuators; some use a hydraulic system; and some use a pneumatic system (a system driven by compressed gases). Robots may use all these actuator types. The actuators are all wired to an electrical circuit. The circuit powers electrical motors and solenoids directly, and it activates the hydraulic system by manipulating electrical valves. The valves determine the pressurized fluid's path through the machine. To move a hydraulic leg, for example, the robot's controller would open the valve leading from the fluid pump to a piston cylinder attached to that leg. The pressurized fluid would extend the piston, swiveling the leg forward. Typically, in order to move their segments in two directions, robots use pistons that can push both ways.

Robotic arms and manipulators: The arms and manipulators of a robot allow it to perform graded, skilled maneuvering and handling of objects. A typical robotic arm is made up of seven metal segments joined by six joints. The computer controls the robot by rotating individual step motors connected to each joint (some larger arms use hydraulics or pneumatics). Unlike ordinary motors, step motors move in exact increments. This allows the computer to move the arm very precisely, repeating exactly the same movement over and over again. The robot uses motion sensors to make sure it moves just the right amount. An industrial robot with six joints closely resembles a human arm: it has the equivalent of a shoulder, an elbow and a wrist. Typically, the shoulder is mounted to a stationary base structure rather than to a movable body. This type of robot has six degrees of freedom, meaning it can pivot in six different ways. A human arm, by comparison, has seven degrees of freedom.
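The idea of moving each joint by an exact increment can be made concrete with a toy forward-kinematics calculation for a simplified two-link planar arm. This is only a sketch: the function name, link lengths and angle convention are invented for illustration, not taken from any industrial controller.

```python
import math

def forward_kinematics(lengths, angles):
    """Planar forward kinematics: end-effector position of a chain of
    rigid links, given each joint's angle relative to the previous link."""
    x = y = 0.0
    heading = 0.0
    for link, angle in zip(lengths, angles):
        heading += angle               # each joint rotates the chain's heading
        x += link * math.cos(heading)  # walk along the link
        y += link * math.sin(heading)
    return x, y

# Two 1.0 m links, both joints at 0 rad: arm fully extended along x.
print(forward_kinematics([1.0, 1.0], [0.0, 0.0]))  # (2.0, 0.0)
```

Because the step motors move in exact increments, repeating the same sequence of joint angles reproduces exactly the same end-effector position.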

 

   

Fig. 1: Mechanical links and joints used in robotic arms and actuators.

 

The mechanical manipulator of an industrial robot is made up of a sequence of link and joint combinations. The links are the rigid members connecting the joints. The joints (also called axes) are the movable components of the robot that cause relative motion between adjacent links. As shown in Fig. 1, there are five principal types of mechanical joints used to construct the manipulator.

Locomotion: Mobile robots achieve locomotion by means of wheels, traction belts, mechanical limbs or some combination of these. Wheels provide a mechanically simple solution to the problem of locomotion. Wheeled locomotion is fast, reliable and can offer considerable maneuverability, but wheeled robots have difficulty surmounting obstacles. Wheeled locomotion is therefore well suited to indoor or simple outdoor scenarios with relatively smooth and predictable contours. Traction belts have the advantage of being able to overcome most obstacles but have limited speed and maneuverability; they are well suited to outdoor applications in rough terrain. Mechanical legs provide a great degree of maneuverability and flexibility but at present have limited speed. Stability is also a problem in robots based on legged platforms, and legged robots require sophisticated mechanical parts and control systems. Legged robots are suitable for all types of terrain, especially in situations requiring greater maneuverability.

Power source: Most modern robots use electrical power. Most robots either have a battery or they plug into the wall. Hydraulic robots also need a pump to pressurize the hydraulic fluid, and pneumatic robots need an air compressor or compressed air tanks.

Control system: The "brain" is a very critical component of robots, and the use of microchip-based computers in control systems has revolutionized robotics. The introduction of neural networks and AI will enable the development of increasingly human-like autonomous robots. The robot's computer controls everything attached to the circuit: to move the robot, the computer switches on all the necessary motors and valves.

Control systems may be of three basic types: 1) preprogrammed, 2) teleoperated, and 3) autonomous.

Preprogrammed control systems use preloaded data to function. Data may be coded on punched cards or other media. Such robots have limited versatility, though most are reprogrammable: to change the robot's behavior, one simply loads a new program into its computer.

Teleoperated control systems make use of transmitter-receiver systems to provide interactive, continuous control. These are more versatile but depend on skilled operators and are limited by the range of the transmitters.

Autonomous robots have sensory systems which provide continuous feedback to a sophisticated computer "brain" capable of taking appropriate decisions. The most common robotic sense is the sense of movement: the robot's ability to monitor its own motion. A standard design uses slotted wheels attached to the robot's joints. An LED on one side of the wheel shines a beam of light through the slots to a light sensor on the other side. When the robot moves a particular joint, the slotted wheel turns and the slots break the light beam as the wheel spins. The light sensor reads the pattern of the flashing light and transmits the data to the computer, which can tell exactly how far the joint has swiveled based on this pattern. This is the same basic system used in computer mice.

Some robots have more advanced control systems which enable autonomous mobility. Autonomous robots can act on their own, independent of any controller. The basic idea is to program the robot to respond in a certain way to outside stimuli. The very simple bump-and-go robot is a good illustration of how this works. This sort of robot has a bumper sensor to detect obstacles. When you turn the robot on, it zips along in a straight line; when it finally hits an obstacle, the impact pushes in its bumper sensor. The robot's programming tells it to back up, turn to the right and move forward again in response to every bump. In this way, the robot changes direction any time it encounters an obstacle.

Advanced robots use more elaborate versions of this same idea. Roboticists create new programs and sensor systems to make robots smarter and more perceptive. Today, robots can effectively navigate a variety of environments. Simpler mobile robots use infrared or ultrasound sensors to see obstacles. These sensors work the same way as animal echolocation: the robot sends out a sound signal or a beam of infrared light, detects the signal's reflection, and computes the distance to the obstacle from how long the signal takes to bounce back. More advanced robots use stereo vision to see the world around them: two cameras give these robots depth perception, and image-recognition software gives them the ability to locate and classify various objects. Robots might also use microphones and smell sensors to analyze the world around them.

Some autonomous robots can only work in a familiar, constrained environment. Lawn-mowing robots, for example, depend on buried border markers to define the limits of their yard, and an office-cleaning robot might need a map of the building in order to maneuver from point to point. More advanced robots can analyze and adapt to unfamiliar environments, even to areas with rough terrain. These robots may associate certain terrain patterns with certain actions. A rover robot, for example, might construct a map of the land in front of it based on its visual sensors; if the map shows a very bumpy terrain pattern, the robot knows to travel another way. This sort of system is very useful for exploratory robots that operate on other planets.

An alternative robot design takes a less structured approach: randomness. When this type of robot gets stuck, it moves its appendages every which way until something works. Force sensors work very closely with the actuators, instead of the computer directing everything based on a program. This is something like an ant trying to get over an obstacle: it does not seem to make a decision when it needs to get over an obstacle; it just keeps trying things until it gets over it.
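The bump-and-go behaviour described above amounts to a very small control rule. The sketch below is illustrative only: the function name and the fixed 90-degree right turn are assumptions, not any particular robot's firmware.

```python
def bump_and_go(bumper_pressed, heading):
    """One control step: drive forward until the bumper sensor is
    pushed in, then back up and turn right before continuing."""
    if bumper_pressed:
        # Back up and turn (here modelled as a fixed 90-degree right turn).
        return "reverse-and-turn", (heading + 90) % 360
    return "forward", heading

# Straight line until the first bump, then a new heading.
heading = 0
for pressed in [False, False, True, False]:
    action, heading = bump_and_go(pressed, heading)
print(heading)  # 90
```

The whole behaviour is stateless apart from the heading, which is why such robots are simple to build yet still change direction whenever they encounter an obstacle.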

 

  

CLASSIFICATION OF ROBOTS

            Robots may be classified in different ways.

            Based on mobility:

§         Sessile robots

§         Mobile robots

Based on the environment in which they are used

§         Terrestrial

§         Aerial

§         Aquatic

Based on applications

§         Industrial

§         Domestic

§         Research

§         Military

Based on type of control system

§         Preprogrammed

§         Teleoperated

§         Autonomous

APPLICATIONS OF ROBOTS

            Robots are being increasingly used in various fields. Robots have several advantages, which make them well suited for some applications.

§         Robots can function under a wide range of environmental conditions. Their use is invaluable in adverse conditions inhospitable to humans, e.g. the handling of radioactive and other toxic materials.

§         Robots can perform repetitive tasks over long periods of time without fatigue.

§         Robots can perform tasks requiring high accuracy and precision.

§         Robotic labor is economical in the long run.

 

Robots in industry: One of the most important application areas for robotics and automation technology is manufacturing; to many people, automation means manufacturing automation. Three types of automation in production can be distinguished: (1) fixed automation, (2) programmable automation, and (3) flexible automation.

Fixed automation, also known as "hard automation," refers to an automated production facility in which the sequence of processing operations is fixed by the equipment configuration. In effect, the programmed commands are contained in the machines in the form of cams, gears, wiring, and other hardware that is not easily changed over from one product style to another. This form of automation is characterized by high initial investment and high production rates, and is therefore suitable for products that are made in large volumes. Examples of fixed automation include machining transfer lines found in the automotive industry, automatic assembly machines, and certain chemical processes.

Programmable automation is a form of automation for producing products in batches, in quantities ranging from several dozen to several thousand units at a time. For each new batch, the production equipment must be reprogrammed and changed over to accommodate the new product style. This reprogramming and changeover take time to accomplish, so there is a period of nonproductive time followed by a production run for each new batch. Production rates in programmable automation are generally lower than in fixed automation, because the equipment is designed to facilitate product changeover rather than product specialization. A numerical-control machine tool is a good example of programmable automation: the program is coded in computer memory for each different product style, and the machine tool is controlled by the computer program. Industrial robots are another example.

Flexible automation is an extension of programmable automation. The disadvantage of programmable automation is the time required to reprogram and change over the production equipment for each batch of new product; this is lost production time, which is expensive. In flexible automation, the variety of products is sufficiently limited that the changeover of the equipment can be done very quickly and automatically. The reprogramming of the equipment in flexible automation is done off-line; that is, the programming is accomplished at a computer terminal without using the production equipment itself. Accordingly, there is no need to group identical products into batches; instead, a mixture of different products can be produced one right after another.

Robots in health care: The first generation of surgical robots is already being installed in a number of operating rooms around the world. These aren't true autonomous robots that can perform surgical tasks on their own, but they are lending a mechanical helping hand to surgeons. These machines still require a human surgeon to operate them and input instructions; remote control and voice activation are the methods by which they are controlled. Robotic systems are being introduced to medicine because they allow for unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, these machines have been used to position an endoscope, perform gallbladder surgery and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. According to one manufacturer, robotic devices could be used in more than 3.5 million medical procedures per year in the United States alone.

Robotic surgery has certain advantages. The ability to operate on a patient long-distance could lower the cost of health care. In addition to cost efficiency, robotic surgery has several other advantages over conventional surgery, including enhanced precision and reduced trauma to the patient. For instance, heart bypass surgery now requires that the patient's chest be "cracked" open by way of a 1-foot (30.48-cm) long incision. With robotic systems, however, it is possible to operate on the heart by making three small incisions in the chest, each only about 1 centimeter in diameter. Because the surgeon would make these smaller incisions instead of one long one down the length of the chest, the patient would experience less pain and less bleeding, which means a faster recovery. Robotic systems also decrease the fatigue that doctors experience during surgeries that can last several hours. Surgeons can become exhausted during those long surgeries, and can experience hand tremors as a result. Even the steadiest of human hands cannot match those of a surgical robot. Robotic systems can be programmed to compensate for tremors, so if the doctor's hand shakes, the computer ignores it and keeps the mechanical arm steady.

While surgical robots offer some advantages over the human hand, we are still a long way from the day when autonomous robots will operate on people without human interaction. But, with advances in computer power and artificial intelligence, it could be that in this century a robot will be designed that can locate abnormalities in the human body, analyze them and operate to correct those abnormalities without any human guidance.

Robots are also used in the pharmaceutical industry.

Robots at home: Many homes already have robot-like machines that make our lives easier; the simplest of these are washing machines, vacuum cleaners and the like. In recent years, however, more advanced robots capable of performing complex tasks have been developed. Those that resemble humans are called humanoids.

Robots in the armed forces: Robots have been used by the armed forces of several nations to perform various tasks. Autonomous or teleoperated machines have been used as reconnaissance scouts, and unmanned aerial vehicles have been used by air forces in training. Autonomous robots that can independently search for and destroy enemy targets are being developed and could revolutionize the way future battles are fought.

Robots in space: One of the important applications of robotics is in the field of space technology, since robots are capable of withstanding the harsh conditions of outer space. A great deal of research and development is being carried out by NASA, and robotic space probes and unmanned landing and exploration vehicles have been used successfully.

Robots in entertainment: Robots are proving to be the ultimate high-tech toys. Robotic pets which interact with their masters are gaining popularity. Robotic games and festivals are being held where amateur roboticists meet and field their robots against each other. Robots have also been used to provide special effects in many popular film productions.

Robots in special situations: Robots have also been used in the disposal of radioactive wastes.

THE FUTURE OF ROBOTICS

Just as computers revolutionized the latter half of the 20th century, the field of robotics has the potential to alter how we live in the 21st century just as profoundly. We've already seen how robots have changed the manufacturing of cars and other consumer goods by streamlining and speeding up the assembly line. We even have robotic lawn mowers and robotic pets. And robots have enabled us to see places that humans are not yet able to visit, such as other planets and the depths of the ocean. In the coming decades, we will see robots with artificial intelligence that come to resemble the humans who create them. The aim will be eventually to create self-aware, conscious machines that are able to do anything a human can.

The industrial use of robots is expected to grow with the introduction of automated manufacturing processes for more and more products. The use of a computer console to perform operations from a distance opens up the idea of tele-surgery, which would involve a doctor performing delicate surgery miles away from the patient. If the doctor does not have to stand over the patient to perform the surgery, and can remotely control the robotic arms from a computer station a few feet from the patient, the next step would be performing surgery from locations that are even farther away. If it were possible to use the computer console to move the robotic arms in real time, then it would be possible for a doctor in California to operate on a patient in New York. A major obstacle in tele-surgery has been the time delay between the doctor's hand movements and the robotic arms' response to them.

With humankind looking beyond its home planet, robots are expected to lead the way in planetary and stellar exploration.

            Another major area of interest will be designing and creating robots capable of self-assembly. The concept of autonomous, thinking robots with self-awareness that are capable of replication by self-assembly is bound to raise questions and concerns, and could lead to a redefinition of life as we know it.

 

 

"It is with horror, frankly, that he rejects all responsibility for the idea that metal contraptions could ever replace human beings, and that by means of wires they could awaken something like life, love, or rebellion. He would deem this dark prospect to be either an overestimation of machines, or a grave offence against life."

The Author of Robots Defends Himself (in the third person) - Karel Capek (the playwright who introduced the word "robot" in his play R.U.R.), June 9, 1935; translation: Bean Comrada

 

            "And that is all. I saw it from the beginning, when the poor robots couldn't speak, to the end, when they stand between mankind and destruction. I will see no more. My life is over. You will see what comes next."

            Dr. Calvin, a character in "The Evitable Conflict", a short story from the collection I, Robot by Isaac Asimov, 1950.

 


 

Artificial Intelligence 

 


Definition: Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

 

The capacity of a digital computer, or a computer-controlled robot or device, to perform tasks commonly associated with the higher intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalise or learn from past experience, can be termed artificial intelligence. The term is also frequently applied to the branch of computer science concerned with the development of systems endowed with such capabilities.

 

AI (artificial intelligence) is the art of making machines appear able to 'think'. At the very highest level, there are two sorts of AI: sentient AIs and non-sentient AIs. Sentient means 'aware of your own existence' and is generally regarded as a good sign of intelligence. All humans are sentient, and the jury is still out on whether animals are, but machines are definitely not.

 

 

History of Artificial Intelligence: Although the computer provided the technology necessary for AI, it was not until the early 1950's that the link between human intelligence and machines was really explored. Norbert Wiener was one of the first Americans to make observations on the principle of feedback theory. The most familiar example of feedback is the thermostat: it controls the temperature of an environment by measuring the actual temperature of the house, comparing it to the desired temperature, and responding by turning the heat up or down. What was so important about Wiener's research into feedback loops was his theory that all intelligent behaviour is the result of feedback mechanisms, mechanisms that could possibly be simulated by machines. This idea influenced much of the early development of AI.
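Wiener's thermostat example reduces to a single feedback step: measure, compare to the setpoint, act on the error. The sketch below is illustrative only; the temperatures and the deadband value are assumed, not drawn from any real controller.

```python
def thermostat_step(actual, desired, deadband=0.5):
    """One feedback iteration: compare the measured temperature to the
    desired one and respond by turning the heat up or down."""
    error = desired - actual
    if error > deadband:
        return "heat on"   # too cold: turn the heat up
    if error < -deadband:
        return "heat off"  # too warm: turn the heat down
    return "hold"          # close enough: do nothing

print(thermostat_step(18.0, 21.0))  # heat on
print(thermostat_step(23.0, 21.0))  # heat off
```

Repeating this step in a loop, with the output feeding back into the next measurement, is the feedback mechanism Wiener proposed as a model of intelligent behaviour.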

In late 1955, Newell and Simon developed the Logic Theorist, considered by many to be the first AI program. The program, representing each problem as a tree model, would attempt to solve it by selecting the branch most likely to lead to the correct conclusion. The impact that the Logic Theorist made on both the public and the field of AI has made it a crucial stepping stone in the development of the field.

In 1956 John McCarthy, regarded as the father of AI, organised a conference to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming. He invited them to Hanover, New Hampshire, for "The Dartmouth summer research project on artificial intelligence." From that point on, because of McCarthy, the field would be known as artificial intelligence. Although not a huge success, the Dartmouth conference did bring together the founders of AI and served to lay the groundwork for the future of AI research.

 

In the seven years after the conference, AI began to pick up momentum. Although the field was still undefined, ideas formed at the conference were re-examined and built upon. Centers for AI research began forming at Carnegie Mellon and MIT, and new challenges were faced: first, creating systems that could efficiently solve problems by limiting the search, as the Logic Theorist did; and second, making systems that could learn by themselves.

In 1957, the first version of a new program, the General Problem Solver (GPS), was tested. The program was developed by Newell and Simon, the same pair who had developed the Logic Theorist. The GPS was an extension of Wiener's feedback principle and was capable of solving a wider range of common-sense problems. A couple of years after the GPS, IBM contracted a team to research artificial intelligence; Herbert Gelernter spent three years working on a program for solving geometry theorems.

 

While more programs were being produced, McCarthy was busy developing a major breakthrough in AI history. In 1958 McCarthy announced his new development: the LISP language, which is still used today. LISP stands for LISt Processing, and it was soon adopted as the language of choice among most AI developers.

 

Approaches to AI: In the quest to create intelligent machines, the field of Artificial Intelligence has split into several different approaches based on opinions about the most promising methods and theories. These rivalling theories have led researchers to one of two basic approaches: bottom-up and top-down. Bottom-up theorists believe the best way to achieve artificial intelligence is to build electronic replicas of the human brain's complex network of neurons, while the top-down approach attempts to mimic the brain's behaviour with computer programs.

Neural Networks and Parallel Computation:

The human brain is made up of a web of billions of cells called neurons, and understanding its complexities is seen as one of the last frontiers in scientific research. It is the aim of AI researchers who prefer this bottom-up approach to construct electronic circuits that act as neurons do in the human brain. Although much of the working of the brain remains unknown, the complex network of neurons is what gives humans intelligent characteristics. By itself, a neuron is not intelligent, but when grouped together, neurons are able to pass electrical signals through networks.

Warren McCulloch, after completing medical school at Yale, along with Walter Pitts, a mathematician, proposed a hypothesis to explain the fundamentals of how neural networks make the brain work. Based on experiments with neurons, McCulloch and Pitts showed that neurons might be considered devices for processing binary numbers. An important part of mathematical logic, binary numbers (represented as 1s and 0s, or true and false) were also the basis of the electronic computer. This link is the basis of computer-simulated neural networks, also known as parallel computing.

 

The true/false nature of binary numbers had been theorised a century earlier, in 1854, by George Boole in his postulates concerning the Laws of Thought. Boole's principles make up what is known as Boolean algebra, the collection of logic concerning the AND, OR and NOT operators. For example, according to the Laws of Thought (for this example, consider all apples red and no orange purple):

 

Apples are red-- is True

Apples are red AND oranges are purple-- is False

Apples are red OR oranges are purple-- is True

Apples are red AND oranges are NOT purple-- is also True
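The four statements above can be checked directly with Boolean operators in any programming language; this short sketch uses Python, with variable names chosen here purely for illustration.

```python
# Encode the example's assumptions: all apples are red, no orange is purple.
apples_are_red = True
oranges_are_purple = False

print(apples_are_red)                             # True
print(apples_are_red and oranges_are_purple)      # False
print(apples_are_red or oranges_are_purple)       # True
print(apples_are_red and not oranges_are_purple)  # True
```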

Boole also assumed that the human mind works according to these laws: it performs logical operations that can be reasoned about. Ninety years later, Claude Shannon applied Boole's principles to circuits, the blueprint for electronic computers. Boole's contribution to the future of computing and artificial intelligence was immeasurable, and his logic is the basis of neural networks.

McCulloch and Pitts, using Boole's principles, wrote a paper on neural network theory. The thesis dealt with how networks of connected neurons could perform logical operations. It also stated that, on the level of a single neuron, the release or failure to release an impulse was the basis by which the brain makes true/false decisions. Using the idea of feedback theory, they described the loop which exists between the senses ---> brain ---> muscles, and likewise concluded that memory could be defined as the signals in a closed loop of neurons. Although we now know that logic in the brain occurs at a level higher than McCulloch and Pitts theorised, their contributions were important to AI because they showed how the firing of signals between connected neurons could cause the brain to make decisions. McCulloch and Pitts's theory is the basis of artificial neural network theory.

 

Using this theory, McCulloch and Pitts then designed electronic replicas of neural networks, to show how electronic networks could generate logical processes. They also stated that neural networks may, in the future, be able to learn, and recognize patterns.
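A McCulloch-Pitts unit can be sketched in a few lines: it fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold, and with suitable weights a single unit computes a logic gate. The weights and thresholds below are one conventional choice, not taken from the original paper.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of binary inputs
    reaches the threshold; otherwise stay silent (return 0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# One unit per logic gate, differing only in weights and threshold.
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)
NOT = lambda a:    mcculloch_pitts([a], [-1], threshold=0)

print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0
```

This is the sense in which a single neuron is "not intelligent" but networks of them can carry out logical operations.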

Top Down Approaches; Expert Systems:

Because of the large storage capacity of computers, expert systems have the potential to interpret statistics in order to formulate rules. An expert system works much like a detective solving a mystery: using the available information together with logic, or rules, it can solve the problem.
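The detective analogy can be sketched as forward chaining: start from known facts, and repeatedly apply if-then rules until no rule adds anything new. The facts and rules below are invented for illustration, not drawn from any real expert system.

```python
# Known facts and a small set of if-then rules.
facts = {"has_fever", "has_rash"}
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_doctor"),
]

# Apply rules until no new conclusion can be drawn.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

Note how the second rule fires only because the first one has already added its conclusion to the set of facts, just as a detective builds later deductions on earlier ones.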

Chess:

AI-based game-playing programs combine intelligence with entertainment. One game with strong AI ties is chess. World-champion chess-playing programs can see twenty or more moves ahead for each move they make. In addition, the programs have the ability to get progressively better over time because of their ability to learn. Chess programs do not play chess as humans do. In three minutes, Deep Thought (a master program) considers 126 million moves, while a human chess master on average considers fewer than two. Herbert Simon suggested that human chess masters are familiar with favourable board positions and the relationships between pieces in small areas. Computers, on the other hand, do not take hunches into account: the next move comes from exhaustive searches of all moves, and the consequences of those moves, based on prior learning. Chess programs running on Cray supercomputers have attained a rating of 2600 (senior master), in the range of Garry Kasparov, the Russian world champion.

Frames

One method that many programs use to represent knowledge is frames. Pioneered by Marvin Minsky, frame theory revolves around packets of information: a frame describes a stereotyped situation through named slots, with default values that can be overridden or inherited from more general frames.
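A minimal sketch of the frame idea, with slot inheritance from a more general parent frame. The bird/penguin example is a standard illustration, not taken from the text above.

```python
# Frames as packets of slots; a frame may point to a parent via "is_a".
frames = {
    "bird":    {"covering": "feathers", "can_fly": True},
    "penguin": {"is_a": "bird", "can_fly": False},
}

def get_slot(frames, name, slot):
    """Look up a slot locally, falling back to the parent frame."""
    frame = frames[name]
    if slot in frame:
        return frame[slot]
    return get_slot(frames, frame["is_a"], slot)

print(get_slot(frames, "penguin", "covering"))  # feathers (inherited)
print(get_slot(frames, "penguin", "can_fly"))   # False (overrides the default)
```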

 

Applications of AI

 

Game playing

You can buy machines that can play master level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute force computation--looking at hundreds of thousands of positions. To beat a world champion by brute force and known reliable heuristics requires being able to look at 200 million positions per second.
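The brute-force idea behind these searches is minimax: evaluate every line of play to some depth, assuming the opponent always answers with their best reply. This sketch runs on a tiny hand-made game tree rather than real chess positions.

```python
def minimax(node, maximizing):
    """Score a position by exhaustive search: a leaf is an integer
    evaluation; an internal node takes the best child for the side
    to move (max for us, min for the opponent)."""
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Each inner list is a position; integers evaluate the final positions.
tree = [[3, 5], [2, 9], [0, 1]]
print(minimax(tree, maximizing=True))  # 3
```

Real programs add pruning and hand-tuned evaluation functions, but the "hundreds of thousands of positions" figure comes from exactly this kind of exhaustive tree walk.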

Speech recognition

In the 1990s, computer speech recognition reached a practical level for limited purposes. Thus, United Airlines has replaced its keyboard tree for flight information by a system using speech recognition of flight numbers and city names. It is quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.

Understanding natural language

Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.

Computer vision

The world is composed of three-dimensional objects, but the inputs to the human eye and computers' TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present, there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.

Expert systems

A "knowledge engineer" interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI. When this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or practicing doctors, provided its limitations were observed. Namely, its ontology included bacteria, symptoms, and treatments and did not include patients, doctors, hospitals, death, recovery, and events occurring in time. Its interactions depended on a single patient being considered. Since the experts consulted by the knowledge engineers knew about patients, doctors, death, recovery, etc., it is clear that the knowledge engineers forced what the experts told them into a predetermined framework. In the present state of AI, this has to be true. The usefulness of current expert systems depends on their users having common sense.

Heuristic classification

One of the most feasible kinds of expert system given the present knowledge of AI is to put some information in one of a fixed set of categories using several sources of information. An example is advising whether to accept a proposed credit card purchase. Information is available about the owner of the credit card, his record of payment and also about the item he is buying and about the establishment from which he is buying it (e.g., about whether there have been previous credit card frauds at this establishment).
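The credit-card example can be sketched as a simple scoring rule: several independent sources of evidence are combined into one of a fixed set of categories. The thresholds and weights here are made up for illustration; a real system would tune them from data.

```python
def classify_purchase(good_payment_record, amount, frauds_at_merchant):
    """Combine evidence about the cardholder, the purchase, and the
    merchant into one of three fixed categories."""
    score = 0
    score += 2 if good_payment_record else -2   # owner's payment history
    score += -1 if amount > 1000 else 1         # size of the purchase
    score += -2 if frauds_at_merchant else 1    # merchant's fraud history
    if score >= 2:
        return "accept"
    elif score >= 0:
        return "refer to human"
    return "reject"

print(classify_purchase(True, 250, False))   # accept
print(classify_purchase(False, 5000, True))  # reject
```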

Other applications

The military put AI-based hardware to the test of war during Desert Storm. AI-based technologies were used in missile systems, heads-up displays, and other advanced systems. AI has also made the transition to the home. With the growing popularity of AI in personal computers, public interest has also grown. Applications for the Apple Macintosh and IBM-compatible computers, such as voice and character recognition, have become available. AI technology has also made image stabilisation in camcorders simple, using fuzzy logic.

The future of AI

AI has always been on the pioneering end of computer science. Advanced-level computer languages, as well as computer interfaces and word-processors owe their existence to the research into artificial intelligence. The theory and insights brought about by AI research will set the trend in the future of computing. The products available today are only bits and pieces of what are soon to follow, but they are a movement towards the future of artificial intelligence. The advancements in the quest for artificial intelligence have, and will continue to affect our jobs, our education, and our lives.

Human intelligence includes things like desires, enjoyment, suffering, and various forms of consciousness, all of which play an important role in human information processing, but which we hardly understand as yet.

Many AI researchers are trying to find ways of extending the concepts, theories, mechanisms and models in AI to include all those things. Their work includes trying to find ways of programming computers so that they have the kinds of richness and flexibility required for animal abilities. The design of artificial neural nets, flexible rule interpreters, and various kinds of self-organising software systems are among the approaches being followed.

Research into ways of building AI systems with the sorts of mechanisms involved in motivation, moods and emotions, as well as the more obviously required capabilities like perception, reasoning, problem solving and motor control, is being carried out.

  

Summary

The project focuses on various aspects of Robotics and Artificial Intelligence, which are closely related to each other. The project discusses in brief the definitions of robotics and AI. Some of the important topics covered in this project under Robotics are the history of robots, the mechanics of robots (working principle), types of robots, applications of robots and the future of robotics. More emphasis is given to the types and applications of robots; some very important applications of robots have been discussed in detail. In addition, some of the topics discussed under Artificial Intelligence include the history of AI, approaches to AI, applications of AI and the future of AI.

 

REFERENCES

Websites: http://www.howstuffworks.com/

                http://www.bbc.co.uk/



Copyright © 2004. Abhijith HN. All Rights Reserved.