Robot Brings You Ice Cream, Responds to Your Natural-Language Instructions



It's possible, even likely, that if you're reading this article, you know how to program a robot, or could figure it out if you really put your mind to it. But for the rest of us (in fact, for most people), programming is not necessarily a skill we have at hand. And even if you're comfortable writing code in general, writing code that gets a very complex and expensive robot to do exactly what you want it to do is (to put it mildly) not easy.

The way we assume robots ought to work (if we believe all the science fiction we've ever watched, which we do) is that you can simply tell them what you want, they understand what you're talking about, and then they follow your instructions about as well as a human could. "About as well as a human could" implies understanding abstract concepts and making inferences where necessary, things that robots, generally speaking, are not at all great at.

Robots instead need detailed instructions for everything: if you want a scoop of ice cream, they need to know what ice cream is, where it is, how to open the container, what a scoop is, how to hold a scoop, how to perform the scooping maneuver, how to verify that the scoop was successful, and how to deliver the scoop of ice cream to you. Oh wait, we forgot the bowl; the robot needs to understand all of the bowl-related stuff in advance, too.

And therein lies the problem: "get me a scoop of ice cream" is actually a series of highly complex procedures that have to be executed in just the right way, and no human has the patience to explain all of that to a robot every time they want something.
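To get a sense of the scale of that explanation, here is a hypothetical sketch of how the single request might expand into primitive steps. None of these step names come from the Cornell system; they're invented purely to illustrate the decomposition.

```python
# Hypothetical illustration only: how one casual request might expand into the
# long sequence of primitive actions a robot actually has to execute.
# All step names below are made up for illustration.

HIGH_LEVEL_REQUEST = "get me a scoop of ice cream"

PRIMITIVE_PLAN = [
    "locate(freezer)",
    "open(freezer_door)",
    "locate(ice_cream_tub)",
    "grasp(ice_cream_tub)",
    "place(ice_cream_tub, counter)",
    "open(ice_cream_tub)",
    "locate(scoop)",
    "grasp(scoop)",
    "scoop(ice_cream_tub, scoop)",
    "verify(scoop_contains_ice_cream)",
    "locate(bowl)",          # the step humans forget to mention
    "deposit(scoop, bowl)",
    "deliver(bowl, user)",
]

for step in PRIMITIVE_PLAN:
    print(step)
```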

Cornell is trying to solve this problem by teaching robots to interpret natural language, the same kind of casual instructions you'd give another person, so that a PR2 can bring you some fancy ice cream.



Most robots would panic when given instructions this vague, but the special sauce in Ashutosh Saxena's robot learning lab is a PR2 that can make logical inferences about missing steps or poorly specified instructions. From the press release:

Saxena's robot, equipped with a 3-D camera, scans its environment and identifies the objects in it using computer vision software previously developed in Saxena's lab. The robot has been trained to associate objects with their capabilities: a pan can be poured into or poured from; stoves can have other objects set on them and can heat things. So the robot can identify the pan, locate the water faucet and the stove, and incorporate that information into its procedure. If you tell it to "heat water" it can use the stove or the microwave, depending on which is available. And it can carry out the same task tomorrow if the pan has been moved, or even if the robot is moved to a different kitchen.
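As a rough illustration of that object-to-capability idea, here's a minimal sketch of affordance-based appliance selection. The dictionary and function names are assumptions made for this example and are not the lab's actual data structures or code.

```python
# Minimal sketch of affordance-based selection, assuming a simple mapping from
# objects to capabilities. Illustrative only; not the Cornell system's code.

AFFORDANCES = {
    "pan":       {"pourable_into", "pourable_from", "placeable_on_heat_source"},
    "stove":     {"supports_objects", "heats"},
    "microwave": {"contains_objects", "heats"},
    "faucet":    {"dispenses_water"},
}

def pick_heater(visible_objects):
    """Return the first visible object that can heat things, if any."""
    for obj in visible_objects:
        if "heats" in AFFORDANCES.get(obj, set()):
            return obj
    return None

# If the stove is visible it gets used; otherwise the microwave does.
print(pick_heater(["pan", "faucet", "stove"]))      # -> stove
print(pick_heater(["pan", "faucet", "microwave"]))  # -> microwave
```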

Other researchers have attacked these problems by giving robots a set of templates for common actions and chewing through sentences one word at a time. Saxena's research group instead uses a technique computer scientists call "machine learning" to train the robot's controlling computer to associate whole commands with flexibly defined actions. The computer is fed animated video simulations of the action, created by humans in a process similar to playing a video game, along with recorded voice commands from several different speakers.

The computer stores the accumulated commands as a flexible pattern that can match many variations, so when it hears "take the pot to the stove," "carry the pot to the stove," "put the pot on the stove," "go to the stove and heat the pot," and so on, it calculates the probability of a match against what it has heard before, and if the probability is high enough, it declares a match. A similarly fuzzy version of the simulation video then supplies a plan for the action: wherever the pot and the stove happen to be, the robot can adapt the recorded sequence of movements to carry a pot of water from one to the other.
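The actual system learns its matching from crowdsourced simulations and recorded speech, but the basic idea of scoring a new command against stored phrasings and accepting a confident match can be sketched with a toy similarity measure. Everything here, from the task name to the Jaccard word-overlap scoring, is a stand-in chosen for illustration, not the lab's method.

```python
# Toy stand-in for flexible command matching: score a new utterance against
# stored phrasings of a known task; accept the best match above a threshold.

KNOWN_COMMANDS = {
    "carry_pot_to_stove": [
        "take the pot to the stove",
        "carry the pot to the stove",
        "put the pot on the stove",
        "go to the stove and heat the pot",
    ],
}

def word_overlap(a, b):
    """Jaccard overlap of the word sets of two phrases (a crude similarity)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def match_command(utterance, threshold=0.5):
    best_task, best_score = None, 0.0
    for task, phrasings in KNOWN_COMMANDS.items():
        score = max(word_overlap(utterance, p) for p in phrasings)
        if score > best_score:
            best_task, best_score = task, score
    return best_task if best_score >= threshold else None

print(match_command("put the pot on the stove please"))  # -> carry_pot_to_stove
print(match_command("water the plants"))                 # -> None: no confident match
```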

Given variations in both the environment and the wording of the commands, the PR2 could successfully make ice cream and ramen about 64 percent of the time, which is "three to four times better than previous methods." So, not bad, and it will keep improving as the robot gains more experience, something you can help with by playing the crowdsourced simulation game here.


Capabilities like these are absolutely necessary for household robots that will be used by people who have no idea how to program a robot, but there's another layer here that's arguably just as complex. The robot can interpret relatively abstract (that is, to a programmer) commands like "add ice cream to the bowl," but even if it knows how to fetch a bowl of ice cream, it also needs to know what a "bowl" and "ice cream" are in the first place (not to mention what "add" means in terms of quantity). That's especially tricky if the robot walks into an environment it has never been in before, or if you buy some crazy gourmet ice cream that doesn't come in a gallon tub.

Solving those problems isn't the point of this research, of course, but it's worth keeping in mind the steps that would be needed for end-to-end, in-home usability of your $400,000 ice-cream-delivery robot. There are lots of ways these problems could be tackled: using a natural spoken-language interface for initial robot training is one option, where you'd hold up a bowl and say, "Hey robot, this is a bowl for ice cream, got it?" Another option would be to use the cloud to access a database of objects and/or semantic maps to kick-start the process of finding objects like the bowl.
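If the "show and tell" training option sounds abstract, a tiny sketch of the bookkeeping involved might help: the robot just needs to remember which perceived object a spoken label refers to so that later commands can be resolved. The class and identifiers below are invented for illustration and are not part of the Cornell work.

```python
# Hedged sketch of interactive grounding: store label -> perceived-object
# associations taught by the user, so later commands can mention them by name.
# All names here are hypothetical.

class ObjectMemory:
    def __init__(self):
        self._labels = {}  # spoken label -> perceived object id

    def teach(self, label, perceived_object_id):
        """User shows an object and names it: 'this is a bowl, got it?'"""
        self._labels[label] = perceived_object_id
        return f"Got it: '{label}' is object {perceived_object_id}."

    def resolve(self, label):
        """Look up a label mentioned in a later command."""
        return self._labels.get(label)

memory = ObjectMemory()
print(memory.teach("bowl", "obj_042"))       # taught during a live demonstration
print(memory.resolve("bowl"))                # -> obj_042
print(memory.resolve("gourmet ice cream"))   # -> None: never taught
```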

What's exciting is watching research like this build toward a point where robots that have a basic understanding of humans, and can act on that understanding, will be ready to come into our homes and reliably do useful things without supervision, micromanagement, tech support, or therapy.

Via Cornell
