

Topics Tagged ‘Robot’

Robots are Learning to Speak Body Language

Added on: March 29th, 2018 by Afsal Meerankutty

If your friend says she feels relaxed but you see that her fists are clenched, you might doubt her sincerity. A robot, however, might simply believe her. Body language says a lot, but even with advances in computer vision and facial recognition technology, robots struggle to perceive subtle body movements and can therefore miss important social cues.

Researchers at Carnegie Mellon University have developed a body-tracking system that may help solve this problem. Called OpenPose, the system can track body movement, including the hands and face, in real time. It uses computer vision and machine learning to process video frames, and it can track multiple people simultaneously. This capability could ease human-robot interaction and pave the way for more immersive virtual and augmented reality as well as more intuitive user interfaces.

One notable feature of OpenPose is that it tracks not only a person's head, torso, and limbs but also individual fingers. To achieve this, the researchers used CMU's Panoptic Studio, a dome lined with 500 cameras, where they captured body poses from a wide variety of angles and then used those images to build a data set.

They then passed the images through what is known as a keypoint detector to identify and label specific body parts. The software also learns to associate body parts with individuals, so it knows, for example, that a particular person's hand will always be near his or her elbow. That association is what makes it possible to track multiple people at once.
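OpenPose's actual association step relies on learned "part affinity fields," which are beyond a short example, but the intuition that a person's wrist stays close to that same person's elbow can be sketched in a few lines of Python. The coordinates below are made up, and the greedy nearest-neighbor matching is a toy stand-in for illustration, not the CMU method:

```python
import numpy as np

def pair_wrists_to_elbows(wrists, elbows, max_dist=80.0):
    """Greedily pair each wrist with the nearest unclaimed elbow.

    wrists, elbows: (N, 2) and (M, 2) arrays of 2D pixel coordinates.
    Returns a list of (wrist_index, elbow_index) pairs. This is a toy
    stand-in for OpenPose's learned part-association step.
    """
    pairs, claimed_w, claimed_e = [], set(), set()
    # Distance between every wrist and every elbow.
    dists = np.linalg.norm(wrists[:, None, :] - elbows[None, :, :], axis=2)
    # Consider candidate pairs from closest to farthest.
    for w, e in sorted(np.ndindex(dists.shape), key=lambda ij: dists[ij]):
        if dists[w, e] > max_dist:
            break  # every remaining pair is even farther apart
        if w not in claimed_w and e not in claimed_e:
            pairs.append((w, e))
            claimed_w.add(w)
            claimed_e.add(e)
    return pairs

# Wrists and elbows of two people detected in one frame (pixel coords).
wrists = np.array([[100.0, 220.0], [400.0, 230.0]])
elbows = np.array([[390.0, 180.0], [110.0, 170.0]])
print(pair_wrists_to_elbows(wrists, elbows))  # [(0, 1), (1, 0)]
```

A real system learns these associations from data rather than relying on a fixed distance threshold.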

The images from the dome were captured in 2D, but the researchers triangulated the detected keypoints in 3D so their body-tracking algorithms could learn how each pose appears from different perspectives. With all of this data processed, the system can determine how a whole hand looks in a given position even when some fingers are occluded.
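To make the triangulation step concrete, here is a minimal two-camera sketch using OpenCV's cv2.triangulatePoints. The camera matrices and image coordinates are invented for illustration and are not real Panoptic Studio calibration data:

```python
import numpy as np
import cv2

# 3x4 projection matrices for two calibrated cameras. These values are
# invented for illustration; they are not Panoptic Studio calibration.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
R, _ = cv2.Rodrigues(np.array([[0.0], [0.3], [0.0]]))  # camera 2 yawed 0.3 rad
t = np.array([[-1.0], [0.0], [0.0]])                   # and shifted along X
P2 = np.hstack([R, t])

# The same keypoint (say, a fingertip) seen by each camera, as 2xN
# arrays of normalized image coordinates.
pts1 = np.array([[0.10], [0.20]])
pts2 = np.array([[0.35], [0.19]])

# Triangulate to homogeneous 3D coordinates, then dehomogenize.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).ravel()
print("3D keypoint:", X)
```

With 500 viewpoints instead of two, the same idea yields 3D poses robust enough to learn from.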

Because the system has this data set to draw from, it can run with just a single camera and a laptop; it no longer needs the camera-lined dome to determine body poses, which makes the technology mobile and accessible. The researchers have already released their code to the public to encourage experimentation.
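A single-camera run might look something like the sketch below, which follows the project's published Python bindings (pyopenpose). The model path is a placeholder, and the exact binding calls have shifted between OpenPose releases, so treat this as illustrative rather than canonical:

```python
import cv2
from openpose import pyopenpose as op  # built from the released repo

# Paths and flags are placeholders; see the project's Python examples.
params = {"model_folder": "openpose/models/"}
wrapper = op.WrapperPython()
wrapper.configure(params)
wrapper.start()

cap = cv2.VideoCapture(0)  # the single consumer camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    datum = op.Datum()
    datum.cvInputData = frame
    wrapper.emplaceAndPop(op.VectorDatum([datum]))
    # poseKeypoints has shape (num_people, num_parts, 3): x, y, confidence.
    print(datum.poseKeypoints)
    cv2.imshow("OpenPose", datum.cvOutputData)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```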

They say this technology could be applied to all kinds of interaction between humans and machines. It could play a major role in VR experiences, allowing finer detection of the user's physical movement without extra hardware such as stick-on sensors or gloves.

It could also enable more natural interaction with a home robot. You could tell your robot to "pick that up," and it could immediately understand what you are pointing at. By seeing and interpreting your physical gestures, the robot might even learn to read emotions from body language. So when you're silently crying with your face in your hands because a robot has taken your job, it might offer you a tissue.
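The article doesn't say how a robot would resolve "that," but one plausible approach is to cast a ray from the elbow keypoint through the wrist keypoint and choose the known object lying closest to the ray. Everything in the sketch below (the keypoints, the object map, and the angular threshold) is hypothetical:

```python
import numpy as np

def pointed_object(elbow, wrist, objects, max_angle_deg=15.0):
    """Return the name of the object closest to the elbow->wrist ray.

    elbow, wrist: 3D keypoints from the pose tracker, in meters.
    objects: dict of name -> 3D position known to the robot (assumed map).
    """
    ray = wrist - elbow
    ray = ray / np.linalg.norm(ray)
    best, best_angle = None, max_angle_deg
    for name, pos in objects.items():
        to_obj = pos - wrist
        to_obj = to_obj / np.linalg.norm(to_obj)
        angle = np.degrees(np.arccos(np.clip(ray @ to_obj, -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

# Hypothetical keypoints and object map, in meters.
elbow = np.array([0.0, 1.2, 0.3])
wrist = np.array([0.3, 1.1, 0.6])
objects = {"mug": np.array([1.5, 0.7, 2.1]), "book": np.array([-1.0, 0.8, 1.0])}
print(pointed_object(elbow, wrist, objects))  # -> "mug"
```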

Courtesy: https://spectrum.ieee.org/video/robotics/robotics-software/robots-learn-to-speak-body-language

Humanoid Robot

Added on: March 23rd, 2012

The field of humanoid robotics is widely recognized as the current grand challenge of robotics research. Humanoid research is an approach to understanding and realizing complex real-world interactions among a robot, an environment, and a human. Humanoid robotics motivates social interactions, such as gesture communication or cooperative tasks, in the same context as the physical dynamics. This is essential for three-term interaction, which aims to fuse physical and social interaction at a fundamental level.

People naturally express themselves through facial gestures and expressions. Our goal is to build a facial-gesture human-computer interface for use in robot applications. The system requires no special illumination or facial make-up. By using multiple Kalman filters we accurately predict and robustly track facial features. Because the face is tracked reliably in real time, we can also recognize motion gestures of the face. Our system recognizes a large set of gestures (13), ranging from "yes," "no," and "maybe" to winks, blinks, and sleeping.
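The report doesn't give the filter parameters, but tracking a single facial feature with a constant-velocity Kalman filter might look like the OpenCV sketch below; the noise covariances and coordinates are illustrative guesses:

```python
import numpy as np
import cv2

# One filter per facial feature: state = (x, y, vx, vy), measurement = (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)  # constant velocity
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3   # illustrative
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)             # initial uncertainty
kf.statePost = np.array([[120.0], [80.0], [0.0], [0.0]], np.float32)

# Feed noisy detections of, say, the left eye corner; predictions keep
# the track alive through a noisy or missed frame.
for detected in [(122.0, 81.0), (125.0, 79.0), (127.0, 80.0)]:
    kf.predict()  # where we expect the feature this frame
    kf.correct(np.array(detected, np.float32).reshape(2, 1))
print("next predicted position:", kf.predict()[:2].ravel())
```

Running one such filter per feature is what lets the tracker stay locked on during fast motions like nods and head shakes.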
