
How do humanoid robots navigate and what added value does this generate?

Cruzr Navigation
(Source: Ubtech)

Have you ever asked yourself how your robot vacuum cleaner finds its way around your apartment or house? The navigation of humanoid robots is based on similar or even identical technologies. At the heart of it are a number of sensors that are used to detect obstacles and to map the room. In this blog post, we show you exactly how they work and how you can make use of these functions in your company.

Technologies

In recent years, the field of autonomous navigation has developed rapidly, driven in part by the trend towards autonomous mobility. In this article, we discuss the two main technologies that are also used in humanoid robots.

V-SLAM

Visual SLAM (Visual Simultaneous Localization and Mapping) systems enable robots or autonomous vehicles to automatically build a 3D map of an unknown environment and localize themselves within that map. Only a camera is needed for the mapping. Typically, a visual SLAM system tracks feature points across successive camera images and measures their 3D positions, a process called feature point triangulation. This information is fed back to build the 3D map and to determine the robot's location within it. Once the mapping is complete, the robot can plan a navigation path to a Point of Interest (POI) based on the 3D map. This enables it to navigate efficiently through rooms and avoid obstacles along the way.

V-SLAM Map
V-SLAM Map (Source: Politecnico di Torino)
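
To make the feature point triangulation step more concrete, here is a minimal sketch in Python using OpenCV. It detects feature points in two successive camera frames, matches them, estimates the camera motion between the frames and triangulates the matched points into 3D landmarks. The image file names and the camera intrinsics matrix K are illustrative assumptions, not values from a specific robot; a real V-SLAM system repeats this over many frames and refines the result.

```python
# Sketch of the feature-point-triangulation step behind V-SLAM (OpenCV).
# File names and camera intrinsics below are assumed example values.
import cv2
import numpy as np

# Two successive camera frames (hypothetical file names).
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Assumed pinhole camera intrinsics (focal length and principal point).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])

# 1) Detect and describe feature points in both frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2) Match features between the frames (these are the "tracked" points).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3) Estimate the relative camera motion between the two frames.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 4) Triangulate the matched points into 3D (the raw material of the map).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
P2 = K @ np.hstack([R, t])                          # second camera, moved by [R|t]
points_4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points_3d = (points_4d[:3] / points_4d[3]).T        # homogeneous -> Euclidean

print(f"Triangulated {len(points_3d)} 3D landmarks from one image pair")
```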

LiDAR

LiDAR stands for Light Detection and Ranging and works on a similar principle to radar or sonar, but uses light waves instead of radio or sound waves. The basic operation is relatively simple: a focused beam of light is emitted from the sensor, strikes an object or obstacle and is reflected back to the sensor. The time this round trip takes is measured, and the speed of light (299'792'458 metres per second) is used to calculate the distance to and position of the object or obstacle. Depending on the LiDAR sensor and the application, hundreds of thousands of such light pulses can be emitted per second. By combining these individual measurements, accurate 3D visualizations called "point clouds" can be calculated (see picture). To build a map of the environment (similar to V-SLAM), an Inertial Measurement Unit (IMU) is also required; its motion data is combined with the range measurements to assemble the map.

LiDAR Point Cloud
LiDAR Point Cloud (Source: Blickfeld)
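
The time-of-flight calculation itself is straightforward, as this small Python sketch shows: each measured round-trip time is converted into a distance via the speed of light, and distance plus beam angle become one point of the point cloud. The flight times and angles used here are made-up illustrative values, not readings from a real sensor.

```python
# Sketch of the LiDAR time-of-flight principle with assumed example values.
import numpy as np

C = 299_792_458.0  # speed of light in metres per second

def distance_from_time_of_flight(t_seconds: float) -> float:
    """The light travels to the obstacle and back, so divide by 2."""
    return C * t_seconds / 2.0

# Hypothetical measurements from one sweep of a 2D LiDAR:
# (beam angle in degrees, round-trip time of flight in nanoseconds)
measurements = [(0.0, 20.0), (45.0, 13.4), (90.0, 33.3), (135.0, 8.0)]

points = []
for angle_deg, tof_ns in measurements:
    r = distance_from_time_of_flight(tof_ns * 1e-9)  # range in metres
    a = np.radians(angle_deg)
    points.append((r * np.cos(a), r * np.sin(a)))    # polar -> Cartesian

for (angle_deg, _), (x, y) in zip(measurements, points):
    print(f"beam at {angle_deg:5.1f} deg: point at x={x:5.2f} m, y={y:5.2f} m")
```

Combining many such points per sweep, plus the IMU data on how the sensor itself has moved between sweeps, is what turns individual range measurements into a consistent map.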

Where can these technologies be used?

As already mentioned, these two technologies can be used in various areas, either individually or in combination. No matter which of them is used, both can add great value to your organization. Finding one's way around complex office buildings or sprawling hospitals is often confusing and frustrating for visitors. With a robot guide, you can actively assist them from the entrance area (check-in/registration) all the way to their destination, while offering them a unique experience. As a mobile and digital concierge solution, the robot lets you digitalize your entire visitor management and make it cost-efficient. As you have read in the previous sections, these solutions are not industry- or location-specific and do not require complex installation within a building. They are easy to install and even easier to use.

You are welcome to stop by our office at Technopark Zürich (Office 1027, Transfer Ost) at any time for a live demonstration. Just call ahead to make sure we're there.


Interested?

Please contact us for more information.


Tel.: +41 43 204 30 70 or by email: info@avatarion.ch