Visual Localization for the rob@work 3

This thesis is about localization with the help of visual landmarks, with the goal of improving robot localization in unstructured environments. Since a sensor-fusion localization is already available, it is investigated whether a conventional method of visual localization can be integrated into it. The functionality of this integration is then verified.

In the motivation, I briefly cover the formal definition of localization and explain that it is achieved with the help of sensors. Then I introduce the topic of sensor fusion. First, I discuss the problems of relying on a single sensor, the biggest of which is that it stops working entirely if the robot is used in an environment unsuited to that sensor. I continue with homogeneous sensor fusion, i.e. using several sensors of the same type: it can mitigate some of the minor problems of using only one sensor, but not the aforementioned biggest one. That can only be overcome by heterogeneous sensor fusion, which combines sensors of different classes, so that if one sensor fails, the others can take over.
After that, I specify the circumstances under which the thesis was created and the goals it should fulfill. In short: the Fraunhofer IPA had an industrial assistance robot project, the rob@work 3, which is in use by customers. It relies only on homogeneous sensor fusion with LIDAR sensors, but a new requirement came up: the robot should be able to navigate in environments where laser sensors usually fail. My task was to create a prototype of a visual localization that could be integrated into the existing sensor fusion localization, and to evaluate it both stand-alone and in concert with the other sensors.

In this chapter, I review basic computer vision knowledge relevant to the thesis, i.e. how to extract feature points from images and how to estimate their positions in 3D space relative to the camera.
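To make the second half of that concrete, here is a minimal sketch of how a feature point's 3D position relative to the camera can be estimated, assuming a rectified stereo pair and a simple pinhole model. This is an illustration of the general technique, not the exact method from the thesis; all parameter values are made up.

```python
import numpy as np

def triangulate_depth(u_left, u_right, f, baseline):
    """Depth of a feature point from its horizontal pixel coordinates
    in a rectified stereo pair: Z = f * B / disparity."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    return f * baseline / disparity

def backproject(u, v, Z, f, cx, cy):
    """3D position of pixel (u, v) at depth Z for a pinhole camera
    with focal length f (pixels) and principal point (cx, cy)."""
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

# Example: f = 500 px, baseline 0.1 m, disparity 10 px -> Z = 5 m
Z = triangulate_depth(320.0, 310.0, f=500.0, baseline=0.1)
P = backproject(320.0, 240.0, Z, f=500.0, cx=320.0, cy=240.0)
```

A point at the principal point back-projects straight onto the optical axis, which is a quick sanity check for the camera model.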

Related work
Here I give an overview of work that uses the methods described in the basics chapter to localize a robot. I point out the differences in requirements and explain why I decided to base my implementation on the papers I chose in the end.

In this chapter, I explain the existing sensor fusion localization, which works with abstracted features. There are point, line, and pose features. Each plugin for the sensor fusion localization has to settle on a feature type and must provide a way to create a feature map. A feature map associates each feature with additional information that allows the plugin to recognize the abstract feature and determine its position relative to the robot.
If the absolute map position and the relative position of a feature are known, one can deduce the position of the robot. So when the robot tries to localize itself, every plugin tries to associate observed features with known features from the feature map. These associations are then fed into an extended Kalman filter, which outputs the final position of the robot.
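The pose deduction described above can be sketched as a small 2D transform computation. This is an illustrative example, not the sensor fusion framework's actual API: given a feature's absolute pose in the map and its observed pose relative to the robot, the robot's map pose follows by composing the transforms.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D transform for a pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0., 0., 1.]])

def robot_pose_from_landmark(T_map_lm, T_robot_lm):
    """Deduce the robot's map pose from the landmark's absolute map
    pose and its pose relative to the robot:
    T_map_robot = T_map_lm @ inv(T_robot_lm)."""
    T = T_map_lm @ np.linalg.inv(T_robot_lm)
    return T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])

# Landmark at (4, 2) in the map; robot sees it 2 m straight ahead,
# so the robot must be standing at (2, 2), facing along the x-axis.
x, y, theta = robot_pose_from_landmark(se2(4, 2, 0), se2(2, 0, 0))
```

In the real system such a deduced pose would not be used directly; it enters the extended Kalman filter as one measurement among those from the other plugins.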

After that, I explain how I integrated a visual localization method into this system. I decided to use so-called "landmark shots". A landmark shot contains a pose representing the position at which the picture was taken; this pose is used as the feature. It further contains many recognizable feature points, which allow the robot to associate observed features with that specific landmark shot and to compute its position relative to the spot from which the original landmark shot was taken.
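The association step can be illustrated with a tiny brute-force matcher over binary feature descriptors (as produced by detectors like ORB). This is a hedged sketch of the general idea, not the plugin's actual code; the function names and the match threshold are invented for the example.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors (uint8 arrays)."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def best_landmark_shot(query_descs, shots, max_dist=30):
    """Return the index of the stored landmark shot with the most
    descriptor matches within max_dist, or None if nothing matches."""
    best_idx, best_matches = None, 0
    for idx, shot_descs in enumerate(shots):
        matches = 0
        for q in query_descs:
            # Distance to the closest descriptor in this shot.
            if min(hamming(q, s) for s in shot_descs) <= max_dist:
                matches += 1
        if matches > best_matches:
            best_idx, best_matches = idx, matches
    return best_idx
```

Once the best landmark shot is identified, the matched point pairs can be used to estimate the camera pose relative to the pose stored in that shot.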

In this chapter, I describe how I implemented the plugin and the software I used to do so, namely ROS and OpenCV.

The evaluation took place in an office environment. That was suboptimal, since such an environment is well suited to cameras and the existing LIDAR localization was already working well there. Nonetheless, it was possible to confirm that the plugin works.

A summary and a discussion of what could be improved, mostly regarding speed.


What did I like?
Finding a common thread through the work so that the chapters interlock.

What was difficult?
Writing a lot. Some things I did simply because they worked at the time, which made it hard to find reasons for them afterward.

No, not really.

One thought on “Visuelle Lokalisierung für den rob@work 3”

  1. Hey Tobias,

    In your introduction, I like that you start with an explanation of your motivation and the problem you want to solve. I know you mention the name of the robot in the title of your abstract and that the robot is related to the Fraunhofer Institute, but I can't picture what kind of robot you mean. So it would be interesting for me to know what kind of robot it is.
    After this, I get the impression that you can express your goal very well and know what you have to do. The structure that follows seems very logical to me. You start by explaining the basic literature and then take a look at related work. For me it is understandable and very useful that you explain what functions the robot already has and how you want to expand its functionality in relation to the basics presented in the chapter before. After the theoretical part, you talk about the implementation and the software you used. For me it is interesting to get some information about that software and not only its names, because the names alone tell me nothing about how it works or what one can do with it.
    I think the evaluation of your project is very useful. You consider the surroundings of your test and everything that was good, bad, and could be improved.
