Homework: Summary of “Warp Drive Research Key to Interstellar Travel”

Faster-than-light propulsion is an important part of many science-fiction works, with ‘Star Trek’ being one of the most well-known.

NASA is working on an experiment to determine the feasibility of real-world warp drives. It is led by Harold “Sonny” White, the head of the Johnson Space Center’s advanced propulsion programme. Since it is not possible for an object to travel faster than light, White tries to side-step the laws of physics by distorting space-time, thereby getting from A to B faster than light without actually accelerating the spacecraft beyond light speed.

The project is criticised by researchers who say that this is obviously impossible to achieve. It also receives only a small amount of funding, but the fact that it gets any funding at all lends the project some merit.

But White is not alone. Fueled by dreams and by recent discoveries of probably-habitable planets, many scientists and engineers are trying to make interstellar travel a reality. Non-government organisations, namely the “100 Year Starship” project, the “Tau Zero Foundation” and “Icarus Interstellar”, are taking practical approaches to this problem. “Icarus Interstellar”, for example, wants to use fusion reactors to power spaceships, which would increase their speed by a factor of a thousand compared to today’s propulsion systems. But to this day fusion technology is still waiting for its breakthrough, and there are no working prototypes of a fusion reactor.

These higher speeds are needed because currently available propulsion systems would take more than 70,000 years to reach the nearest star with habitable planets. There are other problems as well. Interstellar space is not really empty, and destructive high-speed collisions with microscopic objects call for heavy shielding. This, and the need to actively decelerate the spacecraft at the target location, increases the amount of fuel that has to be carried along.
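As a quick sanity check of that figure (my own back-of-the-envelope numbers, not from the article): Proxima Centauri is about 4.24 light-years away, and Voyager 1, the fastest outbound probe we have, travels at roughly 17 km/s.

    # Back-of-the-envelope check of the ~70,000-year claim.
    # Numbers are my own assumptions: Proxima Centauri distance, Voyager-1 speed.
    LIGHT_YEAR_KM = 9.461e12            # kilometres per light-year
    distance_km = 4.24 * LIGHT_YEAR_KM  # distance to Proxima Centauri
    speed_km_s = 17.0                   # roughly Voyager 1's speed
    SECONDS_PER_YEAR = 3.156e7
    years = distance_km / speed_km_s / SECONDS_PER_YEAR
    print(f"{years:,.0f} years")        # ~74,800 years, consistent with "more than 70,000"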

While these problems seem so overwhelming that they might explain why we have never met extraterrestrial life, non-government organisations are increasing their efforts to colonise space. Advocates of these organisations point out that it might be critical to the survival of our species, since we could currently go extinct if a planetary catastrophe happened, and since the terraforming of a whole planet like Mars is even harder than travelling to an already habitable one.

Homework: Structure of a Scientific Manuscript

Task 1: check
Task 2:
1.: System Architecture for the Urban IoT.
2.: Automatic generation of string test cases via MO-optimization.

Task 3:

Edit: Multi-Agent Systems

(Example-based) shape synthesis – I think I would like that the most, like this, but I do not have a good enough overview to say whether there is enough to write about. This also includes shape analysis and classification; the applications would be procedural generation or shape morphing.

Non-photorealistic Rendering – also a topic I am interested in; there should definitely be enough to write about.

Deep Learning – with a focus on computer graphics applications, see here.

Visual Localization for the rob@work 3

Abstract
This thesis is about localization with the help of visual landmarks, with the goal of improving the localization of robots in unstructured environments. Since a sensor-fusion localization is already available, it is investigated whether a conventional method of visual localization can be integrated into it. The functionality of this integration is then verified.

Introduction
In the motivation, I talk briefly about the formal definition of localization and about the fact that it is achieved with the help of sensors. Then I introduce the topic of sensor fusion to the reader. First, I discuss the problems of using only a single sensor, the biggest of which is that it stops working completely if the robot is used in environments not suited to that sensor. Then I continue with homogeneous sensor fusion, which means using several sensors of the same type; it can mitigate some of the minor problems of using only one sensor, but not the aforementioned biggest one. That one can only be overcome by heterogeneous sensor fusion, which combines sensors of different classes, so that if one sensor fails, others can take over its job.
After that, I go on to specify the circumstances under which the thesis was created and the goals it should fulfill. To summarize: the Fraunhofer IPA has an industrial assistance robot project which is in use by customers. It uses only homogeneous sensor fusion with LIDAR sensors, but a new requirement came up: the robot should be able to navigate in environments where laser sensors usually fail. My task was to create a prototype of a visual localization which could be integrated into the existing sensor-fusion localization, and to evaluate it both stand-alone and in concert with the other sensors.

Basics
In this chapter, I refresh the basic computer vision knowledge relevant to the thesis, i.e. how to extract feature points from images and methods to estimate their positions in 3D space relative to the camera.
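A minimal sketch of what this chapter covers, using OpenCV’s Python bindings (the detector choice, file names and camera intrinsics are my assumptions for illustration, not necessarily what the thesis uses): feature points are extracted from two views, matched, and the relative camera pose is recovered, after which the matches could be triangulated into 3D positions.

    import cv2
    import numpy as np

    # Two grayscale views of the same scene (placeholder file names).
    img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

    # Detect ORB feature points and compute binary descriptors.
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match binary descriptors with Hamming distance.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # With known intrinsics K (example values), recover the relative pose
    # from the essential matrix; matched points could then be triangulated.
    K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K)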

Related work
Here I give an overview of work which used the methods described in the basics chapter to localize a robot. I point out the differences in requirements and try to explain why I decided to base my implementation on the papers I chose in the end.

Concept
In this chapter, I explain the existing sensor-fusion localization, which works with abstracted features. There are point, line, and pose features. Each plugin for the sensor-fusion localization has to decide on a feature type and must have a way to create a feature map. A feature map must associate each feature with additional information which allows for the recognition of the abstract feature and its position relative to the robot.
If the absolute map position and the relative position of a feature are known, one can deduce the position of the robot. So when the robot tries to localize itself, every plugin tries to associate seen features with known features from the feature map. These associations are then fed into an extended Kalman filter which outputs the final position of the robot.
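A worked example of that deduction, in 2D for simplicity (my own illustration with made-up numbers; the actual system presumably works with richer pose representations): if the feature’s pose in the map frame and its pose relative to the robot are both expressed as homogeneous transforms, the robot’s pose is the map pose composed with the inverse of the relative observation.

    import numpy as np

    def se2(x, y, theta):
        """Homogeneous transform for a 2D pose (x, y, heading)."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, x],
                         [s,  c, y],
                         [0,  0, 1]])

    # Illustrative numbers: the feature's pose in the map frame, and the
    # same feature as observed relative to the robot.
    T_map_feature = se2(5.0, 2.0, 0.3)
    T_robot_feature = se2(1.5, -0.5, 0.1)

    # T_map_robot @ T_robot_feature = T_map_feature
    # => T_map_robot = T_map_feature @ inv(T_robot_feature)
    T_map_robot = T_map_feature @ np.linalg.inv(T_robot_feature)
    x, y = T_map_robot[0, 2], T_map_robot[1, 2]
    theta = np.arctan2(T_map_robot[1, 0], T_map_robot[0, 0])
    print(f"robot at ({x:.2f}, {y:.2f}), heading {theta:.2f} rad")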

After that, I explain how I integrated a visual localization method into this system. I decided to use so-called “landmark shots”. A “landmark shot” contains a pose, which represents the position at which the picture was taken; this pose is used as the feature. Further, it contains many recognizable feature points which allow the robot to associate seen features with that specific “landmark shot” and to compute its position relative to the position from where the original “landmark shot” was taken.
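A sketch of how such an association could look (the data layout and function are my assumptions for illustration, not the thesis’s actual plugin interface): if a landmark shot stores its descriptors and the 3D positions of its feature points in its own frame, the current camera pose relative to the shot can be estimated with PnP.

    import cv2
    import numpy as np

    def match_landmark_shot(shot, des_now, kp_now, K):
        """Estimate the camera pose relative to a stored landmark shot.

        `shot` is assumed (my assumption) to hold ORB descriptors and the
        3D positions of its feature points in the shot's own frame.
        """
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(shot["descriptors"], des_now)
        if len(matches) < 6:
            return None  # too few matches to associate this shot

        obj_pts = np.float32([shot["points_3d"][m.queryIdx] for m in matches])
        img_pts = np.float32([kp_now[m.trainIdx].pt for m in matches])

        # PnP with RANSAC: pose of the shot's frame relative to the camera.
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
        return (rvec, tvec) if ok else None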

Implementation
In this chapter, I talk about how I implemented the plugin and the software I used to do that, namely ROS and OpenCV.
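A minimal sketch of the ROS side (topic names and the publish-a-pose flow are my placeholders; the actual plugin hooks into the existing sensor-fusion framework instead of publishing a pose directly):

    import rospy
    from sensor_msgs.msg import Image
    from geometry_msgs.msg import PoseStamped
    from cv_bridge import CvBridge

    bridge = CvBridge()
    pose_pub = None

    def on_image(msg):
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="mono8")
        # ... extract features, match against landmark shots, estimate pose ...
        pose = PoseStamped()
        pose.header = msg.header
        pose_pub.publish(pose)

    if __name__ == "__main__":
        rospy.init_node("visual_localization_sketch")
        # Placeholder topic names, not the real plugin's interface.
        pose_pub = rospy.Publisher("visual_pose", PoseStamped, queue_size=1)
        rospy.Subscriber("camera/image_raw", Image, on_image)
        rospy.spin()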

Evaluation
The evaluation took place in an office environment, which was suboptimal, since such an environment is well suited to cameras and the existing LIDAR localization was already working well there. Nonetheless, it was possible to confirm that the plugin works.

Conclusion
Summary and what could be improved, mostly regarding speed.

————————————————————————————-

What did I like?
Finding a common thread through the work, so that the chapters mesh with each other.

What was difficult?
Writing a lot. Some things I did simply because they worked at the time, but that made it hard to find reasons for them afterward.

Satisfied?
No, not really.