Tag Archives: homework

Fighting for Breath

 

The article is about air pollution and how pollution relates to health problems. The author states that this relationship is not completely explored and that even the air inside houses is affected. After mentioning the political situation, the author closes with an appeal to the reader to reduce pollution. This seems like a logical structure when talking about this topic. The length of the paragraphs appears to be balanced and the topics are addressed in appropriate depth.

The article seems concise: every point is discussed at sufficient length and no drawn-out paragraph catches the eye. The separation of the text is good; in every paragraph, one idea is discussed. The sentences at the beginning are short and straightforward, but some of the later ones are too long.

The argumentation and explanations seem reasonable and I can follow them. I personally like the author's analogies, but sometimes they are not appropriate. References are missing, but that is to be expected as this is not a scientific paper.

 

Algorithm descriptions

LinearSearch(v,l) searches the items in list l for the value v. It returns the position of the value, or n if the value is not in the list, where n is the length of l. The value is compared to every element in the list, denoted by l(i).

  1. (Initialize counter i.) Set i <- 0.
  2. (Loop through every item in l.)
    1. If l(i) = v, return i.
    2. i <- i+1.
  3. return n.
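As an illustration, here is a direct Python transcription of these steps (my own sketch, not part of the original description); indices start at 0, matching the description:

def linear_search(v, l):
    """Return the position of v in l, or len(l) if v is not in the list."""
    i = 0                      # 1. initialize counter i
    while i < len(l):          # 2. loop through every item in l
        if l[i] == v:          # 2.1 value found at position i
            return i
        i += 1                 # 2.2 advance the counter
    return len(l)              # 3. value not found: return n

# Example: linear_search(7, [4, 7, 1]) returns 1; linear_search(9, [4, 7, 1]) returns 3.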

InsertionSort(l) sorts the list l and returns it. It loops through each element of the list, finds its location in the already sorted part of the list, and inserts it there. The sorted part consists of every item visited so far.

  1. (Initialize counter i.) Set i <- 1.
  2. (Sort the list.) Loop through every item in l.
    1. (Initialize counter j.) j is used to search for the correct position to insert l(i). Set j <- i.
    2. While j is still a valid index (j > 0) and the predecessor of l(j) is greater than l(j) (l(j-1) > l(j)):
      1. Swap l(j) and l(j-1).
      2. j <- j-1.
    3. (Advance to the next item.) i <- i+1.
  3. return l.
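A small Python sketch of the same procedure (my own transcription, using 0-based indices):

def insertion_sort(l):
    """Sort the list l in place and return it."""
    for i in range(1, len(l)):               # 2. loop through every item
        j = i                                # 2.1 start searching from position i
        while j > 0 and l[j - 1] > l[j]:     # 2.2 predecessor is greater than l(j)
            l[j], l[j - 1] = l[j - 1], l[j]  # 2.2.1 swap l(j) and l(j-1)
            j -= 1                           # 2.2.2 move one position to the left
    return l                                 # 3. the list is now sorted

# Example: insertion_sort([4, 2, 5, 1]) returns [1, 2, 4, 5].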

MergeSort(l) sorts the list l with a divide-and-conquer approach. That means the data is broken down into parts which are processed individually. After that, the preprocessed parts are merged back together again. Overall, the algorithm has a complexity of O(n log n).

Input: unsorted list l.

Output: sorted list l.

 

The major steps of the algorithm are as follows:

  1. Recursive subdivision of the list until each sublist contains one item.
  2. Merging of the sublists until only one remains. This is the sorted list.

We now examine these steps in detail.

  1. (Subdivision.)
    1. If a list is empty or has only one element, it is sorted by definition. In this case, return l.
    2. left, right <- split(l).
      1. This evenly divides all items of l into left and right. This has a constant complexity, since only the midpoint of l needs to be computed.
    3. left <- MergeSort(left)
    4. right <- MergeSort(right)
      1. Recursive calls are used to further subdivide the sublists. This is done until the whole list is divided into sublists of length one. These sublists are then combined in the Merge function. The recursion depth is logarithmic, since the input for the recursive calls is halved at each recursion step.
    5. return Merge(left,right)

function Merge(left, right)

This helper function, called by the MergeSort function, does the actual sorting. The merge has a linear complexity, because each element of the input lists is merged into the result list exactly once.

  1. result <- empty_list.
  2. While neither list is empty, merge both lists using a zipper principle. Both left and right are sorted, meaning the first element of each is its smallest item. To merge the sublists, the first elements of both sublists are compared and the smaller one is appended to result.
    1. if head(left) <= head(right) then
      1. result <- result + head(left)
      2. left <- tail(left)
    2. else
      1. result <- result + head(right)
      2. right <- tail(right)
    3. head(list) returns the first element of list and tail(list) returns every element of list except the first one.
  3. (One of the sublists is now empty.) If the other sublist has any elements left, simply append these elements to result.
  4. return result. This is the sorted list.
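Putting MergeSort and Merge together, a compact Python sketch (my own; it uses list slicing and index counters instead of head/tail, but follows the same steps):

def merge_sort(l):
    """Sort the list l with the divide-and-conquer scheme described above."""
    if len(l) <= 1:                 # empty or one-element lists are sorted by definition
        return l
    mid = len(l) // 2               # constant-time split at the midpoint
    left = merge_sort(l[:mid])      # recursively sort the left half
    right = merge_sort(l[mid:])     # recursively sort the right half
    return merge(left, right)

def merge(left, right):
    """Merge two sorted lists into one sorted list; linear in the total length."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):   # neither list is exhausted yet
        if left[i] <= right[j]:               # compare the heads, take the smaller one
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])                   # one list is empty: append the leftovers
    result.extend(right[j:])
    return result

# Example: merge_sort([3, 1, 4, 1, 5]) returns [1, 1, 3, 4, 5].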

Efficient Graph-based Document Similarity

What phenomena or properties are being investigated?
Quality and speed of an ‘efficient’ method of ‘similar-document search’.
The innovative idea of the method is to ‘semantically expand’ documents as a pre-processing step rather than at search time, and to use a new similarity measure which combines hierarchical and transversal information.

Why are those phenomena or properties of interest?
Used in ‘many’ applications: document retrieval, recommendation …

Has the aim of the research been articulated?
I think it is “improving similar-document search”.

What are the specific hypotheses and research questions?
Does the new method for storing and querying improve the speed and quality of the recommendations?

Are these elements convincingly connected to each other?
I think you could separate this paper into two. One for the speed of the retrieval using known similarity measures and one for the quality of the new measure.
But I do not know much about this area; maybe those two are interlinked somehow.

To what extent is the work innovative? Is this reflected in the claims?
A new method of storing and retrieval and a new similarity measure are presented.
The work claims to outperform related works in quality and speed of the retrieval.

What would disprove the hypothesis? Does it have any improbable consequences?
‘Better similarity’ could be disproved by a test on another dataset with different characteristics.
‘Better speed’: I did not find proof for that claim. The timings of the approach were reported but not compared to anything.

What are the underlying assumptions? Are they sensible?
Knowledge-Graph based similarity measures are better than word-distribution-based ones = sensible.
Inverted-Index based searches are fast = sensible.
Creating a candidate set fast and dirty and then using a slower but better algorithm on that set is faster than a full search without sacrificing too much quality = sensible (see the small sketch below).
Two benchmarks are enough to say it outperforms = not sensible.
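As an aside, a toy Python sketch of that third assumption, the candidate-set-then-rerank pattern (entirely my own illustration; the scoring functions are placeholders, not the paper's measures):

def cheap_score(query, doc):
    """Fast, rough relevance estimate (placeholder: simple word overlap)."""
    return len(set(query.split()) & set(doc.split()))

def better_score(query, doc):
    """Slower, higher-quality estimate (placeholder for the expensive measure)."""
    q, d = query.lower().split(), doc.lower().split()
    return sum(1 for w in q if w in d) / (len(q) or 1)

def search(query, docs, candidates=100, results=10):
    """Narrow docs down with the cheap score first, then re-rank only that set."""
    shortlist = sorted(docs, key=lambda d: cheap_score(query, d), reverse=True)[:candidates]
    return sorted(shortlist, key=lambda d: better_score(query, d), reverse=True)[:results]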

Has the work been critically questioned? Have you satisfied yourself that it is sound science?
It was accepted into the 13th ESWC 2016 (European Semantic Web Conference). So at least someone has looked at it.

What forms of evidence are to be used?
Experiments.

How is the evidence to be measured? Are the chosen methods of measurement objective, appropriate and reasonable?
They use what sound like established measures for retrieval quality, and they used them on established benchmarks. OK.

For speed they use time measurements, but they do not state what hardware was used for the experiment or whether the competing approaches even ran on the same hardware with the same experimental setup. NOT OK.

What are the qualitative aims, and what makes the quantitative measures you have chosen appropriate to those aims?
There is a quantitative measure for quality with higher numbers being better.
Same for speed but with lower numbers being better.

What compromises or simplifications are inherent in your choice of measure?
The time an algorithm takes to execute is dependent on many things (CPU speed, background processes etc.) and in this case also depends on the network speed.

Will the outcome be predictive?
No; a different setup or a different dataset can bring different results.

What is the argument that will link the evidence to the hypothesis?
Better numbers (higher for quality, lower for speed) than the others prove the hypothesis, since it just claims to be better than the others.

To what extent will positive results persuasively confirm the hypothesis? Will negative results disprove it?
To a small extent since there is only a small number of test data sets. Negative results will disprove the hypothesis since it is so general.

What are the likely weaknesses of or limitations to your approach?
Since the claim is so general, it is hard to prove.

Abstract

Multi-agent systems model how different autonomous agents, with limited knowledge, interact with each other in a shared environment. A common use case is to treat the agents as a team and give them a goal which can only be achieved by multiple agents.

There is a large number of parameters that the operator of such teams needs to set in order for the team to interact successfully and reach its goal. Therefore, methods from the machine-learning domain are used to automatically explore the possible parameters and find the best ones.

This survey will concentrate on research where the team size is greater than two or three. First, I present the history and background of multi-agent systems. After that, I give an overview of the problems which arise when large teams and machine learning are used together. Then, I show existing algorithms which can cope with those problems. Finally, I discuss what further research can be done in the area.

Punctuation Homework

  • We live in the era of Big Data, with storage and transmission capacity measured not just in terabytes but in petabytes (where peta- denotes a quadrillion, or a thousand trillion). Data collection is constant and even insidious, with every click and every “like” stored somewhere for something. This book reminds us that data is anything but “raw,” that we shouldn’t think of data as a natural resource but as a cultural one that needs to be generated, protected, and interpreted. The book’s essays describe eight episodes in the history of data from the predigital to the digital. Together they address such issues as the ways that different kinds of data and different domains of inquiry are mutually defining; how data are variously “cooked” in the processes of their collection and use; and conflicts over what can or can’t be “reduced” to data. Contributors discuss the intellectual history of data as a concept, describe early financial modeling and some unusual sources for astronomical data, discover the prehistory of the database in newspaper clippings and index cards, and consider contemporary “dataveillance” of our online habits as well as the complexity of scientific data curation.
  • During succession, ecosystem development occurs; but in the long-term absence of catastrophic disturbance, a decline phase eventually follows. We studied six long-term chronosequences in Australia, Sweden, Alaska, Hawaii, and New Zealand; for each, the decline phase was associated with a reduction in tree basal area and an increase in the substrate nitrogen-to-phosphorus ratio, indicating increasing phosphorus limitation over time. These changes were often associated with reductions in litter decomposition rates, phosphorus release from litter and biomass, and activity of decomposer microbes. Our findings suggest that the maximal biomass phase reached during succession cannot be maintained in the long-term absence of major disturbance, and that similar patterns of decline occur in forested ecosystems spanning the tropical, temperate, and boreal zones.
  • Facebook’s Graph API is an API for accessing objects and connections in Facebook’s social graph. To give some idea of the enormity of the social graph underlying Facebook, it was recently announced that Facebook has 901 million users, and the social graph consists of many types beyond just users. Until recently, the Graph API provided data to applications in only a JSON format. In 2011, an effort was undertaken to provide the same data in a semantically enriched RDF format containing Linked Data URIs. This was achieved by implementing a flexible and robust translation of the JSON output to a Turtle output. This paper describes the associated design decisions, the resulting Linked Data for objects in the social graph, and known issues.

Research Homework

I decided to narrow down my original topic of “Multi-Agent Systems” to something like “How can a team of agents learn to achieve a goal cooperatively?”.

  1. Shoham, Y., & Leyton-Brown, K. (2009). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press. – The foundations are important; it also gives a good overview of the whole field.
  2. Stone, P., & Veloso, M. M. (2000). Multiagent Systems: A Survey from a Machine Learning Perspective. Autonomous Robots, 8(3), 345-383. –  Survey on how Machine Learning is used in Multi-Agent Systems in general.
  3. Panait, L., & Luke, S. (2005). Cooperative Multi-Agent Learning: The State of the Art. Autonomous Agents and Multi-Agent Systems, 11, 387-434. – Survey on how a team of agents can learn to cooperate to achieve a goal.
  4. Byrski, A., Dreżewski, R., Siwik, L., & Kisiel-Dorohinicki, M. (2015). Evolutionary multi-agent systems. The Knowledge Engineering Review, 30(02), 171-186.  – Describes an Evolutionary Approach to Multi-Agent Learning.
  5. Buşoniu, L., Babuška, R., & De Schutter, B. (2008). A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man and Cybernetics Part C: Applications and Reviews. – Describes how to use RL to teach a team of Agents.

Homework: Summary of “Warp Drive Research Key to Interstellar Travel”

Faster-than-light propulsion is an important part of many science-fiction works, with ‘Star Trek’ being one of the most well-known.

NASA is working on an experiment to determine the feasibility of real-world warp drives. It is conducted by Harold “Sonny” White, the head of the Johnson Space Center’s advanced propulsion programme. Since it is not possible for an object to travel faster than light, White tries to side-step the laws of physics by distorting space-time and thereby getting from A to B faster than light, without actually accelerating the spacecraft beyond light speed.

The project is criticised by researchers, who say that this is obviously impossible to achieve. It also receives only a small amount of funding, but the fact that it gets funding at all gives the project some merit.

But White is not alone. Fueled by dreams and recent discoveries of probably habitable planets, lots of scientists and engineers try to make interstellar travel a reality. These non-government organisations, namely the “100 Year Starship project”, the “Tau Zero Foundation” and “Icarus Interstellar”, are trying practical approaches to tackling this problem. “Icarus Interstellar”, for example, wants to use fusion generators to power spaceships, which would increase the speed of spaceships by a factor of a thousand compared to today’s propulsion systems. But to this day fusion technology is waiting for a breakthrough, and there are no working prototypes of a fusion generator.

These higher speeds are needed because currently available propulsion systems would need more than 70,000 years to reach the next star with habitable planets. There are other problems as well. Interstellar space is not really empty, and destructive high-speed collisions with microscopic objects call for heavy shielding. This and the need for active deceleration of the spacecraft at the target location increase the amount of fuel that needs to be carried along.

While these problems seem so overwhelming that they might explain why we have never met extraterrestrial life, non-government organisations are increasing their efforts to colonise space. Advocates of these organisations point out that it might be critical to the survival of our species, since we can currently go extinct if a planetary catastrophe happens and since the terraforming of a whole planet like Mars is even

homework – structure of scientific manuscript

Task 1: check
Task 2:
1.: System Architecture for the Urban IoT.
2.: Automatic generation of string test cases via MO-optimization.

Task 3:

Edit: Multi-Agent Systems

(Example-based) shape synthesis – I think I would like that one the most, like this, but I do not have a good enough overview to say whether there is enough to write about. This also includes shape analysis and classification; the applications would be procedural generation or shape morphing.

Non-photorealistic Rendering – also a topic I am interested in; there should definitely be enough to write about.

Deep Learning – with a focus on computer graphics applications, see here.

Visuelle Lokalisierung für den rob@work 3 (Visual Localisation for the rob@work 3)

Abstract
This thesis is about localization with the help of visual landmarks, to improve the localization of robots in unstructured environments. Since a sensor-fusion localization is already available, it will be investigated whether a conventional method of visual localization can be integrated into it. The functionality of this integration will be verified.

Introduction
In the motivation, I talk briefly about the formal definition of localization and that it is achieved with the help of sensors. Then I introduce the topic of sensor fusion to the reader. First, I discuss the problems of using only a single sensor, the biggest of which is that it completely stops working if the robot is used in environments not suited to the sensor. Then I continue with homogeneous sensor fusion, which means using more sensors of the same type; it can mitigate some of the minor problems of using only one sensor, but not the aforementioned biggest one. That can only be overcome by heterogeneous sensor fusion, which combines sensors of different classes so that if one sensor fails, others can take over the job.
After that, I go over the circumstances under which the thesis was created and the goals it should fulfill. To summarize, the Fraunhofer IPA has an industrial assistance robot (the rob@work 3) which is in use by customers. It uses only homogeneous sensor fusion with LIDAR sensors, but a new requirement came up: the robot should be able to navigate in environments where laser sensors usually fail. My task was to create a prototype of a visual localization which could be integrated into the existing sensor-fusion localization and to evaluate it stand-alone and in concert with the other sensors.

Basics
In this chapter, I review the basic computer vision knowledge relevant to the thesis, i.e. how to extract feature points from images and methods to estimate their positions in 3D space relative to the camera.
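For illustration, a minimal OpenCV sketch of the kind of feature extraction and matching meant here (my own example; the detector, parameters, and file names are assumptions, not necessarily the ones used in the thesis):

import cv2

# Load two grayscale images (placeholder file names).
img_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB feature points and compute binary descriptors for both images.
orb = cv2.ORB_create(nfeatures=500)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Brute-force matching with Hamming distance, which suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

# The matched 2D points could then go into a pose estimator such as cv2.solvePnP
# once the 3D positions of the corresponding landmarks are known.
print(len(matches), "matches found")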

Related work
Here I give an overview of work which used the methods described in the basics chapter to localize a robot. I point out the differences in the requirements and try to explain why I decided to base my implementation on the papers I chose in the end.

Concept
In this chapter, I explain the existing sensor-fusion localization, which works with abstracted features. There are point, line, and pose features. Each plugin for the sensor-fusion localization has to decide on a feature type and must have a way to create a feature map. A feature map must associate each feature with additional information which allows recognizing the abstract feature and determining its position relative to the robot.
If the absolute map position and the relative position of a feature are known, one can deduce the position of the robot. So when the robot tries to localize itself, every plugin tries to associate observed features with known features from the feature map. These associations are then fed into an extended Kalman filter, which outputs the final position of the robot.

After that, I explain how I integrated a visual localization method into this system. I decided to use so-called “landmark shots”. A “landmark shot” contains a pose, which represents the position at which the picture was taken; this pose is used as the feature. Further, it contains many recognizable feature points which allow the robot to associate observed features with that specific “landmark shot” and to compute its position relative to the position from where the original “landmark shot” was taken.
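To make the idea more concrete, a hypothetical sketch of what a “landmark shot” and its association step could look like (names and structure are my own simplification, not the actual plugin code):

from dataclasses import dataclass
import numpy as np

@dataclass
class LandmarkShot:
    pose: np.ndarray         # map position and orientation where the picture was taken
    keypoints: np.ndarray    # Nx2 pixel coordinates of the recognizable feature points
    descriptors: np.ndarray  # NxD binary descriptors used to recognize this shot again

def count_matches(des_a, des_b, max_dist=16):
    """Crude descriptor matching: count descriptors in des_a whose nearest
    descriptor in des_b differs in fewer than max_dist byte positions."""
    dists = np.count_nonzero(des_a[:, None, :] != des_b[None, :, :], axis=2)
    return int(np.sum(dists.min(axis=1) < max_dist))

def associate(current_descriptors, shots, min_matches=30):
    """Return the landmark shot that best matches the current camera image,
    or None if no shot reaches the minimum number of matches."""
    best, best_count = None, 0
    for shot in shots:
        count = count_matches(current_descriptors, shot.descriptors)
        if count > best_count:
            best, best_count = shot, count
    return best if best_count >= min_matches else None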

Implementation
In this chapter, I talk about how I implemented the plugin and the software I used to do that, namely ROS and OpenCV.

Evaluation
The evaluation took place in an office environment, which was suboptimal since such an environment is well suited to cameras and the existing LIDAR localization was already working well there. Nonetheless, it was possible to confirm that the plugin was working.

Conclusion
Summary and what could be improved, mostly regarding speed.

————————————————————————————-

What did I like?
Finding a common thread through the work so that the chapters mesh with each other.

What was difficult?
Writing a lot. Some things I just did because they worked at the time, but that made it hard to find reasons for them afterward.

Satisfied?
No, not really.