Algorithm

I finally completed the homework for session 9.

Choose a simple algorithm and a standard description of it, such as linear search in a sorted array. Rewrite the algorithm in prosecode. Repeat the exercise with a more interesting algorithm, such as heapsort. Now choose an algorithm with an asymptotic cost analysis. Rewrite the algorithm as literate code, incorporating the important elements of the analysis into the algorithm’s description.

You can find my algorithm descriptions here: algorithms
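
As a taste of the first exercise, here is a minimal Python sketch of linear search in a sorted array (my own illustration, not the linked description). The early exit is exactly the detail a prosecode rewrite would weave into the prose: because the array is sorted, an unsuccessful search can stop as soon as an element exceeds the target.

```python
def linear_search_sorted(array, target):
    """Linear search in a sorted array; returns the index of target or -1.

    Worst case is O(n), but because the array is sorted we can stop
    early: once an element exceeds the target, the target cannot
    occur further right.
    """
    for index, value in enumerate(array):
        if value == target:
            return index      # found
        if value > target:
            break             # sortedness: target cannot appear later
    return -1                 # not found

assert linear_search_sorted([1, 3, 5, 7, 9], 7) == 3
assert linear_search_sorted([1, 3, 5, 7, 9], 4) == -1
```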

Fighting for breath – Style analysis

The article was published in a radio listings magazine/newspaper, so we cannot expect the same stylistic standards as in the scientific writing we have discussed in the seminar. The article lacks references and has an overall more casual style. We would consider these flaws in scientific writing but can accept them in this type of publication.

The casual style shows in the use of contractions (“isn’t”, “wasn’t”, …) and exclamation marks (“near my home!”, “on the back of the truck!”). Furthermore, some vague analogies are used to achieve a more colorful description, which might be unfamiliar to non-native speakers (“thick like pea soup”, “political minefield”, “uphill struggle”). Naturally, the title follows this casual style as well, as it is more catchy than informative. Note that these points are acceptable in popular science writing but should be avoided in proper scientific work.

Other guidelines, however, hold for both kinds of writing, so in the following I will point out where the article adheres to or violates these more generally applicable guidelines. The article has a clear narrative thread that is easy to follow. This is achieved through simple sentence structure and simple language. Furthermore, the balance of sentence and paragraph lengths is well chosen, so the text is neither monotonous nor disruptive to read. But the article also has some minor flaws. Sometimes it gives unnecessary information, like the numbers of cars, vans, and trucks in traffic, which merely illustrate the huge number of vehicles producing exhaust gases. Moreover, the article makes extensive use of parentheses to give extra information, which sometimes breaks the reading flow. Lastly, many words are printed in italics for emphasis without any need for it.

To conclude, the article shows an overall good style for popular science, because it provides a clear narrative thread and uses simple sentence structure and language, but it has minor flaws that can break the reading flow.

Zobel’s Checklist

Since I don’t have a master’s thesis topic yet, I used the paper:

Paul, C., et al.: Efficient Graph-Based Document Similarity. In: Proceedings of the ESWC 2016. http://doi.org/10.1007/978-3-319-34129-3_21

Regarding hypotheses and questions,

  • What phenomena or properties are being investigated? Why are they of interest?
    • The paper investigates automated document similarity measurement, which is used, for example, for article recommendations on newspaper websites. (A minimal baseline sketch follows after this checklist block.)
  • Has the aim of the research been articulated? What are the specific hypotheses and research questions? Are these elements convincingly connected to each other?
    • The aim is to present a graph-based algorithm to measure the semantic similarity of documents that:
      • (i) provides higher correlation with the human notion of similarity than similar approaches
      • (ii) upholds the first hypothesis even for small documents with few annotations
      • (iii) is more efficient than other graph-based approaches
  • To what extent is the work innovative? Is this reflected in the claims?
    • The algorithm is said to be more efficient than other graph-based algorithms, and the similarity measure used is said to correlate better with the human notion of similarity than comparable ones.
  • What would disprove the hypothesis? Does it have any improbable consequences?
    • (i) and (ii): similar approaches providing equal or higher correlation with the human notion of similarity
    • (iii): an existing graph-based algorithm that is equally or more efficient with similar or better results
  • What are the underlying assumptions? Are they sensible?
    • Semantically annotated documents are assumed to be available as input. This is sensible because it keeps the focus on comparing the documents rather than analyzing them.
  • Has the work been critically questioned? Have you satisfied yourself that it is sound science?
    • The paper shows that the proposed algorithm is better than selected other algorithms on selected data. As I am not familiar with the topic of semantic document comparison, I cannot say whether the chosen data and reference algorithms are representative. Additionally, the paper never states the limitations of the proposed algorithm. So the results do not appear very trustworthy to me.
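
To make concrete what “measuring document similarity” means here, the following is a minimal baseline sketch of my own, not the paper’s graph-based algorithm: it computes the plain Jaccard overlap between the semantic annotation sets of two documents. The annotation IDs are hypothetical.

```python
# Naive document-similarity baseline (NOT the paper's algorithm):
# Jaccard overlap between the semantic annotation sets of two documents.

def jaccard_similarity(annotations_a, annotations_b):
    """Share of annotations that the two documents have in common."""
    if not (annotations_a or annotations_b):
        return 0.0
    return len(annotations_a & annotations_b) / len(annotations_a | annotations_b)

# Hypothetical DBpedia-style annotation IDs, for illustration only.
doc_a = {"dbpedia:Barack_Obama", "dbpedia:White_House", "dbpedia:United_States"}
doc_b = {"dbpedia:Barack_Obama", "dbpedia:United_States", "dbpedia:Election"}

print(jaccard_similarity(doc_a, doc_b))  # 0.5
```

The paper’s contribution lies precisely in going beyond such flat set overlap by exploiting the graph structure connecting the annotations, which is what its efficiency and correlation claims refer to.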

Regarding evidence and measurement,

  • What forms of evidence are to be used? If it is a model or a simulation, what demonstrates that the results have practical validity?
    • Experiments on two datasets (one standard benchmark set and one with small documents)
  • How is the evidence to be measured? Are the chosen methods of measurement objective, appropriate, and reasonable?
    • The authors use standard metrics that were also used to evaluate the reference algorithms, so the three criteria seem to be fulfilled.
  • What are the qualitative aims, and what makes the quantitative measures you have chosen appropriate to those aims?
    • The authors want to show that their proposed algorithm comes closer to the human notion of similarity and works more efficiently than the reference algorithms. Using the standard metrics is appropriate for that. (A sketch of what such a metric computes follows after this block.)
  • What compromises or simplifications are inherent in your choice of measure?
    • I am not very familiar with the measures used, so I cannot say anything here.
  • Will the outcomes be predictive?
    • Yes, the hypotheses predict higher similarity scores and lower execution times for the proposed algorithm in comparison to similar ones.
  • What is the argument that will link the evidence to the hypothesis?
    • The quantitative measures allow a direct comparison of the proposed algorithm with the reference algorithms. (Is it faster? Is it closer to the human notion of similarity?)
  • To what extent will positive results persuasively confirm the hypothesis? Will negative results disprove it?
    • Since the authors do not state any constraints, they implicitly claim that their algorithm performs better than the reference ones under all circumstances. Therefore, a positive experimental result may strongly support their hypotheses but not fully confirm them. Negative results, however, would directly disprove them.
  • What are the likely weaknesses of or limitations to your approach?
    • I could not find any statements by the authors regarding weaknesses or limitations of their proposed approach.
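
To illustrate what the “standard metrics” in such evaluations typically compute, here is a small self-contained sketch: it correlates algorithm similarity scores with human judgments. The use of Spearman rank correlation and all numbers are my own assumptions for illustration, not taken from the paper.

```python
# Sketch of the kind of evaluation such papers typically run:
# correlate algorithm similarity scores with human judgments.
# Spearman rank correlation is one standard choice (my assumption,
# not a quote of the paper); the numbers below are made up.

def rank(values):
    """Map each value to its rank (1 = smallest); ties broken by position."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation (no tie correction)."""
    n = len(xs)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(rank(xs), rank(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

human_scores = [0.9, 0.2, 0.6, 0.4, 0.8]    # made-up human judgments
algo_scores  = [0.8, 0.1, 0.5, 0.6, 0.9]    # made-up algorithm output
print(spearman(human_scores, algo_scores))  # 0.8, close to 1 = good agreement
```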

Abstract of: Locomotion in immersive virtual environments

Abstract:

Changing the viewpoint is one of the basic interaction tasks in virtual environments (VEs). In traditional desktop setups the viewpoint is moved gradually by mouse, keyboard, or joystick input. Using the same technique in immersive setups, where nearly the complete field of view is stimulated by the VE, is likely to cause motion sickness in users because of a conflict between their visual and vestibular senses. Ideally, the user would physically walk around in the VE, but this is limited to the tracking space of the setup. In this paper I therefore survey virtual locomotion techniques for immersive VEs proposed in the literature. Since there is a large variety of immersive VE setups, I narrow the scope down to locomotion techniques for setups using only a head-mounted display and motion controllers, since these have recently become broadly available consumer hardware. To compare the different techniques, I organize them into a taxonomy and point out quality criteria. Finally, I conclude with a recommendation of locomotion techniques for exemplary applications.

Punctuation Game

We live in the era of Big Data, with storage and transmission capacity measured not just in terabytes but in petabytes (where peta- denotes a quadrillion, or a thousand trillion). Data collection is constant and even insidious, with every click and every “like” stored somewhere for something. This book reminds us that data is anything but “raw”; that we shouldn’t think of data as a natural resource but as a cultural one, which needs to be generated, protected, and interpreted. The book’s essays describe eight episodes in the history of data from the predigital to the digital. Together they address such issues as the ways that different kinds of data and different domains of inquiry are mutually defining; how data are variously “cooked” in the processes of their collection and use; and conflicts over what can or can’t be “reduced” to data. Contributors discuss the intellectual history of data as a concept, describe early financial modeling and some unusual sources for astronomical data, discover the prehistory of the database in newspaper clippings and index cards, and consider contemporary “dataveillance” of our online habits as well as the complexity of scientific data curation.



During succession, ecosystem development occurs, but in the long-term absence of catastrophic disturbance a decline phase eventually follows. We studied six long-term chronosequences in Australia, Sweden, Alaska, Hawaii, and New Zealand. For each, the decline phase was associated with a reduction in tree basal area and an increase in the substrate nitrogen-to-phosphorus ratio, indicating increasing phosphorus limitation over time. These changes were often associated with reductions in litter decomposition rates (phosphorus release from litter and biomass) and activity of decomposer microbes. Our findings suggest that the maximal biomass phase reached during succession cannot be maintained in the long-term absence of major disturbance, and that similar patterns of decline occur in forested ecosystems spanning the tropical, temperate, and boreal zones.


Facebook’s Graph API is an API for accessing objects and connections in Facebook’s social graph. To give some idea of the enormity of the social graph underlying Facebook, it was recently announced that Facebook has 901 million users, and the social graph consists of many types beyond just users. Until recently, the Graph API provided data to applications in only a JSON format. In 2011 an effort was undertaken to provide the same data in a semantically enriched RDF format containing Linked Data URIs. This was achieved by implementing a flexible and robust translation of the JSON output to a Turtle output. This paper describes the associated design decisions, the resulting Linked Data for objects in the social graph, and known issues.

Homework – Research, References, and Citation

Here I will present my top 5 references for my research topic “Locomotion in immersive virtual environments”. Note that this is only a small selection of the interesting titles (about 50) I have found so far. Additionally, I have not yet read all of them in depth, so their value for my project is not perfectly clear.

Side note:
The terms “locomotion technique”, “travel technique”, and “motion technique” may be used synonymously. They are generic terms for an interaction technique used in a virtual environment to change the user’s viewpoint (in a natural way).

[1]     Bowman, D. A., Koller, D., and Hodges, L. F. 1997. Travel in immersive virtual environments: an evaluation of viewpoint motion control techniques. In IEEE 1997 Annual International Symposium on Virtual Reality, 45–52. DOI=10.1109/VRAIS.1997.583043.

This article provides a taxonomy of motion techniques and compares selected ones. Thus it may give valuable guidelines on how to compare motion techniques in general.

[2]     Harm, D. L. 2002. Motion sickness neurophysiology, physiological correlates, and treatment. In Handbook of virtual environments. Design, implementation, and applications, K. M. Stanney, Ed. Human factors and ergonomics. Lawrence Erlbaum Associates, Mahwah, NJ, 637–661.

The Handbook of Virtual Environments gives in-depth information on designing virtual environments. This particular chapter covers the causes of motion sickness and possible treatments. This information may help to understand how to avoid motion sickness when moving in an immersive virtual environment.

[3]     Arns, L. L. 2002. A new taxonomy for locomotion in virtual environments. PhD thesis.

This PhD thesis may not be the highest-quality resource (grey literature), but it seems to provide in-depth information on the design of locomotion techniques as well as a taxonomy of them. References in this thesis might also lead to higher-quality resources.

[4]     Riecke, B. E. 2010. Compelling Self-Motion Through Virtual Environments without Actual Self-Motion – Using Self-Motion Illusions (“Vection”) to Improve User Experience in VR. Virtual Reality, 149–176.

Making users believe they are moving while they actually are not is one of the main challenges for locomotion in a virtual environment. This article provides an in-depth discussion of how to use (visually induced) self-motion illusions, so-called “vection”, for exactly this purpose.

[5]     Steed, A. and Bowman, D. A. 2013. Displays and Interaction for Virtual Travel. In Human Walking in Virtual Environments, F. Steinicke, Y. Visell, J. Campos and A. Lécuyer, Eds. Springer New York, New York, NY, 147–175. DOI=10.1007/978-1-4419-8432-6_7.

The book “Human Walking in Virtual Environments” contains guidelines and approaches for enabling (natural) walking in VR. I chose this particular chapter because it gives a general overview of the required input and output hardware as well as an outline of travel techniques suited for immersive virtual environments. You may recognize the second author (D. A. Bowman) from the first entry of my list; he is one of the main authors in the field of human–computer interaction.

Summary of “Warp Drive Research Key to Interstellar Travel”

In the article “Warp Drive Research Key to Interstellar Travel”, published on the Scientific American blog on 23 April 2014, Mark Alpert writes about the current challenges in interstellar travel research.

He begins with the story of Zefram Cochrane, a fictional physicist of the Star Trek universe who invented the warp drive in the year 2063, enabling the interstellar voyages of the starship Enterprise. This story leads to a real physicist, working at NASA’s Johnson Space Center in Houston, who is researching the very same topic. Harold “Sonny” White designed a tabletop experiment to create tiny distortions in spacetime. If his experiment succeeds, it could lay the foundation of a system that allows spacecraft to sidestep the physical speed limit of light speed. Instead of increasing the speed of the spacecraft, a bubble of warped spacetime is formed around the craft so that it could cross the vast distances between stars in a matter of weeks.

Mark writes that it is heartening to know that, despite the criticism of other physicists who do not believe in the success of White’s idea, the government has spent $50,000 anyway to explore this possible path to fulfilling the dream of interstellar travel. It is a dream shared by a surprising number of people, who hold academic conferences on the topic and have founded organizations like the 100 Year Starship project, the Tau Zero Foundation, and Icarus Interstellar, which seek to lay the groundwork for an unmanned interstellar mission that could be launched by the end of the century. Such a mission would help to explore the slew of Earth-like, potentially habitable planets that astronomers have discovered in recent years.

With traditional technology, probes would take thousands of years to reach planets in other solar systems. As an example, the article mentions NASA’s Voyager 1, traveling at 38,610 miles per hour, which left our solar system in 2012 after completing its primary mission to investigate Jupiter, Saturn, and their moons. At that speed it would take 70,000 years until Voyager 1 reaches any of the nearby stars that might harbor habitable planets.
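
As a quick plausibility check of that figure (my own back-of-the-envelope calculation; the article does not name a specific target star), assume the nearest star system, Alpha Centauri, at roughly 4.37 light years:

```python
# Rough check of the ~70,000-year figure. Assumption (not from the
# article): the target is Alpha Centauri at about 4.37 light years.

MPH_TO_KM_PER_S = 1.60934 / 3600       # miles per hour -> km/s
KM_PER_LIGHT_YEAR = 9.4607e12
SECONDS_PER_YEAR = 3600 * 24 * 365.25

speed_km_s = 38_610 * MPH_TO_KM_PER_S  # Voyager 1: ~17.3 km/s
distance_km = 4.37 * KM_PER_LIGHT_YEAR

years = distance_km / speed_km_s / SECONDS_PER_YEAR
print(f"{years:,.0f} years")           # ~76,000 years
```

which lands in the same ballpark as the article’s 70,000 years.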

As an alternative to the warp drive, the article refers to a mission proposed by Icarus Interstellar that uses nuclear fusion for propulsion, which many interstellar enthusiasts consider a more realistic approach. Used properly, it would allow speeds thousands of times faster than Voyager 1. But the technology is not ready yet: researchers have tried for the last 50 years to use fusion in a power plant, without success. Also, the huge amount of fuel required for traveling these vast distances presents researchers with a big problem, which is aggravated by the heavy shielding needed to protect the spacecraft against stardust collisions at high speeds, and by the need to decelerate from these high speeds. With regard to these enormous difficulties, Mark tries to explain the paradox first noted by physicist Enrico Fermi in 1950: even if intelligent life in the universe is common, extraterrestrials have perhaps never visited Earth because it is so hard to get here.

The article finishes with an argument by advocates of interstellar travel: planetary catastrophes threaten the long-term survival of the human race, so we must find a solution despite the difficulties. With this argument, Mark comes back to the Star Trek analogy and states that we need to adopt the motto of the starship Enterprise: “to boldly go where no man has gone before”.