task: a set of literature

So these are my five chosen papers for the survey on genealogical visualization. For each, I gathered information that lets me estimate its quality from the “outside” – that is, without considering the content – as well as a few sentences about its contribution.

In choosing the literature, I focused on a variety of techniques and applications – so not just human family relations. I also restricted the selection to two-dimensional representations with a focus on depicting many individuals – unlike, for instance, family psychology, which deals with only a few individuals.


Burlacu, B.; Affenzeller, M.; Kommenda, M.; Winkler, S. & Kronberger, G.
Visualization of genetic lineages and inheritance information in genetic programming
Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO), 2013, 1351–1358

 

Quality:

This paper was published in the companion proceedings of a conference. A professor from the FIN also took part in this conference. The paper’s authors all work in the same research group, which has its own web presence.

Content:

Burlacu et al. propose visualizing genealogies in evolutionary algorithms to investigate evolution-related phenomena. To examine changes in quality and genetic diversity over time, they present a generation-layered node-link diagram that maps an individual’s quality to the colour of its node. They thereby show an application of the survey’s topic in an area different from human family genealogy.
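To make the idea concrete, here is a minimal sketch of such a generation-layered node-link layout with a fitness-to-colour mapping – my own illustration with invented data and scales, not the authors’ implementation:

```typescript
// Sketch (not the authors' code): individuals are placed in columns by
// generation, edges connect children to parents, and each node's colour
// encodes its quality (fitness).

interface Individual {
  id: string;
  generation: number;   // layer index: 0 = initial population
  fitness: number;      // quality, normalised to [0, 1]
  parents: string[];    // ids of parent individuals (empty in generation 0)
}

const LAYER_GAP = 120;  // horizontal distance between generations
const NODE_GAP = 40;    // vertical distance between nodes within a layer

// Map fitness in [0, 1] onto a blue (poor) to red (good) colour ramp.
function fitnessColour(f: number): string {
  const r = Math.round(255 * f);
  const b = Math.round(255 * (1 - f));
  return `rgb(${r},0,${b})`;
}

// Compute node positions layer by layer and emit a tiny SVG string.
function renderLineage(pop: Individual[]): string {
  const byGeneration = new Map<number, Individual[]>();
  for (const ind of pop) {
    const layer = byGeneration.get(ind.generation) ?? [];
    layer.push(ind);
    byGeneration.set(ind.generation, layer);
  }
  const pos = new Map<string, { x: number; y: number }>();
  for (const [gen, layer] of byGeneration) {
    layer.forEach((ind, i) =>
      pos.set(ind.id, { x: 50 + gen * LAYER_GAP, y: 50 + i * NODE_GAP }));
  }
  const edges = pop.flatMap(ind =>
    ind.parents.map(p => {
      const a = pos.get(p)!, b = pos.get(ind.id)!;
      return `<line x1="${a.x}" y1="${a.y}" x2="${b.x}" y2="${b.y}" stroke="#999"/>`;
    }));
  const nodes = pop.map(ind => {
    const p = pos.get(ind.id)!;
    return `<circle cx="${p.x}" cy="${p.y}" r="8" fill="${fitnessColour(ind.fitness)}"/>`;
  });
  return `<svg xmlns="http://www.w3.org/2000/svg">${edges.join("")}${nodes.join("")}</svg>`;
}

// Example: two generations, the child inherits from both parents.
console.log(renderLineage([
  { id: "a", generation: 0, fitness: 0.3, parents: [] },
  { id: "b", generation: 0, fitness: 0.6, parents: [] },
  { id: "c", generation: 1, fitness: 0.8, parents: ["a", "b"] },
]));
```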


Bezerianos, A.; Dragicevic, P.; Fekete, J.-D.; Bae, J. & Watson, B.
GeneaQuilts: A system for exploring large genealogies
IEEE Transactions on Visualization and Computer Graphics, 2010, 16, 1073–1081

 

Quality:

This paper was published as a journal paper. There is a website for it that contains source code and additional material. The authors are cited between 600 and 6,000 times.

Content:

Bezerianos et al. propose a new matrix-based visualization technique for large family genealogies and compile a set of user tasks for genealogical data.
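As I understand it, the underlying data model treats the genealogy as a bipartite graph of individuals and nuclear families. The following sketch – invented data and names, not the authors’ code – shows how such an incidence matrix could be derived:

```typescript
// Sketch of the matrix idea: each nuclear family becomes a column, each
// individual a row, and the cell marks whether the individual is a parent
// (P) or a child (C) in that family.

interface Family {
  id: string;
  parents: string[];  // individuals who head this nuclear family
  children: string[]; // individuals born into it
}

type Role = "P" | "C" | ".";

// Build the individuals-by-families incidence matrix.
function incidenceMatrix(people: string[], families: Family[]): Role[][] {
  return people.map(person =>
    families.map(f =>
      f.parents.includes(person) ? "P" :
      f.children.includes(person) ? "C" : "."));
}

const people = ["Ann", "Bob", "Cid", "Dee"];
const families: Family[] = [
  { id: "F1", parents: ["Ann", "Bob"], children: ["Cid"] },
  { id: "F2", parents: ["Cid", "Dee"], children: [] },
];

// Print one row per individual.
const m = incidenceMatrix(people, families);
people.forEach((p, i) => console.log(p.padEnd(4), m[i].join(" ")));
```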


McGuffin, M. J. & Balakrishnan, R.
Interactive visualization of genealogical graphs
Proceedings of the IEEE Symposium on Information Visualization (InfoVis 2005), 2005, 16–23

 

Quality:

The first author is cited about 1,600 times. His website is accessible and shows his work. The paper was published as a conference paper.

Content:

McGuffin and Balakrishnan examine why genealogical graphs are difficult to draw in a node-link manner and, based on these insights, present novel graph representations. They conclude with problems that may still occur when using a node-link representation.


Kim, N. W.; Card, S. K. & Heer, J.
Tracing genealogical data with TimeNets
Proceedings of the International Conference on Advanced Visual Interfaces (AVI), 2010, 241–248

 

Quality:

It is a conference paper. There are only 50 citations of the first author’s work on Google Scholar, and this 2010 article is his first. On the other hand, his first co-author has 35,000 citations and his second co-author 8,500.

Content:

Kim et al. present an approach for showing genealogical relations in a temporal context. They depict each individual’s life as a line and express marriage through the proximity of those lines.
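A tiny sketch of this idea – my own toy version with invented data, not the TimeNets code: each person becomes a horizontal line over their lifetime, and spouses are assigned neighbouring tracks so their lines run close together:

```typescript
// Sketch: map lifetimes to horizontal SVG lines; the track assignment
// (spouses on adjacent tracks) is an invented simplification.
interface Person {
  name: string;
  born: number;
  died: number;
  track: number; // vertical slot; spouses get neighbouring tracks
}

const YEAR0 = 1880, PX_PER_YEAR = 2, TRACK_GAP = 20;

function lifeLine(p: Person): string {
  const x1 = (p.born - YEAR0) * PX_PER_YEAR;
  const x2 = (p.died - YEAR0) * PX_PER_YEAR;
  const y = 20 + p.track * TRACK_GAP;
  return `<line x1="${x1}" y1="${y}" x2="${x2}" y2="${y}" stroke="#333" stroke-width="2"/>`;
}

const people: Person[] = [
  { name: "Ada", born: 1890, died: 1960, track: 0 },
  { name: "Ben", born: 1885, died: 1950, track: 1 }, // Ada's spouse: adjacent track
];

console.log(`<svg xmlns="http://www.w3.org/2000/svg">${people.map(lifeLine).join("")}</svg>`);
```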


Rohrdantz, C.; Hund, M.; Mayer, T.; Wälchli, B. & Keim, D. A.
The World’s Languages Explorer: Visual Analysis of Language Features in Genealogical and Areal Contexts
Computer Graphics Forum, 2012, 31, 935–944

 

Quality:

It is a combined conference/journal paper (Computer Graphics Forum also publishes conference proceedings). Preim and Theisel – professors at the FIN of the OvGU – have written articles for this journal. The authors’ web presence is accessible. Rohrdantz has been cited 500 times since 2009.

Content:

Rohrdantz et al. present a visualization based on a decomposition of a disc into ring segments for a genealogical hierarchy. They apply this technique to a hierarchy of human languages.
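The decomposition they describe is essentially a sunburst-style layout. The following sketch shows how angular spans and rings could be assigned – the toy hierarchy and the proportional-angle rule are my own assumptions, not taken from the paper:

```typescript
// Sketch: each node gets an angular span proportional to its number of
// leaves, and its depth selects the ring.
interface Node {
  name: string;
  children: Node[];
}

function leafCount(n: Node): number {
  return n.children.length === 0
    ? 1
    : n.children.reduce((s, c) => s + leafCount(c), 0);
}

// Recursively assign [start, end) angles (in degrees) and a ring index.
function layout(n: Node, start: number, end: number, depth: number,
                out: { name: string; start: number; end: number; ring: number }[]) {
  out.push({ name: n.name, start, end, ring: depth });
  let a = start;
  for (const c of n.children) {
    const span = (end - start) * (leafCount(c) / leafCount(n));
    layout(c, a, a + span, depth + 1, out);
    a += span;
  }
}

// Toy language hierarchy, just to exercise the layout.
const root: Node = {
  name: "Indo-European",
  children: [
    { name: "Germanic", children: [
      { name: "English", children: [] },
      { name: "German", children: [] },
    ]},
    { name: "Romance", children: [
      { name: "French", children: [] },
    ]},
  ],
};

const segments: { name: string; start: number; end: number; ring: number }[] = [];
layout(root, 0, 360, 0, segments);
segments.forEach(s =>
  console.log(`${s.name}: ring ${s.ring}, ${s.start.toFixed(0)}–${s.end.toFixed(0)}°`));
```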

Punctuation Game

We live in the era of Big Data, with storage and transmission capacity measured not just in terabytes but in petabytes (where peta- denotes a quadrillion, or a thousand trillion). Data collection is constant and even insidious, with every click and every “like” stored somewhere for something. This book reminds us that data is anything but “raw”, that we shouldn’t think of data as a natural resource but as a cultural one that needs to be generated, protected, and interpreted. The book’s essays describe eight episodes in the history of data, from the predigital to the digital. Together they address such issues as the ways that different kinds of data and different domains of inquiry are mutually defining, how data are variously “cooked” in the processes of their collection and use, and conflicts over what can or can’t be “reduced” to data. Contributors discuss the intellectual history of data as a concept, describe early financial modeling and some unusual sources for astronomical data, discover the prehistory of the database in newspaper clippings and index cards, and consider contemporary “dataveillance” of our online habits as well as the complexity of scientific data curation.


During succession, ecosystem development occurs, but in the long-term absence of catastrophic disturbance, a decline phase eventually follows. We studied six long-term chronosequences in Australia, Sweden, Alaska, Hawaii, and New Zealand. For each, the decline phase was associated with a reduction in tree basal area and an increase in the substrate nitrogen-to-phosphorus ratio, indicating increasing phosphorus limitation over time. These changes were often associated with reductions in litter decomposition rates, phosphorus release from litter, and biomass and activity of decomposer microbes. Our findings suggest that the maximal biomass phase reached during succession cannot be maintained in the long-term absence of major disturbance, and that similar patterns of decline occur in forested ecosystems spanning the tropical, temperate, and boreal zones.


Facebook’s Graph API is an API for accessing objects and connections in Facebook’s social graph. To give some idea of the enormity of the social graph underlying Facebook, it was recently announced that Facebook has 901 million users, and the social graph consists of many types beyond just users. Until recently, the Graph API provided data to applications in only a JSON format. In 2011, an effort was undertaken to provide the same data in a semantically enriched RDF format containing Linked Data URIs. This was achieved by implementing a flexible and robust translation of the JSON output to a Turtle output. This paper describes the associated design decisions, the resulting Linked Data for objects in the social graph, and known issues.

summary of “Warp Drive Research Key to Interstellar Travel”

The development of the warp drive for faster-than-light propulsion is known to every eager Star Trek fan: the physicist Zefram Cochrane “invented the warp-drive engine in the year 2063”, despite being hindered by “evil time-traveling aliens”. “It wasn’t easy.” In this way, Cochrane established the basis for the interstellar voyages of the starship Enterprise centuries later.

In the real world, at NASA’s Johnson Space Center in Houston, Harold “Sonny” White investigates the feasibility of building a real warp-drive engine. He has assembled an experiment to create tiny distortions in space-time. This experiment may lead to the development of a system that generates a bubble of warped space-time around a spacecraft. Instead of further increasing the craft’s top speed, this bubble might allow it to “sidestep the laws of physics that prohibit faster-than-light travel”, allowing it to cross the vast interstellar distances in a matter of weeks.

For the author of the article, it is heartening to see that the federal government also invests in projects like Mr. White’s, which are challenging and even considered impossible by some physicists.

A surprising number of scientists, engineers, and amateur space enthusiasts believe in interstellar travel. They share their hopes and hypotheses at academic conferences and have founded organizations that seek to lay the groundwork for an unmanned interstellar mission. Their ardor has grown in recent years as astronomers have detected several Earth-like planets that orbit stars at a habitable distance, stars that are relatively near to our sun.

“The problem is getting there in a reasonable amount of time.” NASA already has a probe in interstellar space: Voyager 1, launched in 1977, left our solar system in 2012. At its current speed of 38,610 miles per hour, it would take 70,000 years to reach any of the nearby stars. “Researchers need to make some serious breakthroughs in spacecraft propulsion to get there faster.”
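A quick back-of-the-envelope check of that figure – the target distance (Proxima Centauri at roughly 4.24 light years) and the constants are my own assumptions, not from the article:

```typescript
// Travel time = distance / speed, using the Voyager 1 speed quoted above.
const MILES_PER_LIGHT_YEAR = 5.879e12;
const SPEED_MPH = 38_610;   // Voyager 1's speed as quoted in the article
const DISTANCE_LY = 4.24;   // Proxima Centauri, approximately (my assumption)

const hours = (DISTANCE_LY * MILES_PER_LIGHT_YEAR) / SPEED_MPH;
const years = hours / (24 * 365.25);
console.log(`~${Math.round(years / 1000)} thousand years`); // prints ~74 thousand
```

The result (~74,000 years) agrees with the article’s 70,000-year figure in order of magnitude.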

In contrast to Mr. White, most interstellar enthusiasts focus their attention on less hypothetical technologies. Icarus Interstellar, for example, is coordinating a study of a mission that would use the energy emitted by the fusion of atomic nuclei. “If this energy could be properly controlled and harnessed, it could accelerate a probe to […] speeds thousands of times faster than Voyager 1. But researchers have been trying to build a fusion power plant for the past fifty years without much success.”

At these enormous speeds, even microscopic interstellar dust causes plenty of damage. “A spacecraft would have to be equipped with heavy shielding”, which in turn needs more energy to accelerate. On the other hand, there is the need to decelerate before reaching the destination: the probe would have to use its engines to slow down. In a nutshell, the spacecraft would need to carry an even heavier load of fuel. The complications of interstellar travel seem endless, their difficulty tremendous.

“The dream of interstellar travel remains stubbornly alive.” Symposia are held and new conferences are hosted. “At these times, when NASA has difficulties to fund its priorities”, interstellar travel seems premature. But advocates argue that interstellar travel is essential to humanity’s long-term survival. As long as humans are confined to Earth, they are at risk of extinction by a planetary catastrophe.

So maybe the fate of humanity lies among the stars, looking something like Star Trek’s United Federation of Planets.

  • word count: 540
  • time taken:  7 hours 10 minutes

I’m not sure about this summary. It seems to me more like a shortened version of the original article.

Homework #2

assignment 1: Read.

Zobel’s chapter 2 has been read (by me – and actually annotated, too).


assignment 2: Find titles.

topic 1: IoT

  • survey of urban networks of heterogeneous end systems

topic 2: string test cases

  • diverse and skew-distributed random string test cases

Summary of (a part of) my bachelor thesis

This covers just about a third of the topics in my bachelor thesis. Nevertheless, it is quite long, so I guess there are too many details. I admit that, being inspired by Schlüko 3, my goal was actually to match it to the form denoted by the titles.


Introduction
My bachelor thesis was written in the context of a simulation system for chemical, thermal, and hydrological processes in solid materials.
In computer-based simulation, time series are used to model time-dependent effects on the simulation model. These effects originate from processes of systems that affect the modeled real-world system but are not modeled themselves.
Time series are generated, e.g., by weather stations, which regularly gather measurements of properties of the local climate such as temperature or humidity, or by gauging stations at rivers, which regularly gather measurements about the river such as the water level or its flow velocity.

These devices, their sensors, or their routines may fail. A station may be out of power, a sensor may be blocked or biased by a leaf, measurement techniques may not work under unintended circumstances like extreme rainfall, or errors in routines may lead to erroneous representations of measurement results. Consequently, certain data in the generated time series can be missing, non-interpretable, or less representative of the effect on the system to be modeled. When such data is used, the model will be less representative than one without these errors, and the simulation results will be less reliable.

In order to support the identification of these problems, the goal of this work was to develop a visualization for time series (using HTML and a certain JavaScript library). It should process such data and present the different problems in different manners, so that the user is able to distinguish them.
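As a rough illustration of the kind of preprocessing this involves – my own sketch, not the thesis code; the thresholds, error categories, and example values are assumed – each sample could be classified so the renderer can draw each problem differently:

```typescript
// Sketch: classify each sample of a time series as missing, implausible,
// or valid, so the visualization can distinguish the problems.
interface Sample {
  time: string;         // ISO timestamp
  value: number | null; // null = sensor delivered nothing
}

type Status = "missing" | "implausible" | "valid";

// Plausible range for the measured property, e.g. air temperature in °C
// (an assumed threshold, chosen per sensor in practice).
const MIN_VALUE = -60, MAX_VALUE = 60;

function classify(s: Sample): Status {
  if (s.value === null || Number.isNaN(s.value)) return "missing";
  if (s.value < MIN_VALUE || s.value > MAX_VALUE) return "implausible";
  return "valid";
}

const series: Sample[] = [
  { time: "2014-01-01T00:00", value: 3.2 },
  { time: "2014-01-01T01:00", value: null },   // station was offline
  { time: "2014-01-01T02:00", value: 999.9 },  // error code left in the data
];

series.forEach(s => console.log(s.time, classify(s)));
```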