Because the size of the PDF exceeds the maximum upload limit of 3 MB, the final paper must be downloaded from Overleaf (please click on "PDF" in the menu bar at the top).
Organization, Balance, Paragraphing and Sentence Structure
Most rules of simple, accurate and efficient writing were followed:
The title “Fighting for breath” is short and catchy but too general. The subtitle, however, is much better: it clearly and specifically states the article’s topic, “the severe threat to health posed by air pollution and the need to take action”, and its author, Dr Mark Porter.
At roughly 700 words, the article is relatively short but long enough to contain a comprehensive overview of the topic. The text is fairly balanced, except for the opening paragraphs, which seem short and rushed, throwing the reader in at the deep end without a real introduction.
The first paragraph is of great value to the reader, using a historical example to show the severe consequences of air pollution. But the second paragraph slows down the flow, burdening the reader with dull facts on traffic and air quality in the United Kingdom that serve no real purpose. I think this paragraph is unnecessary and poorly executed. It fails to get its point across because readers cannot grasp the sheer numbers of vehicles on the roads, and the vague description of the decrease in some air pollutants does not contribute either. A different introduction to the current problem of air pollution might have been better suited.
Overall, there is mostly just one idea per sentence and one topic per section. The organization of the text is simple and straightforward, and the paragraphs are not too long. Sentences are short and simple, with two exceptions: the first sentence of the third paragraph, about the influence of weather on ozone levels, has over 60 words, and the last two sentences of the second-to-last paragraph, on indoor toxicity levels, are also bloated, each containing over 40 words. Each of these sentences could easily have been split into two, further improving readability.
Tone, Stylistic Devices and Choice of Words
The text shows a broad variety of rhetorical devices and word choices. Some could be considered unfit for scientific writing but might be justified in a journalistic medium.
Several appropriate examples and analogies are used to illustrate the arguments. The great fog of London is given as an example of the deadly repercussions of air pollution in big cities. The statement that ozone might drift away from city centers is substantiated by an anecdote from the author about peak levels near his home. Toxicity levels in household dust are compared to chemicals that would have to be labeled as hazardous waste if they were transported by truck.
A sharp contrast between colloquial passages, including jargon and several exclamations, and more professional, scientific, factual language is conspicuous. The author uses casual phrases like “thick as pea soup”, “nastier” and “in a […] village near my home!”, which dominate the tone of the article. With the exception of the endless reproduction of dull facts and figures, which bombards the reader with raw information, and some exotic word choices (“impetus”, “noxious” and “tetrachloroethylene”), the author writes for a general audience without deep scientific knowledge. “Pollution”, “pollutant” and “polluted” are used extensively throughout the article, but their use is varied enough not to become repetitive or annoying.
The majority of sentences are phrased in the active voice. Two acronyms, C4 and UK, are never explained but can easily be deciphered from context. No other abbreviations are used.
Argumentation, References and Scientific Nature
Many statements are vague or unsupported, for example: “Total levels of the nastier types of vehicle emissions have decreased […] but are still too high for comfort”. The article’s topic is mainly discussed from the author’s point of view; he also tries to appeal to the reader’s emotions by employing casual language and personal examples. Opposing arguments are missing, making the discussion one-sided, but there is still critical evaluation of some facts, e.g. regarding asthma and air quality in different countries.
There is only one explicit reference. Another BBC show is named as the source of the historical facts in the introduction, but probably as a means of cross-promotion rather than for scientific reasons. An unspecified study by the United States Environmental Protection Agency is also used. Other sources are merely hinted at but never accurately referenced, e.g. an unidentified ground-level ozone survey. Even the dreaded, nebulous phrase “recent studies suggest” appears in the article, together with other statements for which no source or supporting evidence is ever given. There are no quotations from other works in this article.
The verdict of this review could be summarized as follows: “Fighting for breath” is a typical news article about a scientific topic presented in an unscientific form in an attempt to appeal to the masses. Personally, I have read better popular science literature.
Regarding hypotheses and questions
- What phenomena or properties are being investigated?
The properties of a new graph-based document similarity measure are investigated.
- Why are they of interest?
This can be useful for fast document retrieval and recommendation.
- Has the aim of the research been articulated? What are the specific hypotheses and research questions? Are these elements convincingly connected to each other?
Yes, the paper clearly states three hypotheses:
- The new similarity measure provides a significantly higher correlation with human notions of document similarity than comparable measures.
- This also holds true for short documents with few annotations.
- Document similarity can be calculated efficiently compared to other graph-traversal based approaches.
These hypotheses focus on the effectiveness and efficiency of the proposed method and are therefore convincingly connected to each other.
- To what extent is the work innovative?
It introduces a new method of calculating document similarity for faster and better document retrieval and recommendation applications.
- Is this reflected in the claims?
Yes, see the hypotheses listed under question 3.
- What would disprove the hypothesis?
- A comparable similarity measure provides a correlation with human notions of document similarity that is equal to or higher than that of the proposed method.
- The proposed method fails for short documents with few annotations in the same way as comparable measures.
- A graph-traversal-based approach calculates document similarity faster than the proposed method.
- What are the underlying assumptions? Are they sensible?
The underlying assumptions are that graph-based measures are not as efficient and that the proposed method is better than others at finding similar documents. Both of these assumptions are sensible in my eyes.
- Has the work been critically questioned? Have you satisfied yourself that it is sound science?
This is a tough question to answer. The authors probably questioned their work critically or may have had feedback from their colleagues on whether or not it is sound science. But there could still be mistakes that have been overlooked. Only if their work is widely accepted by the scientific community and stands the test of time can it be considered sound science.
Regarding evidence and measurement
- What forms of evidence are to be used? If it is a model or a simulation, what demonstrates that the results have practical validity?
The evidence used to prove the hypotheses was an experiment with two datasets, a standard benchmark and another with short documents.
- How is the evidence to be measured? Are the chosen methods of measurement objective, appropriate, and reasonable?
They used standard metrics and benchmark datasets that were also used to evaluate the reference algorithms, making the chosen methods objective, appropriate and reasonable.
- What are the qualitative aims, and what makes the quantitative measures you have chosen appropriate to those aims?
The qualitative aim is the improvement of document similarity measures. This is tested by quantifying the speed and the correlation with human answers.
- What compromises or simplifications are inherent in your choice of measure?
The selection of test documents must be a representative sample of the entire population of documents in existence.
- Will the outcomes be predictive?
I lack the necessary amount of experience and understanding of this topic to make any reasonable predictions on the outcome of this research.
- What is the argument that will link the evidence to the hypothesis?
If there is no other measure currently available that outperforms the proposed method, then it can be assumed that the hypotheses are true.
- To what extent will positive results persuasively confirm the hypothesis?
Positive results will confirm the hypotheses only to a limited extent, since any improvement, even a marginal one, counts as a positive outcome.
- Will negative results disprove it?
Yes, see question 6.
- What are the likely weaknesses of or limitations to your approach?
I am not proficient enough to identify any likely weaknesses or limitations that the authors of the paper may have overlooked.
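The evaluation methodology discussed above, correlating a similarity measure’s scores with human judgments, can be sketched in a few lines. This is my own illustrative example with hypothetical toy data, not code or data from the paper under review; Spearman rank correlation is merely one standard choice for such benchmarks.

```python
def rankdata(values):
    """Assign average 1-based ranks to values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend the current tie group
        avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = rankdata(xs), rankdata(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical document pairs: scores from some similarity measure
# vs. averaged human ratings for the same pairs (toy numbers).
measure_scores = [0.92, 0.15, 0.40, 0.70, 0.05]
human_ratings  = [4.5,  1.0,  2.5,  3.8,  0.5]
print(spearman(measure_scores, human_ratings))  # → 1.0 (same rank order)
```

A measure whose scores rank the pairs exactly as humans do reaches 1.0; hypothesis 1 would then amount to showing a significantly higher coefficient than the reference measures on the benchmark datasets.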
A 3D scanner is a device that analyses a real-world object to collect data on its shape and appearance. This data can then be used to construct digital three-dimensional models, which can in turn be turned into physical models by 3D printing. Scanning humans and animals in 3D is a special application of this technology with many relevant use cases in science, sports, medicine, fashion, arts and entertainment.
Many different technologies can be used to build 3D scanners, each with its own limitations, advantages and costs. For example, optical technologies encounter many difficulties with shiny, mirroring or transparent objects, like glasses and jewelry, and with fuzzy surfaces, like fur or hair. These problems can be solved by applying suitable post-processing algorithms and methods.
In this paper, a comprehensive overview of techniques related to the pipeline from 3D scanning to printing is provided. A comparison of the latest 3D sensors and 3D printers is drawn and several sensing, post processing, and printing techniques, available from both commercial deployments and published research, are introduced. Possible tradeoffs, current progress, future research trends, and potential risks of 3D technologies are also discussed.
(first draft, 189 words, written in 45 minutes)
- We live in the era of Big Data, with storage and transmission capacity measured not just in terabytes but in petabytes (where peta- denotes a quadrillion, or a thousand trillion). Data collection is constant and even insidious, with every click and every “like” stored somewhere for something. This book reminds us that data is anything but “raw” – that we shouldn’t think of data as a natural resource but as a cultural one that needs to be generated, protected and interpreted. The book’s essays describe eight episodes in the history of data, from the predigital to the digital. Together they address such issues as the ways that different kinds of data and different domains of inquiry are mutually defining; how data are variously “cooked” in the processes of their collection and use; and conflicts over what can or can’t be “reduced” to data. Contributors discuss the intellectual history of data as a concept, describe early financial modeling and some unusual sources for astronomical data, discover the prehistory of the database in newspaper clippings and index cards, and consider contemporary “dataveillance” of our online habits as well as the complexity of scientific data curation.
- During succession, ecosystem development occurs, but in the long-term absence of catastrophic disturbance a decline phase eventually follows. We studied six long-term chronosequences in Australia, Sweden, Alaska, Hawaii and New Zealand. For each, the decline phase was associated with a reduction in tree basal area and an increase in the substrate nitrogen-to-phosphorus ratio, indicating increasing phosphorus limitation over time. These changes were often associated with reductions in litter decomposition rates, phosphorus release from litter, and biomass and activity of decomposer microbes. Our findings suggest that the maximal biomass phase reached during succession cannot be maintained in the long-term absence of major disturbance, and that similar patterns of decline occur in forested ecosystems spanning the tropical, temperate and boreal zones.
- Facebook’s Graph API is an API for accessing objects and connections in Facebook’s social graph. To give some idea of the enormity of the social graph underlying Facebook, it was recently announced that Facebook has 901 million users, and the social graph consists of many types beyond just users. Until recently, the Graph API provided data to applications only in a JSON format. In 2011 an effort was undertaken to provide the same data in a semantically enriched RDF format containing Linked Data URIs. This was achieved by implementing a flexible and robust translation of the JSON output to a Turtle output. This paper describes the associated design decisions, the resulting Linked Data for objects in the social graph, and known issues.
What would you do if you didn’t have to attend this class right now?
If I didn’t have to attend this class right now, I would probably be at home preparing for my escape. I would make plans and plot strategies. I would think about obstacles I might encounter. I have to be prepared for any eventuality, because time is ticking: I only have one hour to do it. I can only try it once and if I fail, I won’t get a second chance. I have to escape this prison!
Maybe you think I am delusional and out of my mind. But what would you do if you were chained to the walls of a mad medical doctor’s laboratory? You would want to be prepared. Still don’t know what I am talking about? I am meeting some of my friends tonight to go to the escape room in Magdeburg.
An escape room is a physical adventure game in which players are locked in a room and have to use elements of the room to solve a series of puzzles and escape within a set time limit. The games are set in a variety of fictional locations, such as prison cells, dungeons and space stations, and are popular as team-building exercises. It is really fun. You should try it sometime if you haven’t had the chance yet.
I chose the topic of 3D scanning. I am personally most interested in full-body scans of people. The first two papers presented here give an overview of different techniques and implementations of such scanners. The other three papers are specific solutions to that problem. I want to compare the different approaches regarding their performance, hardware requirements and costs.
Longyu Zhang, Haiwei Dong, and Abdulmotaleb El Saddik. 2015. “From 3D Sensing to Printing: A Survey”. ACM Trans. Multimedia Comput. Commun. Appl. 12, 2, Article 27 (October 2015), 23 pages. DOI=http://dx.doi.org/10.1145/2818710
Three-dimensional (3D) sensing and printing technologies have reshaped our world in recent years. In this article, a comprehensive overview of techniques related to the pipeline from 3D sensing to printing is provided. We compare the latest 3D sensors and 3D printers and introduce several sensing, postprocessing, and printing techniques available from both commercial deployments and published research. In addition, we demonstrate several devices, software, and experimental results of our related projects to further elaborate details of this process. A case study is conducted to further illustrate the possible tradeoffs during the process of this pipeline. Current progress, future research trends, and potential risks of 3D technologies are also discussed.
 H.A.M. Daanen, F.B. Ter Haar, „3D whole body scanners revisited“, Displays, Volume 34, Issue 4, October 2013, Pages 270-275, ISSN 0141-9382, http://dx.doi.org/10.1016/j.displa.2013.08.011.
An overview of whole body scanners in 1998 (H.A.M. Daanen, G.J. Van De Water. Whole body scanners, Displays 19 (1998) 111–120), shortly after they emerged on the market, revealed that the systems were bulky, slow, expensive and low in resolution. This update shows that new developments in sensing and processing technology, in particular in structured light scanners, have produced a new generation of easy-to-transport, fast, inexpensive, accurate and high-resolution scanners. The systems are now moving to the consumer market, with high impact for the garment industry. Since internet sales of garments are rapidly increasing, information on body dimensions becomes essential to guarantee a good fit, and 3D scanners are expected to play a major role.
 Straub, J., & Kerlin, S. (2014). Development of a Large, Low-Cost, Instant 3D Scanner. Technologies, 2(2), 76–95. MDPI AG. http://dx.doi.org/10.3390/technologies2020076 (http://www.mdpi.com/2227-7080/2/2/76)
Three-dimensional scanning serves a large variety of uses. It can be utilized to generate objects for, after possible modification, 3D printing. It can facilitate reverse engineering, replication of artifacts to allow interaction without risking cultural heirlooms and the creation of replacement bespoke parts. The technology can also be used to capture imagery for creating holograms, it can support applications requiring human body imaging (e.g., medical, sports performance, garment creation, security) and it can be used to import real-world objects into computer games and other simulations. This paper presents the design of a 3D scanner that was designed and constructed at the University of North Dakota to create 3D models for printing and numerous other uses. It discusses multiple prospective uses for the unit and technology. It also provides an overview of future directions of the project, such as 3D video capture.
 CopyMe3D: Scanning and Printing Persons in 3D (J. Sturm, E. Bylow, F. Kahl, D. Cremers), In German Conference on Pattern Recognition (GCPR), 2013.
In this paper, we describe a novel approach to create 3D miniatures of persons using a Kinect sensor and a 3D color printer. To achieve this, we acquire color and depth images while the person is rotating on a swivel chair. We represent the model with a signed distance function which is updated and visualized as the images are captured for immediate feedback. Our approach automatically fills small holes that stem from self-occlusions. To optimize the model for 3D printing, we extract a watertight but hollow shell to minimize the production costs. In extensive experiments, we evaluate the quality of the obtained models as a function of the rotation speed, the non-rigid deformations of a person during recording, the camera pose, and the resulting self-occlusions. Finally, we present a large number of reconstructions and fabricated figures to demonstrate the validity of our approach.
 Weiss, A., Hirshberg, D., & Black, M. J. (2013). Home 3D Body Scans from a Single Kinect. In Consumer Depth Cameras for Computer Vision (pp. 99-117). Springer London.
The 3D shape of the human body is useful for applications in fitness, games, and apparel. Accurate body scanners, however, are expensive, limiting the availability of 3D body models. Although there has been a great deal of interest recently in the use of active depth sensing cameras, such as the Microsoft Kinect, for human pose tracking, little has been said about the related problem of human shape estimation. We present a method for human shape reconstruction from noisy monocular image and range data using a single inexpensive commodity sensor. The approach combines low-resolution image silhouettes with coarse range data to estimate a parametric model of the body. Accurate 3D shape estimates are obtained by combining multiple monocular views of a person moving in front of the sensor. To cope with varying body pose, we use a SCAPE body model which factors 3D body shape and pose variations. This enables the estimation of a single consistent shape, while allowing pose to vary. Additionally, we describe a novel method to minimize the distance between the projected 3D body contour and the image silhouette that uses analytic derivatives of the objective function. We use a simple method to estimate standard body measurements from the recovered SCAPE model and show that the accuracy of our method is competitive with commercial body scanning systems costing orders of magnitude more.
Additional paper in German:
Zagel, C., Süßmuth, J., & Bodendorf, F. (2013). Automatische Rekonstruktion eines 3D Körpermodells aus Kinect Sensordaten. In Wirtschaftsinformatik (p. 35).
The concept of warp-drive engines that allow faster-than-light space travel is well known to many fans of science fiction literature. But this technology might soon get one step closer to becoming science fact: Harold White, head of the advanced propulsion program at NASA’s Johnson Space Center in Houston, was granted $50,000 for his still-controversial research on the topic. It is a relatively small bet by the underfunded agency, but one that could pay off in the future by enabling it to undertake its first interstellar mission.
White has assembled a tabletop experiment designed to create tiny distortions in spacetime. If his research proves successful, it may eventually lead to the development of a system that could generate a bubble of warped spacetime around a spacecraft. Instead of increasing the craft’s speed, the warp drive would distort the spacetime along its path, allowing it to sidestep the laws of physics that prohibit faster-than-light travel.
Such a spacecraft could cross the vast distances between stars in a matter of weeks. Currently, NASA’s Voyager 1, launched in 1977 to investigate Jupiter, Saturn and their moons, is the only man-made object that has ever left our solar system and entered interstellar space. It has traveled almost 12 billion miles since its launch and is now zooming away from us at 38,610 miles per hour. But even at that blistering speed it would take at least 70,000 years to reach any of the nearby stars that might harbor habitable planets: in recent years astronomers have detected a slew of Earthlike planets orbiting stars that are relatively near our sun.
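The 70,000-year figure is easy to check with back-of-the-envelope arithmetic. The sketch below is my own calculation, assuming Proxima Centauri (about 4.25 light-years away) as the nearest candidate destination; only the 38,610 mph speed comes from the text above.

```python
# Rough travel-time estimate for Voyager 1 reaching the nearest star
# at its current speed (my own arithmetic, not from the article).

MILES_PER_LIGHT_YEAR = 5.8786e12  # one light-year in miles
DISTANCE_LY = 4.25                # approx. distance to Proxima Centauri (assumption)
SPEED_MPH = 38_610                # Voyager 1's speed as quoted in the text

distance_miles = DISTANCE_LY * MILES_PER_LIGHT_YEAR
hours = distance_miles / SPEED_MPH
years = hours / (24 * 365.25)
print(round(years, -3))  # → 74000.0
```

The result, roughly 74,000 years, is consistent with the article’s “at least 70,000 years”.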
Advocates argue that exploring other star systems is essential to humanity’s long-term survival. As long as the human race is confined to Earth we’re at high risk of extinction from a planetary catastrophe—a nuclear war, a pandemic, an asteroid impact etc. The only other world in our solar system that comes even close to being habitable is Mars, and it would take hundreds of years of climate engineering to make the Red Planet livable for humans.
A surprising number of scientists, engineers and amateur space enthusiasts have shared their hopes and hypotheses for an interstellar future at academic conferences and founded organizations that seek to lay the groundwork for an unmanned interstellar mission that could be launched by the end of the century.
But many have focused their attention on technologies that are less hypothetical than a warp drive: some suggest using fusion power to propel a spacecraft thousands of times faster than Voyager 1. But the technology hasn’t proved itself on Earth yet, and it’s certainly not ready to be installed in a spacecraft.
Another big problem is interstellar dust. Even microscopic particles will cause plenty of damage to a probe travelling at such tremendous speed. The shielding required, and the energy needed for deceleration before reaching the destination, will increase the amount of fuel that an interstellar space probe must carry.
The complications seem as endless as space itself. The tremendous difficulty of interstellar flight may help explain the famous paradox first noted by physicist Enrico Fermi in 1950: if intelligent life is common in the universe, where are all the aliens? Perhaps extraterrestrials have never visited Earth because it’s just too hard to get here.
1st abstract: A Survey of Enabling Technologies and Concepts for an Urban Internet of Things and Best Practice Solutions from a Smart City Pilot Project.
2nd abstract: A Black-Box String Test Case Generator with Improvements in String Diversity and String Length Distribution.
Possible topics for my student project:
- My favorite: Currently, I am very eager to expand my existing knowledge of 3D scanning technologies. This is the topic I would probably feel most comfortable writing about because I already know a lot about it. On the other hand, it might not pose a very interesting challenge and could become a little boring. I doubt that will really happen, though, since this is my favorite topic at the moment, and I consider it a serious candidate for my master’s thesis.
- The challenge: I have a very basic understanding of machine learning but lack deeper knowledge of neural networks and deep learning. I hope that writing about this topic will give me better insight into this area of research and equip me with new tools that could come in handy in the future. But, like every challenge, this option carries a risk of failure. It would also be the most time-consuming choice.
- The practical solution: Right now I am working on a project in which medical ultrasound images are segmented and visualized. Choosing this topic would save me a lot of time, allowing me to focus better on both research and writing. But it is also the topic I currently find the least appealing of the three, and I am not sure whether choosing it would really benefit my ultrasound project.