Recap session 5 – Scientific English

Here’s a quick recap of yesterday’s session. First of all, we tried to clarify some open questions regarding the student project. I updated the project page accordingly.

In case you haven’t noticed yet, I also linked an example paper there to give you an idea of the structure of such a paper. Note, however, that the example paper is from the bachelor’s course and has a smaller scope. Your paper should have a larger scope!

If you would like early feedback or reassurance that you are on the right track, feel free to send us (that is, your tutor and/or me) your zeroth or first draft.

Yesterday’s writing prompt was “What would you do now if you didn’t have to attend this class?” Again, thanks to the brave among you who read their little stories aloud 🙂

We discussed aspects of the introduction to Skern’s “Writing Scientific English”, and I showed you some slides and a nice comic to sum things up.

Did you know that Shakespeare contributed fundamentally to the evolution of the English language? There is a nice article on Grammarly about this.

For the remainder of the session, we read a paper and analyzed it for characteristics of good scientific English (no link to the paper, since it appeared in a bogus journal).

Homework:
  1. (Re-)Read the sections in Zobel’s chapter 3 on reviewing (p. 30ff) and chapter 13 on “Editing”. We will discuss these chapters in class next time.
  2. Punctuation Game – Put in the missing punctuation marks (, ; –)
    • We live in the era of Big Data with storage and transmission capacity measured not just in terabytes but in petabytes (where peta- denotes a quadrillion or a thousand trillion). Data collection is constant and even insidious with every click and every “like” stored somewhere for something. This book reminds us that data is anything but “raw” that we shouldn’t think of data as a natural resource but as a cultural one that needs to be generated protected and interpreted. The book’s essays describe eight episodes in the history of data from the predigital to the digital. Together they address such issues as the ways that different kinds of data and different domains of inquiry are mutually defining how data are variously “cooked” in the processes of their collection and use and conflicts over what can or can’t be “reduced” to data. Contributors discuss the intellectual history of data as a concept describe early financial modeling and some unusual sources for astronomical data discover the prehistory of the database in newspaper clippings and index cards and consider contemporary “dataveillance” of our online habits as well as the complexity of scientific data curation.
    • During succession ecosystem development occurs but in the long term absence of catastrophic disturbance a decline phase eventually follows. We studied six long term chronosequences in Australia Sweden Alaska Hawaii and New Zealand for each the decline phase was associated with a reduction in tree basal area and an increase in the substrate nitrogen to phosphorus ratio indicating increasing phosphorus limitation over time. These changes were often associated with reductions in litter decomposition rates phosphorus release from litter and biomass and activity of decomposer microbes. Our findings suggest that the maximal biomass phase reached during succession cannot be maintained in the long term absence of major disturbance and that similar patterns of decline occur in forested ecosystems spanning the tropical temperate and boreal zones.
    • Facebook’s Graph API is an API for accessing objects and connections in Facebook’s social graph. To give some idea of the enormity of the social graph underlying Facebook it was recently announced that Facebook has 901 million users and the social graph consists of many types beyond just users. Until recently the Graph API provided data to applications in only a JSON format. In 2011 an effort was undertaken to provide the same data in a semantically enriched RDF format containing Linked Data URIs. This was achieved by implementing a flexible and robust translation of the JSON output to a Turtle output. This paper describes the associated design decisions the resulting Linked Data for objects in the social graph and known issues.