Homework: Algorithms

Choose a simple algorithm and a standard description of it, such as linear search in a sorted array. Rewrite the algorithm in prosecode. Repeat the exercise with a more interesting algorithm, such as heapsort. Now choose an algorithm with an asymptotic cost analysis. Rewrite the algorithm as literate code, incorporating the important elements of the analysis into the algorithm’s description.

Linear Search as Prosecode
LinearSearch(x,l) searches for the position of the item x in an ordered list l. For this purpose, every item in the list is compared with x. If an item equals x, its position is returned. If the end of the list is reached without a match, the algorithm returns an invalid position.

1. (Initialize) Set the starting position p ← 0.
2. (Search) For each position p in l:
(a) If the item at position p equals the searched element x, return p and stop the search.
(b) Otherwise continue with the next position.
3. (Return) If x was not found in the list, return an invalid position Ω.
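The prosecode above can be sketched in Python (a minimal sketch; representing the invalid position Ω as -1 is my assumption, not part of the prosecode):

```python
def linear_search(x, l):
    """Return the position of x in the ordered list l, or -1 (the invalid position Ω)."""
    for p, item in enumerate(l):  # (Search) step through every position
        if item == x:             # (a) match found: return its position
            return p
    return -1                     # (Return) x was not found in the list
```
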

Heapsort as Prosecode
HeapSort(l) sorts the elements in a list l.
First, the list is turned into a max-heap. A max-heap is a complete binary tree with the additional property that each child node is smaller than or equal to its parent node. This means that the largest value is always at the root of the tree.
A sorted array is created by repeatedly removing the value from the root of the heap and inserting it into the array. After the removal of the root, the heap needs to be updated.

1. (Initialize) Turn the list l into a max-heap.
2. (Sort) The largest value of the heap is copied to a sorted array. Afterwards, the heap has to be updated so that the next-largest value becomes the new root. This is repeated until the heap is empty.
(a) Take the root r and add it to the array a.
(b) Update the remaining heap.
3. (Return) Once the heap is empty, return a.
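The steps above can be sketched in Python (a minimal sketch; the helper name sift_down is my label for the "update the heap" operation, not something from the prosecode):

```python
def heap_sort(l):
    """Sort the elements of l: build a max-heap, then repeatedly extract the root."""

    def sift_down(heap, root, end):
        # Restore the max-heap property for the subtree rooted at `root`,
        # considering only positions up to `end`.
        while True:
            child = 2 * root + 1
            if child > end:
                return
            if child + 1 <= end and heap[child] < heap[child + 1]:
                child += 1  # pick the larger of the two children
            if heap[root] >= heap[child]:
                return
            heap[root], heap[child] = heap[child], heap[root]
            root = child

    heap = list(l)
    n = len(heap)
    # 1. (Initialize) Turn the list into a max-heap.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(heap, i, n - 1)
    # 2. (Sort) Repeatedly take the root (largest value) and update the heap.
    a = []
    for end in range(n - 1, -1, -1):
        a.append(heap[0])               # (a) take the root and add it to a
        heap[0] = heap[end]
        heap.pop()
        if end > 0:
            sift_down(heap, 0, end - 1)  # (b) update the remaining heap
    # 3. (Return) a holds the values in descending order; reverse it.
    return a[::-1]
```
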

BogoSort as Literate Code
BogoSort(l) sorts the elements of a list l.

The major steps of the algorithm are as follows:
1. Generate a permutation of the items of the list.
2. Check if the elements in the permutation are ordered.
3. Return the list or repeat.

We now examine these steps in detail.
1. Generate a permutation of the items of the list.
Basically, we want to shuffle the order of the items in the list at random.
This step is important, as it gives the algorithm a best-case performance of O(n): in the best case, the first permutation we generate is already sorted, and the single pass that checks it takes linear time.

2. Check if the elements in the permutation are ordered.
Each item i in the list l is compared to the next item n. If i is smaller than or equal to n in every case, the list is sorted.

3. Return the list or repeat.
If the second step yields a positive result, the sorted list l is returned. Otherwise, we start again at the first step.
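The three steps can be sketched in Python (a minimal sketch; the helper name is_sorted is my own, not part of the description above):

```python
import random


def is_sorted(l):
    # Step 2: compare each item with the next one.
    return all(l[i] <= l[i + 1] for i in range(len(l) - 1))


def bogo_sort(l):
    """Shuffle the list at random until a sorted permutation appears."""
    l = list(l)
    while not is_sorted(l):   # Step 3: repeat if not ordered
        random.shuffle(l)     # Step 1: generate a random permutation
    return l                  # Step 3: return the sorted list
```
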

Cost analysis
As mentioned above, this algorithm has a great best-case performance of O(n). The average case is far worse, because the shuffle-and-check cycle typically has to be repeated many times; the average-case performance is O((n+1)!). In the worst case, the algorithm never terminates, since there is no upper bound on the number of shuffles. In practice, this should only happen if a bad random number generator is used.


Nearly forgot about this homework.

Another Author Post

This time I won’t focus on a single author, because I have three left that I definitely want to recommend. One of them is another guy who just posts good stuff on the internet, and the other two could be called classics.

Let’s start with the internet guy. His nickname is qntm, but in the comments he is mostly called Sam. He has published a lot of short stories on his page, along with a few longer ones. His stories range from sci-fi and time travel to fantasy and magic, and are sometimes strange and weird, but in a good way. His stories are grounded in the laws of physics; sometimes he invents his own systems of rules and tries to write a coherent, logical story within them. Both Ra and Fine Structure, the longest stories Sam has ever written, are like that.
Ra is a strange mixture between Magic and Sci-Fi. Magic was discovered in the 1970s and has developed into a new field of engineering. It is taught at universities and is researched using the scientific method. The protagonist is a promising student of applied magic and has the goal of reaching space without the help of any machines. The story can be found here.

But maybe you want to try a few short stories first? How about sam513, or The Last-But-One Question (weird and cool), or I don’t know, Timmy, being God is a big responsibility (about hyper-computation, therefore weird)? How about a medium-sized one like The Four-And-A-Halfth Planet (time travel) instead? Really, you should check out some of those shorter stories, they are great! 🙂

On to the classics:

H.P. Lovecraft mostly crafted a special kind of horror. Not the an-axe-murderer-is-hunting-you kind, but horror on a far larger, albeit somewhat concealed, scale. A prime example would be The Call of Cthulhu, where the protagonist slowly uncovers knowledge about mighty beings through books, notes and the study of strange cults around the world. Those beings are called the Old Ones and pose a great threat to mankind, should they ever awaken.
The Music of Erich Zann is a bit shorter, but goes down a similar route. A bit more insanity and fewer Old Ones in this one, I think.
All of the works by Lovecraft I have read so far are very dark. I really like his language. His stories give me a creepy feeling, like there is something invisible and dangerous looming right next to me. I love that!

I have only read two books (1984 and Animal Farm) by Orwell, but those were absolutely great. Both touch on political topics, which I really like.
1984 describes the most depressing dystopia I have ever had the displeasure to think about. But as you are already aware, I am drawn to dark and dismal stories, so that’s right up my alley.

I think this will be the last post about authors that I write, but I think I covered the most important ones, more or less in the right order. The only other one I should have written about is Lewis Carroll. You should check out Alice’s Adventures in Wonderland if you haven’t read it yet. 🙂

Analysis of “Fighting for breath”

Read the text “Fighting for breath” in Tim Skern’s “Writing Scientific English” (p. 50f) and analyze its adherence to the aspects of good style.

The text “Fighting for breath” was written by Dr. Mark Porter and published in the BBC Radio Times in the year 1999. The author discusses the influence of air pollution on health problems.

We learned about good style through guidelines that were sometimes specific to scientific writing. While the given text seems to be a newspaper article, many of those rules apply here, too.
Still, I left out some of the negative criticism. For instance, I would point out the lack of citations in a scientific paper, but that does not seem appropriate in this case.

There is not much to criticize about this text beyond some minor flaws. For example, the title does not clearly state the topic and tries to be catchy instead. Furthermore, there are some parts of sentences that could be removed.
Additionally, some words in the text were set in italics, which was irritating, because it placed emphasis on words that didn’t need it. However, I am not certain whether this use of italics was intended by the original author or added later by Tim Skern.

The text showcases many properties of good style. In my opinion, the variety in sentence and paragraph length, as well as in sentence structure, was appropriate. The active voice and the correct tenses were employed. Moreover, jargon was not used.

All in all, the author adheres to the aspects of good style.

Writing prompt #7

Use the following words in your story:
Kittens, a half-eaten pizza, telepathy, magic, a radio call-in show, salt crackers, a sixpack of beer, existential angst

What a great party that was, yesterday. I met so many strange and interesting characters!

Man, there was that one guy, who believed in magic. He really tried to use telepathy on my kitten plush toy. He may have had a beer too much. Or ten.

Then there was this woman who works as a moderator on a late-night radio call-in show. She constantly has to deal with people with existential angst, weird fetishes, paranoia, or a combination of those things. It sounded pretty interesting, but I don’t envy her.

Time for a left-over breakfast. Salt crackers, a half-eaten pizza and a six-pack of beer are a good meal to start the day, right? Nah, not really, but I don’t have anything else in the house.

Zobel’s Checklist

Regarding hypotheses and questions
What phenomena or properties are being investigated? Why are they of interest?

The paper proposes a new approach to the search for related and similar documents, which can be useful for document retrieval and recommendation.
Their approach is graph-based, which usually goes hand in hand with expensive graph operations. The authors claim to circumvent this by applying the needed knowledge from the graph to the documents in a pre-processing step.

Has the aim of the research been articulated? What are the specific hypotheses and research questions? Are these elements convincingly connected to each other?

The authors want to show three things:
Their approach provides higher correlation with human notions of document similarity than comparable measures.
This holds true for short documents with few annotations.
The calculation of document similarity is more efficient, compared to graph-traversal-based approaches.

To what extent is the work innovative? Is this reflected in the claims?

The work provides a new and better approach to the assessment of relatedness of documents.
Yes, this is reflected in the claims.

What would disprove the hypothesis? Does it have any improbable consequences?

Their hypotheses would be disproven by experimental results that show lower correlation with human notions of document similarity than comparable measures (for normal or short documents), or a less efficient calculation of document similarity compared to graph-traversal-based approaches.

What are the underlying assumptions? Are they sensible?

They assume that graph-based methods are not fast enough, and that their approach is better than other approaches at finding similar documents. Both assumptions are sensible.

Has the work been critically questioned? Have you satisfied yourself that it is sound science?

I think so.
Regarding evidence and measurement
What forms of evidence are to be used? If it is a model or a simulation, what
demonstrates that the results have practical validity?

An experiment using the standard benchmark for multiple sentence document similarity.

How is the evidence to be measured? Are the chosen methods of measurement objective, appropriate, and reasonable?

They use the Pearson and Spearman correlations and their harmonic mean, as well as a quality ranking using the Normalized Discounted Cumulative Gain.
The authors state that these metrics are used in related work as well. I can’t say for sure whether they are objective and appropriate, but they allow comparison to other work, which seems reasonable.

What are the qualitative aims, and what makes the quantitative measures you have chosen appropriate to those aims?

I can’t say if they are appropriate, because I don’t know anything about those measures.

What compromises or simplifications are inherent in your choice of measure?

Because they only want to measure how well relevant documents are discovered, the quality evaluation is applied only to the top-ranked elements their approach returns.
Additionally, they use a benchmark, which may or may not replicate real data.

Will the outcomes be predictive?

I’m not really sure what this question aims at.

What is the argument that will link the evidence to the hypothesis?

Standard measures and standard benchmarks ensure that the results can at least partially confirm or disprove the hypotheses.

To what extent will positive results persuasively confirm the hypothesis? Will negative results disprove it?

As only a benchmark and not real data is used in the experiment, the results cannot totally prove or disprove the hypotheses. Nevertheless, the results can give an indication towards one or the other.

What are the likely weaknesses of or limitations to your approach?

The pre-processing step has to be carried out first, which takes additional time.

Writing prompt #6

Imagine you are someone’s shadow for a day.

A friend of mine told me that her boyfriend has been acting weird lately. She asked me to keep him under surveillance for today.
Great, now I have to follow him without being seen. Why didn’t I talk her out of this idea? Her boyfriend is a good guy, I’m sure he isn’t up to anything bad.
This whole situation is ridiculous.
He seems to have arrived at his destination. A jewelry store. Huh. He is examining expensive looking rings.

Shit, he is coming out of the store! I’ll just turn away, look at the exhibits in the window and hope he doesn’t recognize me.
Phew, it worked somehow. That was close.
Where is he heading now? Seems to be the same route he took here. Is he just going home?

Oh god. He is going to propose to her, isn’t he? He told me a few months ago that he plans to do that soonish.
Well, fuck. What do I tell her? I can’t spoil the surprise!
I should have just talked her out of that stupid idea.


I’m not really happy with the ending and a few other bits. It’s still far better than what I wrote for prompt #5, which I’m probably not going to upload.


The 1990s were marked by the invention of quantum algorithms, like the one by P. W. Shor, which is used for the factorization of integers, or the algorithm for fast database searches by L. K. Grover. These algorithms can be used to crack widespread cryptographic systems like RSA, which relies on the difficulty of factoring big numbers.

If classic cryptographic systems are cracked, all past and future communication using those systems is compromised. Quantum encryption solves this problem, as it can only be successfully attacked in real time. For its implementation, quantum key distribution systems are needed.

This paper surveys the state of the art of quantum key distribution systems as well as their properties.


Can I use acronyms like RSA? It stands for “Rivest, Shamir and Adleman”, which sounds kind of silly in the middle of a sentence.

Author: Eliezer Yudkowsky (Less Wrong)

A few years ago, friends of mine suggested “Harry Potter and the Methods of Rationality” (HPMOR) by Eliezer Yudkowsky (Less Wrong) to me.

This is a story set in the world of Harry Potter, but with a few major changes. Harry’s aunt did not marry Vernon Dursley, but instead Michael Verres-Evans, a professor at Oxford. Additionally, Harry is highly intelligent and learned about the scientific method early in his life.
In Hogwarts, he is, together with Hermione, assigned to Ravenclaw. As soon as he hears anything about Quidditch, he points out how strange and even silly some rules of that game are. He doesn’t get along with Ron, but instead with Draco Malfoy.
Soon Harry takes on two quests: To teach Malfoy about science as well as rational thinking and to discover how magic works on a physical level.

This is a really fascinating and exciting read, with clever dialogue and interesting facts about physics, mathematics and psychology (a lot about different biases and other flaws in human thinking).

There is another work by Less Wrong, but I haven’t gotten around to reading it yet. It is called “Rationality – From AI to Zombies”. It started out as a series of blog posts, called “sequences”, about different topics in rational thinking. The few pages I have already read made a great impression. I am looking forward to reading the rest of it. 🙂