A short version …

Hi guys,

Unfortunately my bachelor's thesis is covered by a non-disclosure agreement, so I am going to summarize another work I was involved in during my bachelor studies instead. The original document's title is "Abschlussbericht Softwareprojekt StreakScore" ("Final report for the software project StreakScore"). Nevertheless I will refer to my bachelor's thesis in the second task.


The research group Multimedia and Security at the University of Magdeburg developed a new way to measure the visibility of a fingerprint, the so-called Streakiness Score.
Our work shows that the Streakiness Score is a valid metric for evaluating how well a fingerprint is visible in an image. To this end, we compare this new metric with a generally accepted one, the so-called NFIQv1 (NIST Fingerprint Image Quality). NFIQv1 is the de facto standard for describing the quality of a fingerprint image.
For our comparison we used a set of 5600 images at different levels of resolution and quality, and evaluated them with some simple statistical methods. We inspect noticeable differences in the results and draw conclusions about the characteristics of the fingerprint images that lead to these differences between the two metrics. Finally, we show by targeted manipulation of example fingerprint images that the Streakiness Score performs better than the popular NFIQv1 in the most difficult cases.


The main task of this software project is: "Verification of the 'Streakiness Score' with reference to its applicability for determining the visibility of a fingerprint."

Additionally, two specific tasks are given:

    • Analysis of the calculation of the Streakiness Score
    • Comparison of the distribution of values of Streakiness Score with the distribution of values of NFIQv1

Understanding the main goal requires at least a superficial knowledge of the system into which the Streakiness Score is to be integrated. The system is described in the research paper "Visibility enhancement and validation of segmented latent fingerprints in crime scene forensics" [1]. The authors evaluate the suitability of a special capture device, a chromatic white-light sensor, for an easier and largely automatic way to photograph fingerprints at crime scenes. The fingerprints are detected and photographed in three major steps:

  1. Complete scan of a large area at low resolution
  2. Locating the fingerprints
  3. Rescan of the fingerprints at high resolution

The Streakiness Score is heavily involved in the second step, where it is used to locate all fingerprints that are usable for further processing.


How does the Streakiness Score work?
The Streakiness Score, the value describing the visibility of a fingerprint in an image, is calculated in two major steps. In the first step the given image is processed by several filters, resulting in an improved version that consists only of white papillary lines on a black background.

Image manipulation steps of the Streakiness Score

The second step calculates the visibility value based on the number of pixels representing the papillary lines. The result is a real number between 0 (bad visibility) and 1 (very good visibility).
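To make the idea concrete, here is a minimal sketch in Python. The real filter pipeline is more elaborate (it is only shown as a figure above); this toy version simply binarizes the image with a fixed threshold of my own choosing and uses the ridge-pixel ratio as the visibility value:

```python
import numpy as np

def streakiness_score(image, threshold=128):
    """Toy sketch of the second step: after filtering, the image contains
    white papillary lines on black background, so the fraction of ridge
    pixels serves as a visibility value in [0, 1]."""
    binary = np.asarray(image) >= threshold  # white lines vs. black background
    return float(binary.mean())              # fraction of ridge pixels

# Synthetic "image": half ridge pixels, half background
demo = np.zeros((4, 4), dtype=np.uint8)
demo[:2, :] = 255
print(streakiness_score(demo))  # → 0.5
```

Note that the real score also depends on the preceding filter steps, which this sketch skips entirely.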

How does the NFIQv1 work?
The calculation of NFIQv1 is based on a neural network, which yields an integer value between 1 (very good quality) and 5 (very bad quality).

Processing steps of NFIQv1
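As an illustration of the classification idea only: NFIQv1 extracts an 11-dimensional feature vector from the fingerprint image and feeds it into a trained neural network, whose winning output class is the quality value. The single layer and random weights below are placeholders, not the real trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def nfiq_sketch(features, weights, bias):
    """Toy stand-in for the NFIQv1 classifier: the feature vector goes
    through a (here: single-layer) network and the winning output class
    becomes the quality value 1 (best) to 5 (worst). The real NFIQv1
    uses a trained multilayer perceptron."""
    logits = features @ weights + bias
    return int(np.argmax(logits)) + 1  # classes 0..4 -> quality 1..5

# Random placeholders for the 11-dimensional feature vector and weights
features = rng.normal(size=11)
weights = rng.normal(size=(11, 5))
bias = np.zeros(5)
quality = nfiq_sketch(features, weights, bias)
print(quality)
```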

Our Realization

Our comparison is based on some freely usable example image databases, but also on some non-open research image databases. Our image pool consists of latent and exemplar images. Latent images contain image noise, e.g. another fingerprint crossing the inspected one. The resolution of all images lies between 500 ppi and 1000 ppi.
We calculated our statistics and put the results into diagrams. To investigate the impact of certain image properties, we manipulated images with some GIMP [2] filters:

  • Gaussian blur with kernel sizes of 5 and 10
  • Grid with a line stroke of 1 px and 6 px
  • Erode filter
  • Dilate filter
  • Sharpen

We also evaluated the impact of brightness and contrast, both of the whole image and of parts of the image, on the two considered metrics.
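We applied these manipulations interactively in GIMP; as a scriptable stand-in, most of them can be reproduced with Pillow (the grid overlay has no direct Pillow equivalent and is omitted, and the filter sizes here are illustrative choices, not our exact GIMP settings):

```python
from PIL import Image, ImageEnhance, ImageFilter

def degrade_variants(img):
    """Pillow stand-ins for the GIMP manipulations used in the report."""
    img = img.convert("L")  # work on the grayscale image
    return {
        "gauss_5": img.filter(ImageFilter.GaussianBlur(radius=5)),
        "gauss_10": img.filter(ImageFilter.GaussianBlur(radius=10)),
        "erode": img.filter(ImageFilter.MinFilter(3)),   # grayscale erosion
        "dilate": img.filter(ImageFilter.MaxFilter(3)),  # grayscale dilation
        "sharpen": img.filter(ImageFilter.SHARPEN),
        "low_contrast": ImageEnhance.Contrast(img).enhance(0.5),
        "bright": ImageEnhance.Brightness(img).enhance(1.5),
    }

# Synthetic stand-in for a fingerprint image
demo = Image.new("L", (32, 32), color=128)
variants = degrade_variants(demo)
print(sorted(variants))
```

Each degraded variant can then be fed to both metrics to see how they react.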

Local contrast changes (Fingerprint (PSF).png from Wikipedia)


We used three statistical tools to evaluate the calculated values: average diagrams, scatter plots, and the correlation coefficient. While the average diagrams did not provide much useful visualization, the scatter plots allowed us to keep track of outliers. The correlation coefficient did not show a good correlation, as we had expected.
A deeper analysis of the outliers highlights the fact that the two metrics, Streakiness Score and NFIQv1, work in completely different ways and are therefore affected by different image characteristics.
Nonetheless the Streakiness Score outperforms the NFIQv1, because it rates a fingerprint much more like a human would.
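The correlation check itself is a one-liner once the scores are collected. The values below are made-up stand-ins for our real per-image results; since a high Streakiness Score should pair with a low (i.e. good) NFIQv1 value, a strong relationship would show up as a clearly negative coefficient:

```python
import numpy as np

# Hypothetical per-image values (stand-ins for the real 5600-image results):
# Streakiness Score lies in [0, 1], NFIQv1 in {1, ..., 5}.
streakiness = np.array([0.9, 0.8, 0.7, 0.4, 0.2, 0.1])
nfiq = np.array([1, 2, 1, 4, 3, 5])

# Pearson correlation coefficient between the two metrics
r = float(np.corrcoef(streakiness, nfiq)[0, 1])
print(round(r, 2))
```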

Summary and Outlook

Based on our research we draw three conclusions:

  1. The Streakiness Score does not correlate with NFIQv1
  2. The Streakiness Score rates poor-quality fingerprint images low
  3. The Streakiness Score rates good-quality fingerprint images high

The image manipulations affect the Streakiness Score in a manner that closely follows human intuition.
All in all, we are convinced that the Streakiness Score is a good metric for measuring the quality of a fingerprint; at least it outperforms the generally accepted NFIQv1 on our given test databases.


[1] – A. Makrushin, T. Kiertscher, M. Hildebrandt, J. Dittmann, C. Vielhauer: "Visibility enhancement and validation of segmented latent fingerprints in crime scene forensics", Proc. SPIE 8665, 2013
[2] – http://www.gimp.org – date of last access: 13.04.2016, 01:00

What are the things I liked during the writing process of my bachelor's thesis?

I really had fun writing the introduction, which allowed me to link some basic topics and related questions together. Maybe it was clarifying what the thesis is intended for that was such a nice feeling. The whole introduction chapter was easy to write. Besides that, I liked simplifying explanations of algorithms when their original descriptions in scientific papers contained too many details unnecessary for my work. This leads me to the hypothesis that I like to summarize things.

What were my difficulties?

Even though I liked summarizing procedures, I did not like explaining things in full detail. If everything has to be very precise, which I am sure is how scientific work works, many sentences contain very similar statements and differ only in specific details. Making such passages interesting and avoiding repetition, so that readers don't fall asleep while reading, was very difficult for me.
Finding the right positions to insert references was also hard for me. In general I am sure I used too few references.

Am I satisfied with the result?

I am definitely not completely satisfied with my result. The main cause is that my time management was very bad. The good writing progress on the first chapter led me to underestimate the time required for writing the whole bachelor's thesis. Especially the last chapters were not as mature as the first ones. Maybe with some more time I would have written better final chapters.