Contrast is one of the most important properties of an image and contrast adjustment is one of the easiest things we can do to make our images look better. There are many ways to adjust the contrast and with most of them we have to be careful because they can artificially change our images. The title image shows three basic methods of contrast adjustment and how they affect the image histogram. We will cover all three methods here but first let us consider what it means for an image to have low contrast.
Each pixel in an image has an intensity value. In an 8-bit image the values can be between 0 and 255. Contrast issues occur when the largest intensity value in our image is smaller than 255 or the smallest value is larger than 0. The reason is that we are not using the entire range of possible values, which makes the image overall darker than it could be. So what about the contrast of our example image? We can use .max() and .min() to find the maximum and minimum intensity.
import numpy as np
import matplotlib.pyplot as plt
from skimage import io, exposure, data

image = io.imread("example_image.tif")

image.max()  # 52
image.min()  # 1
Our maximum of 52 is very far from 255, which explains why our image is so dark. The minimum is almost perfect. To correct the contrast we can use the exposure module, which gives us the rescale_intensity() function.
image_minmax_scaled = exposure.rescale_intensity(image)

image_minmax_scaled.max()  # 255
image_minmax_scaled.min()  # 0
Now both the minimum and the maximum are optimized. All pixels that were equal to the original minimum are now 0 and all pixels equal to the maximum are now 255. But what happens to the values in the middle? Let’s look at a smaller example.
arr = np.array([2, 40, 100, 205, 250], dtype=np.uint8)
arr_rescaled = exposure.rescale_intensity(arr)
# array([  0,  39, 100, 208, 255], dtype=uint8)
As expected, the minimum 2 became 0 and the maximum 250 became 255. In the middle, 40 became slightly smaller, 100 stayed the same, and 205 became larger. Let us go through each step to see how we got there.
arr = np.array([2, 40, 100, 205, 250], dtype=np.uint8)
vmin, vmax = arr.min(), arr.max()

# Subtract the minimum
arr_subtracted = arr - vmin
# array([  0,  38,  98, 203, 248], dtype=uint8)

# Divide by the maximum of the subtracted array
arr_divided = arr_subtracted / (vmax - vmin)
# array([0.        , 0.15322581, 0.39516129, 0.81854839, 1.        ])

# Multiply by the maximum of the data type
arr_multiplied = arr_divided * 255
# array([  0.        ,  39.07258065, 100.76612903, 208.72983871,
#        255.        ])

# Convert back to the original uint8 dtype
arr_rescaled = np.asarray(arr_multiplied, dtype=arr.dtype)
# array([  0,  39, 100, 208, 255], dtype=uint8)
We can get there in four simple steps: subtract the minimum, divide by the maximum of the subtracted array, multiply by the maximum value of the data type, and finally convert back to the original data type. This works well if we want to rescale by the minimum and maximum of the image, but sometimes we need to use different values. The maximum in particular can easily be dominated by noise: for all we know, it could come from a single pixel that is not representative of the entire image. As you can see from the histogram in the title image, very few pixels are near the maximum. This means we can use percentile rescaling with little information loss.
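The four steps can also be collected into a small reusable function. Here is a minimal sketch for the minimum/maximum case; the name rescale_minmax is ours, not part of skimage, and it assumes an 8-bit image:

```python
import numpy as np

def rescale_minmax(image):
    """Rescale a uint8 image to the full 0-255 range using the four
    steps above: subtract the minimum, divide by the new maximum,
    multiply by the dtype maximum, convert back to the original dtype."""
    vmin, vmax = image.min(), image.max()
    scaled = (image.astype(float) - vmin) / (vmax - vmin)
    return np.asarray(scaled * 255, dtype=image.dtype)

arr = np.array([2, 40, 100, 205, 250], dtype=np.uint8)
rescale_minmax(arr)
# array([  0,  39, 100, 208, 255], dtype=uint8)
```

The intermediate cast to float avoids integer division and overflow before the result is converted back to uint8.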
percentiles = np.percentile(image, (0.5, 99.5))
# array([ 1., 28.])
scaled = exposure.rescale_intensity(image, in_range=tuple(percentiles))
This time we scale from 1 to 28, which means that all values greater than or equal to 28 become the new maximum 255. As we chose the 99.5th percentile, this affects roughly 0.5% of the brightest pixels. You can see the consequence in the image on the right: the image becomes brighter, but we also lose information. Pixels that were distinguishable before now look exactly the same, because they are all 255. It is up to you whether you can afford to lose those features. If you do quantitative image analysis, you should rescale with caution and always look out for information loss.
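This loss of distinguishability is easy to demonstrate on a small array. A sketch, with made-up values, reusing the (1, 28) range from above:

```python
import numpy as np
from skimage import exposure

# 30 and 50 are clearly different before rescaling ...
arr = np.array([1, 10, 28, 30, 50], dtype=np.uint8)

# ... but everything at or above the upper bound of in_range is
# clipped, so 28, 30 and 50 all end up at 255 and can no longer
# be told apart.
clipped = exposure.rescale_intensity(arr, in_range=(1, 28))
```

After rescaling, clipped[2], clipped[3] and clipped[4] are all 255, even though the original pixels held three different values.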