
Down-sampling

In order to decrease the size of an image, we need to down-sample it. Each pixel in the new, smaller image corresponds to multiple pixels in the original, larger image. We can compute the value of a pixel in the new image by doing one of the following (a short sketch of both approaches appears after this list):

  • Dropping some pixels from the larger image in a systematic way (for example, dropping every other row and column if we want an image one-fourth of the size of the original)
  • Computing the new pixel value as an aggregate value of the corresponding multiple pixels in the original image
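
To make both options concrete, here is a minimal NumPy sketch (illustrative only, not the book's code; the file path and the factor of five are assumptions carried over from the example that follows):

import numpy as np
from PIL import Image

im = np.array(Image.open("../images/tajmahal.jpg"), dtype=float)
f = 5                                   # down-sampling factor
h, w = (im.shape[0] // f) * f, (im.shape[1] // f) * f
im = im[:h, :w]                         # crop so height and width are multiples of f

# Approach 1: drop pixels -- keep every f-th row and every f-th column
decimated = im[::f, ::f]

# Approach 2: aggregate -- average each f x f block of pixels
blocks = im.reshape(h // f, f, w // f, f, -1)
averaged = blocks.mean(axis=(1, 3))

print(decimated.shape, averaged.shape)  # both are (h//f, w//f, 3) for an RGB image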

Let's use the tajmahal.jpg image and resize it to an output image that is 25 times smaller than the input, again using the resize() function from the PIL library:

from PIL import Image

im = Image.open("../images/tajmahal.jpg")
im.show()

Reduce the width and height of the input image by a factor of five (that is, reduce the size of the image by a factor of 25) simply by keeping one row out of every five rows and one column out of every five columns of the input image:

import pylab

im = im.resize((im.width//5, im.height//5))  # reduce width and height by a factor of five
pylab.figure(figsize=(15,10)), pylab.imshow(im), pylab.show()

Here's the output:

As you can see, it contains some black patches/artifacts and patterns that were not present in the original image—this effect is called aliasing.

Aliasing typically happens because the sampling rate is lower than the Nyquist rate (we had too few pixels!). One way to avoid aliasing, then, is to increase the sampling rate above the Nyquist rate, but what if we want an output image of a smaller size?
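
If the output image has to stay small, the usual remedy is to low-pass filter (smooth) the image before or while down-sampling it. As a minimal sketch (an illustrative assumption, not necessarily the approach used next), Pillow's resize() applies such a filter when we pass a higher-quality resampling filter such as Image.LANCZOS:

from PIL import Image
import pylab

im = Image.open("../images/tajmahal.jpg")
# LANCZOS resampling low-pass filters the image while down-sampling,
# which suppresses the aliasing patterns seen in the previous output
im_small = im.resize((im.width//5, im.height//5), Image.LANCZOS)
pylab.figure(figsize=(15,10)), pylab.imshow(im_small), pylab.show()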