Lecture - 20 Image Enhancement


Hello, welcome to the video lecture series on digital image processing. For the last few classes, we have been discussing image enhancement techniques, and we have completed our discussion of point processing techniques for image enhancement.

So, what we have covered till now are the point processing techniques for image enhancement. Under this, the first operation that we discussed is image negatives, and there we have seen that the image negative operation is useful in case the image contains information in the gray or white pixels that are embedded in dark image regions.

So, if you take the negative of such images, then the information content becomes dark whereas the background becomes white, and visualization of that information in those negative images is much easier.
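As a quick sketch of this operation (a minimal NumPy illustration, not from the lecture itself), the negative of an image with grey levels in [0, L-1] is s = (L - 1) - r:

```python
import numpy as np

def image_negative(img, L=256):
    """Image negative: s = (L - 1) - r. Bright detail embedded in
    dark regions becomes dark detail on a light background, which
    is easier to visualize."""
    return ((L - 1) - img.astype(np.int32)).astype(np.uint8)

# Bright pixels embedded in a dark region become dark on white.
img = np.array([[0, 10], [200, 255]], dtype=np.uint8)
neg = image_negative(img)
```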

The second operation that we covered is the logarithmic transformation for dynamic range compression. We use this logarithmic transformation because we have seen that in some cases the dynamic range, that is the difference between the minimum intensity value and the maximum intensity value of an image, is so high that a display device is normally not capable of dealing with it.

So, for such cases, we have to reduce the dynamic range of the image so that the image can be displayed properly on the given display device, and the logarithmic transformation provides such a dynamic range compression.
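A minimal NumPy sketch of this compression follows; choosing the scaling constant c so that the maximum input maps to the top display level is an assumption for illustration, not something fixed by the lecture:

```python
import numpy as np

def log_transform(img, L=256):
    """Dynamic range compression: s = c * log(1 + r), where
    c = (L - 1) / log(1 + max(r)) maps the largest input value
    to the largest displayable grey level."""
    c = (L - 1) / np.log1p(img.max())
    return np.rint(c * np.log1p(img)).astype(np.uint8)

# A spectrum-like array whose dynamic range no display can handle.
wide = np.array([[1.0, 100.0], [10_000.0, 1_000_000.0]])
compressed = log_transform(wide)
```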

Then the next technique we talked about is the power-law transformation. In case of the power-law transformation, we have seen that many devices, whether the image printing device, the image display device, or the image acquisition device, themselves introduce some sort of power-law operation on the image that is to be displayed.

As a result, the image that we want to display or print becomes distorted; the appearance of the output image is not the same as the image that is intended to be output. So, this power-law which is introduced by those devices has to be corrected by a compensating power-law operation.

So, we have seen that in case of this power-law compensation, we pre-process the image with a power-law which is the inverse of the power-law introduced by the device, and as a result, when the pre-processed image goes to the device, the output of the device will be almost identical to the intended image.

So, for this kind of operation, to nullify the effect of the device, we go for the power-law compensation technique. Then the next operation that we covered is contrast stretching.
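This compensation, commonly called gamma correction, can be sketched as below; the gamma value 2.2 is just an assumed device characteristic for illustration:

```python
import numpy as np

def gamma_correct(img, device_gamma, L=256):
    """Pre-process with the inverse power law s = r**(1/gamma) so
    that the device's own response s = r**gamma is cancelled out."""
    r = img / (L - 1)                     # normalize to [0, 1]
    s = r ** (1.0 / device_gamma)         # inverse power law
    return np.rint(s * (L - 1)).astype(np.uint8)

original = np.array([64, 128, 192], dtype=np.uint8)
pre = gamma_correct(original, 2.2)        # image sent to the device
# What the device actually shows after applying its own power law:
shown = np.rint((pre / 255.0) ** 2.2 * 255).astype(np.uint8)
```

Because the pre-processing and the device response are inverse power laws, the displayed values land back very close to the original intensities.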

So, in case of contrast stretching, we have seen that in many cases we can get a very dark image because the scene was not properly illuminated, or was very poorly illuminated, when the image was taken.

The other reason why we can get such a dark image is that while taking the photograph, the camera lens was not properly set, that is, the aperture of the lens was not properly set. We can also get a dark image because of a limitation of the sensor itself: if the dynamic range of the image sensor is very narrow, such a sensor also leads to a dark image.

So, to enhance such dark images so that they can be visualized properly, we go for the contrast enhancement technique. The other kind of image enhancement we talked about is the grey level slicing operation, and this kind of operation is useful in cases where the application needs to highlight a certain range of grey levels in the image.

So, in such cases, in grey level slicing, we have seen two kinds of techniques: in the first, the grey levels within a specified range are highlighted whereas the grey levels outside that particular specified range are suppressed, and in the second kind of grey level slicing operation, the grey levels within the specified range are highlighted whereas the grey levels outside the range remain as they are.
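Both variants can be sketched in a few lines of NumPy; the highlight value 255 and the suppressed value 10 are illustrative choices, not values fixed in the lecture:

```python
import numpy as np

def gray_level_slice(img, lo, hi, keep_background=True):
    """Highlight grey levels in [lo, hi].
    keep_background=True : levels outside the range stay as they are.
    keep_background=False: levels outside the range are suppressed."""
    out = img.copy() if keep_background else np.full_like(img, 10)
    out[(img >= lo) & (img <= hi)] = 255
    return out

img = np.array([[50, 120], [160, 220]], dtype=np.uint8)
variant2 = gray_level_slice(img, 100, 180)                         # background kept
variant1 = gray_level_slice(img, 100, 180, keep_background=False)  # background suppressed
```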

So, these are the two different types of grey level slicing operations we talked about, and as I said, if the application needs enhancement of a certain range of grey levels and is not interested in the other grey level or intensity values, then what we go for is the grey level slicing kind of operation.

Then we talked about other enhancement techniques which are based on histogram processing operations. The other point processing techniques define a transformation function that simply works on a particular pixel of the input image to generate a processed pixel of the output image; those transformation functions do not consider the overall appearance of the image, and we have seen that the overall appearance of the image is actually reflected in what is called the histogram of the image.

So, these histogram-based processing techniques try to modify the overall appearance of the image by modifying the histogram of that particular image, and under this category we talked about two kinds of histogram-based processing techniques: one was the histogram equalization technique and the second was the histogram modification technique.

Then we talked about two other kinds of image enhancement operations which do not operate on a single image but on multiple images. One of them was the image differencing operation. This image differencing operation highlights those regions where the given two images differ; only the regions where the two images are different will be highlighted, and the regions where the two images are similar will be suppressed.

The other kind of operation that we covered was the image averaging operation, and we have said that this kind of image averaging operation is very useful where the object being imaged is of very low intensity. While imaging such objects, the image that you get is likely to be dominated by noise.

So, if I get multiple frames of such noisy images, and if the noise added to the image is a zero-mean noise, then taking the average of multiple frames of such noisy images is likely to cancel the noise part, and ultimately what comes out after the averaging operation is the actual image that is desired. So, these are the different point processing techniques for image enhancement that we covered till our last class.
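The noise-cancellation argument can be checked numerically. In this small simulation (the synthetic scene and the noise parameters are assumptions for illustration), the residual noise drops roughly by the square root of the number of frames:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((64, 64), 100.0)      # the low-intensity object we image

# K noisy frames of the same scene with zero-mean Gaussian noise.
K = 100
frames = [scene + rng.normal(0.0, 20.0, scene.shape) for _ in range(K)]

averaged = np.mean(frames, axis=0)    # frame averaging

noise_single = np.std(frames[0] - scene)   # roughly 20
noise_avg = np.std(averaged - scene)       # roughly 20 / sqrt(K)
```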

Now, in today's class, we will talk about another spatial domain technique which is called the mask processing technique. In the previous lectures also, we were dealing with spatial domain techniques, and we have said that image enhancement techniques can broadly be categorized into spatial domain techniques and frequency domain techniques. The frequency domain techniques we will talk about later on.

So, in today's class, we will talk about another class of spatial domain techniques known as mask processing techniques, and under this we will discuss three different types of operations. The first one is the linear smoothing operation; the second one is a nonlinear operation based on the statistics of the image, known as the median filtering operation; and the third kind of mask processing technique that we will talk about is the sharpening filter. Now, let us see what this mask processing technique means.

Now, in our earlier discussions, we have mentioned that while going for contrast enhancement, what we basically do is, given an input image say f(x, y), we transform this input image by a transformation operator T which gives us an output image g(x, y), and the nature of this output image g(x, y) depends upon the transformation operator T.

In the point processing techniques, we have said that this transformation operator T operates on a single pixel in the image, that is, on a single pixel intensity value. But as we said earlier, T is in general an operator which operates on a neighborhood of the pixel at location (x, y); for the point processing operations, the neighborhood size was 1 by 1. If we consider a neighborhood of size more than 1, say a neighborhood of size 3 by 3, or 5 by 5, or 7 by 7 and so on, then the kind of operation that we get is known as a mask processing operation. So, let us see what this mask processing operation actually means.

Here, we have shown a 3 by 3 neighborhood around a pixel location (x, y). The outer rectangle represents a particular image, and in the middle of it we have shown a 3 by 3 neighborhood taken around the pixel at location (x, y). By mask processing what we mean is: if I consider a neighborhood of size 3 by 3, I also consider a mask of size 3 by 3. On the right hand side, we have shown such a mask of size 3 by 3, and its elements W(-1, -1), W(-1, 0), W(-1, 1), W(0, -1) and so on up to W(1, 1) represent the coefficients of the mask.

So, for all these mask processing techniques, what we do is place this mask on the image so that the mask center coincides with the pixel location (x, y). Once you place the mask on the image, you multiply every coefficient of the mask by the corresponding pixel of the image and then take the sum of all these products.

The sum of all these products is placed at location (x, y) in the image g(x, y). So, by the mask processing operation, we get the mathematical expression

g(x, y) = sum over i = -1 to 1, sum over j = -1 to 1 of W(i, j) f(x + i, y + j),

that is, you take the summation of these products over j varying from -1 to 1 and i varying from -1 to 1.

So, this is the operation that has to be done for a 3 by 3 neighborhood, in which case we need a mask of size 3 by 3. Of course, as we said, we can have masks of higher dimension: I have to consider a mask of size 5 by 5 if I consider a 5 by 5 neighborhood, a mask of size 7 by 7 if I consider a 7 by 7 neighborhood, and so on.

So, if this particular operation is done at every pixel location (x, y) in the image, then the output g(x, y) for the various values of x and y gives us the processed image g. So, this is what we mean by a mask processing operation.
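The whole procedure — center the mask at (x, y), multiply coefficients by the underlying pixels, and sum — can be sketched as below. Border pixels are handled here by replicating the edge, which is one common choice, not something the lecture specifies:

```python
import numpy as np

def apply_mask(f, w):
    """g(x, y) = sum_{i,j} w(i, j) * f(x + i, y + j), with the mask w
    centered on (x, y). Works for any odd-sized mask (3x3, 5x5, ...)."""
    a, b = w.shape[0] // 2, w.shape[1] // 2
    p = np.pad(f.astype(np.float64), ((a, a), (b, b)), mode="edge")
    g = np.zeros(f.shape)
    for i in range(-a, a + 1):           # rows of the mask
        for j in range(-b, b + 1):       # columns of the mask
            g += w[i + a, j + b] * p[a + i : a + i + f.shape[0],
                                     b + j : b + j + f.shape[1]]
    return g

# A mask whose only nonzero coefficient is the center leaves f unchanged.
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
```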

Now, the first mask processing operation that we will consider is the image averaging or image smoothing operation. Image smoothing is a spatial filtering operation where the value at a particular location (x, y) in the processed image is the average of all the pixel values in the neighborhood of (x, y). Because it is an average, this is also known as an averaging filter, and later on we will see that this averaging filter is nothing but a low pass filter. When we have such an averaging filter, the corresponding mask can be represented in this form.

So, again here we are showing a 3 by 3 mask, and here all the coefficients in this 3 by 3 mask are equal to 1. Going back to our mathematical expression, I get an expression of the form

g(x, y) = (1/9) sum over i = -1 to 1, sum over j = -1 to 1 of f(x + i, y + j).

So naturally, as this expression says, what we are doing is taking the summation of all the pixels in the 3 by 3 neighborhood of the pixel location (x, y) and then dividing this summation by 9, which is nothing but the average of all the pixel values in the 3 by 3 neighborhood of (x, y), including the pixel at location (x, y) itself, and this average is placed at location (x, y) in the processed image g.

So, this is what is known as an averaging filter, and it is also called a smoothing filter; the particular mask for which all the filter coefficients are the same (equal to 1 in this particular case) is known as a box filter.
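A direct sketch of this box filter (all nine coefficients 1, result divided by 9; edge replication at the borders is an assumed choice):

```python
import numpy as np

def box_filter(f, size=3):
    """Average each pixel with its size x size neighborhood:
    g(x, y) = (1/size^2) * sum of f over the neighborhood."""
    k = size // 2
    p = np.pad(f.astype(np.float64), k, mode="edge")
    g = np.zeros(f.shape)
    for i in range(size):
        for j in range(size):
            g += p[i : i + f.shape[0], j : j + f.shape[1]]
    return g / (size * size)

# An isolated bright pixel is spread over its neighborhood (blurring).
img = np.zeros((5, 5)); img[2, 2] = 9.0
smoothed = box_filter(img)
```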

Now, when we perform this kind of operation, then naturally, because we are averaging all the pixels in the neighborhood, the output image is likely to be a smoothed image; that means it will have a blurring effect: all the sharp transitions in the image will be removed and replaced by blurred ones.

As a result, any sharp edge in the image will also be blurred. So, to reduce the effect of blurring, there is another kind of averaging or smoothing mask which performs a weighted average.

Such a mask is given here. In this mask, the center coefficient is equal to 4; the coefficients vertically up, vertically down, horizontally left and horizontally right are equal to 2; and the diagonal neighbors of the center element are equal to 1. So effectively, when we take the average, we weight every pixel in the neighborhood by the corresponding coefficient, and what we get is a weighted average.

So, the center pixel, that is the pixel at location (x, y), gets the maximum weightage, and as you move away from the center location, the weightage of the pixels is reduced. When we apply this kind of mask, our general expression for the mask operation becomes

g(x, y) = (1/16) sum over i = -1 to 1, sum over j = -1 to 1 of W(i, j) f(x + i, y + j),

where the sum of the products is divided by 16 because the mask coefficients sum to 16, and that gives the value which is to be placed at location (x, y) in the processed image g. So, this becomes the expression for g(x, y).
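The weighted mask just described, with its 1/16 normalizer, can be written out directly (a small loop-based sketch, again with assumed edge replication at the borders):

```python
import numpy as np

# Center weight 4, the four 4-neighbours weight 2, diagonals weight 1;
# the coefficients sum to 16, hence the 1/16 normalization factor.
W = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=np.float64) / 16.0

def weighted_average(f):
    p = np.pad(f.astype(np.float64), 1, mode="edge")
    g = np.zeros(f.shape)
    for i in range(3):
        for j in range(3):
            g += W[i, j] * p[i : i + f.shape[0], j : j + f.shape[1]]
    return g
```

Because the weights sum to 1 after normalization, a constant image passes through unchanged, and an impulse is spread less widely than with the box filter.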

Now, the purpose of going for this kind of weighted averaging is that, because we are weighting the different pixels when taking the average, the blurring effect will be reduced. In case of the box filter, the image will be quite blurred, and of course the blurring will increase if I go for a bigger and bigger neighborhood size or mask size. When we go for weighted averaging, the blurring effect will be reduced. Now, let us see what kind of result we get.

This gives the general expression: when we consider arbitrary coefficients W(i, j), we have to have a normalization factor, that is, the summation has to be divided by the sum of the coefficients. As we said, a 3 by 3 neighborhood is only a special case; I can have neighborhoods of other sizes. In general, we can have a neighborhood of size M by N where M = 2a + 1 and N = 2b + 1, with a and b some positive integers. Here it is also shown that the mask is usually of odd dimension, not even dimension, and it is normally masks of odd dimension which are used in image processing.

Now, using this kind of mask operation, here we have shown some results. The top left image is a noisy image. When you do the masking or averaging operation on this noisy image, the right top image shows the result of averaging with a mask of size 3 by 3, the left bottom image is obtained using a mask of size 5 by 5, and the right bottom image is obtained using a mask of size 7 by 7.

So, from these images, it is quite obvious that as I increase the size of the mask, the blurring effect becomes more and more pronounced. We find that the right bottom image, which is obtained with a mask of size 7 by 7, is much more blurred compared to the other two images, and this effect is more prominent if you look at the edge regions of these images.

Say, if I compare a particular region with the similar region in the original image, you find that in the original image it is very sharp, whereas when I do the smoothing using a 7 by 7 mask it becomes very blurred, and the blurring effect when I use the 3 by 3 mask is much less. Similar results are obtained with other images also.

So, here is another image. Again, we do the masking operation or the smoothing operation

with different mask sizes. On the top left, we have an original noisy image and the other

images are the smoothed images using various mask sizes. So, on the right top, this is

obtained using a mask of size 3 by 3, the left bottom is an image obtained using a mask

of size 5 by 5 and the right bottom is an image obtained using a mask of size 7 by 7.

So, we find that as we increase the mask size, the noise is reduced to a greater extent, but at the cost of added blurring. Though the noise is reduced, the image becomes very blurred. That is the effect of using box filters or smoothing filters: the noise will be removed, but the images will be blurred, or the sharp contrast in the image will be reduced.

So, there is a second kind of masking operation, based on order statistics, which reduces this kind of blurring effect. Let us consider one such filter based on order statistics.

Unlike the earlier filters, these order statistics filters are nonlinear filters. In case of these filters, the response is based on the ordering of the intensities, that is, the ordering of the pixel values in the neighborhood of the point under consideration. What we do is take the set of intensity values in the neighborhood of the point (x, y), order all those intensity values, and based on this ordering, select a value which will be put at location (x, y) in the processed image g; that is how we get the processed output image.

But here the processing is done using the order statistics filter. A widely used filter under this order statistics category is what is known as the median filter. In case of a median filter, what I do is take a 3 by 3 neighborhood around the point (x, y) of the image and consider the intensity values of all the 9 pixels in this 3 by 3 neighborhood.

Then, I arrange these pixel intensity values in a certain order and take the median of these values. Now, how do we define the median? We define the median, say zeta, of a set of values such that half of the values in the set will be less than or equal to zeta and the remaining half will be greater than or equal to zeta.

So, let us take a particular example. Suppose I take a 3 by 3 neighborhood around a pixel location (x, y), and the intensity values in this 3 by 3 neighborhood are

100  85  98
 99 105 102
 90 101 108

and suppose this represents a part of my image f(x, y).

Now, what I do is take all these intensity values and put them in ascending order of magnitude: 85, 90, 98, 99, 100, 101, 102, 105, 108. So, all these 9 intensity values are put in ascending order of magnitude, and from this ordering I take the fifth value, which is equal to 100.

So, if I take this fifth value, you find that there is an equal number of values greater than or equal to it and an equal number of values less than or equal to it. So, I consider this particular value 100, and when I generate the image g(x, y), at location (x, y) I put this value 100, which is the median of the pixel values within this neighborhood. This gives my processed image g(x, y).
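A direct sketch of the median filter (edge replication at the borders is again an assumed choice), applied to the very neighborhood used in the example:

```python
import numpy as np

def median_filter(f, size=3):
    """Replace each pixel by the median of its size x size neighborhood."""
    k = size // 2
    p = np.pad(f, k, mode="edge")
    g = np.empty_like(f)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.median(p[x : x + size, y : y + size])
    return g

# The 3x3 neighborhood from the example; its nine values sorted are
# 85, 90, 98, 99, 100, 101, 102, 105, 108, so the median is 100.
patch = np.array([[100,  85,  98],
                  [ 99, 105, 102],
                  [ 90, 101, 108]])
filtered = median_filter(patch)
```

Unlike averaging, an extreme outlier (a salt or pepper pixel) never contributes to the output value; it is simply out-voted by its neighbors.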

Of course, the intensities in other locations in other pixel regions will be decided by

the median value of the neighborhood of the corresponding pixels. That is if I want to

find out what will be the pixel value at this location, then the neighborhood that I have

to consider will be this particular neighborhood.

So, this is how I can get the median filtered output, and as you can see, this kind of filtering operation is based on order statistics. Now, let us see what kind of result we can have using this median filter.

So here, again on the same building image: the left top is our original noisy image; on the right hand side is the smoothed image using the box filter; and on the bottom, we have the image obtained using the median filter.

Here again, as you can see, the processed image obtained using the median filter operation maintains the sharpness of the image to a greater extent than that obtained using the smoothing filter.

Coming to the second one, again this is one of the images that we have shown earlier, a noisy image having 4 coins. Here again, you find that after doing the smoothing operation, the edges become blurred and at the same time the noise is not reduced to a great extent; this particular image is still noisy.

So, if I want to remove all this noise, I have to smooth this image using a larger neighborhood size, and the moment I go for a larger neighborhood size, the blurring effect becomes more and more severe. On the right hand side, we also have a processed image, but here the filtering operation used is the median filter.

So here, we find that because of the median filtering operation, the noise in the output image has almost vanished, but at the same time the contrast or the sharpness of the image remains more or less intact. So, this is the advantage that you get if you go for median filtering rather than smoothing or averaging filtering. To show the advantage of this median filtering, we will take another example.

So, this is a noisy image of a butterfly. On the bottom left is an averaged image where the averaging is done over a neighborhood of size 5 by 5. On the bottom right is the image filtered using the median filter.

This result clearly shows the superiority of median filtering over the smoothing or averaging operation, and such median filtering is very useful for a particular kind of random noise known as salt and pepper noise, so called because of the appearance of the noise in the image.

So, these are the different filtering operations which reduce the noise in an image, or which introduce blurring or smoothing over the image. We will now consider another kind of spatial filter which increases the sharpness of the image. The spatial filter that we will consider now is called the sharpening spatial filter.

The objective of this sharpening spatial filter is to highlight the details, the intensity variations, in an image. Through our earlier discussion, we have seen that if I do averaging or smoothing over an image, then the image becomes blurred, or the details in the image are removed. Now, this averaging operation is equivalent to an integration operation.

So, if I integrate the image, what I am going to get is a blurring or smoothing effect on the image. If integration gives a smoothing effect, it is quite logical to think that if I do the opposite operation, that is, differentiation instead of integration, then the sharpness of the image is likely to be increased. So, it is the derivative operations, the differentiations, which are used to increase the sharpness of an image.

Now, when I go for the derivative operations, I can use two types of derivatives: the first order derivative or the second order derivative. So, I can use either the first order derivative operation or the second order derivative operation to enhance the sharpness of the image. Now, let us see what the desirable effects of these derivative operations are.

If I use a first order derivative filter, then the desirable behavior of this filter is that its response must be 0 in areas of constant grey level in the image, non zero at the onset of a grey level step or ramp, and non zero along ramps. Whereas, if I use a second order derivative filter, then its response should be 0 in the flat areas, non zero at the onset and end of a grey level step or ramp, and 0 along ramps of constant slope. So, these are the desirable responses of a first order derivative filter and of a second order derivative filter.

Now, whichever derivative filter I use, whether a first order derivative filter or a second order derivative filter, I have to look for a discrete domain formulation of those derivative operations. So, let us see how we can formulate the first order and the second order derivative operations in the discrete domain.

Now, we know the definition of the derivative in the continuous domain. Let us consider a 1 dimensional case: if I have a function f(x) of a variable x, then its derivative is given by

df(x)/dx = lim (delta x -> 0) [f(x + delta x) - f(x)] / delta x.

So, this is the definition of the derivative in the continuous domain.

Now, when I come to the discrete domain: our digital images are represented by a discrete set of points or pixels at different grid locations, and the minimum distance between 2 pixels is equal to 1.

So, in our case, we will consider the value of delta x equal to 1, and the derivative operation in 1 dimension reduces to

del f / del x = f(x + 1) - f(x).

Here I use the partial derivative notation because our image is a 2 dimensional image; when I take the derivative in 2 dimensions, we will have partial derivatives along x and partial derivatives along y. So, the first order derivative of a 1 dimensional discrete signal is given by this particular expression.

Similarly, the second order derivative of a discrete signal in 1 dimension can be approximated by

del^2 f / del x^2 = f(x + 1) + f(x - 1) - 2 f(x).

So, this is the first order derivative and this is the second order derivative, and you find that these two definitions of the derivative operations satisfy the desirable properties that we discussed earlier. Now, to illustrate the response of these derivative operations, let us take an example.
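These two discrete definitions can be checked on a small 1-D profile. The particular signal values below are an illustrative assumption, chosen to contain a flat region, a ramp, an isolated point, and a step:

```python
import numpy as np

# Flat run, a descending ramp, an isolated bright point, then a step.
f = np.array([6, 6, 6, 5, 4, 3, 2, 1, 1, 1, 8, 1, 1, 1, 6, 6, 6],
             dtype=float)

first = f[1:] - f[:-1]                  # f(x+1) - f(x)
second = f[2:] + f[:-2] - 2 * f[1:-1]   # f(x+1) + f(x-1) - 2 f(x)
```

On this profile the first derivative is non zero all along the ramp, the second derivative is zero inside the ramp but fires at its onset and end, the isolated point draws a much stronger second derivative response, and the step produces the characteristic positive-to-negative double response.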

So, here is a 1 dimensional signal where the values of the signal for various values of x are given in the form of an array, and the plot of these discrete values is given on the top. Now, if you take the first order derivative as we have just defined, it is given in the second array, and the second order derivative is given in the third array.

If you look at the plot of these functional values, it contains various regions: several flat regions, a ramp region, an isolated point, a very thin line, and a step kind of discontinuity.

So now, if you compare the responses of the first order derivative and the second order derivative of this particular discrete function, you find that the first order derivative is non zero along the ramp, whereas the second order derivative is 0 along the ramp; the second order derivative is non zero only at the onset and end of the ramp.

Similarly coming to this isolated point, if I compare the response of the first order

derivative and the response of the second order derivative, you find that the response

of the second order derivative for an isolated point is much stronger than the response of

the first order derivative.

Similar is the case for a thin line: the response of the second order derivative is greater than the response of the first order derivative. Coming to the step edge, the responses of the first order derivative and the second order derivative are almost the same, but the difference is that in case of the second order derivative, there is a transition from positive polarity to negative polarity.

Now, because of this transition from positive polarity to negative polarity, the second order derivative normally leads to a double line at a step discontinuity in an image, whereas the first order derivative leads to a single line. The usefulness of this double line we will discuss later.

But as we have seen, the second order derivative gives a stronger response to isolated points and to thin lines, and because the details in an image normally take the form of isolated points or thin lines, it is quite natural to think that a second order derivative based operator will be most suitable for image enhancement operations.

So, our observation is as we have discussed previously that first order derivative generally

produce a thicker edge because we have seen that during a ramp or along a ramp, the first

order derivative is non zero whereas, the second order derivative along a ramp is 0

but it gives non zero values at the starting of the ramp and the end of the ramp.

So, that is why the first order derivatives generally produce a thicker edge in an image.

The second order derivative gives a stronger response to fine detail such as thin lines and isolated points. The first order derivative has a stronger response to a gray level step, and the second order derivative produces a double response at step edges. And as we have already said, since the details in the image are either in the form of isolated points or thin lines, the second order derivatives are better suited for image enhancement operations.
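These observations can be checked numerically on a small 1-dimensional profile containing a ramp, an isolated point and a step. This is a minimal sketch; the profile values are made up purely for illustration:

```python
import numpy as np

# Hypothetical 1-D profile: flat, descending ramp, flat,
# an isolated bright point, flat, then a step edge.
f = np.array([5, 5, 5, 4, 3, 2, 1, 1, 1, 8, 1, 1, 1, 6, 6, 6], dtype=float)

first = np.diff(f)                       # f(x+1) - f(x)
second = f[2:] - 2 * f[1:-1] + f[:-2]    # f(x+1) + f(x-1) - 2 f(x)

# Along the ramp (x = 2..6) the first derivative is non-zero everywhere,
# which is why first order derivatives produce thicker edges...
print(first[2:6])        # [-1. -1. -1. -1.]
# ...while the second derivative is zero inside the ramp and responds
# only at the two ends of the ramp.
print(second[2:5])       # [0. 0. 0.]
# At the isolated point (x = 9) the second derivative response (|-14|)
# is much stronger than the first derivative response (|7|).
print(first[8], second[8])
```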

So, we will mainly discuss the second order derivatives for image enhancement. But to use this for image enhancement operations, because our images are digital, as we have said many times, we have to have a discrete formulation of this second order derivative operation, and the filter that we will design should be isotropic.

That means the response of the second order derivative filter should be independent of the orientation of the discontinuity in the image, and the most widely used and popularly known second order derivative operator of isotropic nature is what is known as the Laplacian operator.

So, we will discuss the Laplacian operator, and as we know, the Laplacian of a function is given by ∇²f = ∂²f/∂x² + ∂²f/∂y². So, this is the Laplacian operator in the continuous domain, but what we have to have is the Laplacian operator in the discrete domain.

And, as we have already seen, ∂²f/∂x² in the discrete domain is approximated as f(x+1) + f(x−1) − 2f(x).

So, this is in the case of a 1-dimensional signal. In our case, our function is a 2-dimensional function, that is, a function of the variables x and y. So, for a 2-dimensional signal we can write ∂²f/∂x² = f(x+1, y) + f(x−1, y) − 2f(x, y). Similarly, ∂²f/∂y² = f(x, y+1) + f(x, y−1) − 2f(x, y).

And, if I add these two, I get the Laplacian operator in the discrete domain, which is given by ∇²f = ∂²f/∂x² + ∂²f/∂y² = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y), and this particular operation can again be represented in the form of a 2-dimensional mask. That is, for this Laplacian operator, we can have a 2-dimensional mask, and the 2-dimensional mask in this particular case will be given like this.

So, on the left hand side, the mask that is shown, this mask considers the Laplacian operation

only in the vertical direction and in the horizontal direction and if we also include

the diagonal directions, then the Laplacian mask is given on the right hand side. So,

we find that using this particular mask which is shown on the left hand side, I can always

derive the expression that we have just shown.

Now, here I can have 2 different types of mask. Depending upon the polarity of the coefficient at the center pixel, I can have the center coefficient be either negative or positive. So, if the polarity of the center coefficient is positive, then I can have a mask of this form where the center pixel will have a positive polarity, but otherwise the nature of the mask remains the same.
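The two masks, and the polarity variants, can be written down and checked against the discrete formula directly. This is a minimal NumPy sketch; the helper name apply_mask is just illustrative:

```python
import numpy as np

# 4-neighbour Laplacian mask (horizontal + vertical directions only),
# with a negative centre coefficient.
lap4 = np.array([[0,  1, 0],
                 [1, -4, 1],
                 [0,  1, 0]], dtype=float)
# 8-neighbour variant that also includes the diagonal directions.
lap8 = np.array([[1,  1, 1],
                 [1, -8, 1],
                 [1,  1, 1]], dtype=float)

def apply_mask(img, mask):
    """Correlate a 3x3 mask with the image interior (no padding)."""
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for dy in range(3):
        for dx in range(3):
            out += mask[dy, dx] * img[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    return out

rng = np.random.default_rng(0)
img = rng.random((6, 6))

# Direct discrete formula: f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y)
direct = (img[2:, 1:-1] + img[:-2, 1:-1]
          + img[1:-1, 2:] + img[1:-1, :-2] - 4 * img[1:-1, 1:-1])

print(np.allclose(apply_mask(img, lap4), direct))    # mask matches the formula
print(np.allclose(apply_mask(img, -lap4), -direct))  # polarity-reversed variant
```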

Now, if I have these kinds of operations, then you find that the image that you get will just highlight the discontinuous regions in the image, whereas all the smooth regions in the image will be suppressed. So, this shows an original image.

On the right hand side, we have the output of the Laplacian, and if you look closely at this particular image, you will find that all the discontinuous regions have some value. However, this particular image cannot be displayed properly.
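One common way to make the signed Laplacian output displayable is a simple min-max rescale into the 8-bit range. This is a minimal sketch, not necessarily the exact scaling the lecture has in mind, and the function name is illustrative:

```python
import numpy as np

def scale_for_display(lap):
    """Shift and scale a signed Laplacian image into [0, 255] for display."""
    lap = lap.astype(float) - lap.min()   # shift so the minimum becomes 0
    if lap.max() > 0:
        lap = 255 * lap / lap.max()       # stretch so the maximum becomes 255
    return lap.astype(np.uint8)
```

For example, a Laplacian response ranging over [-4, 4] is mapped onto the full displayable range [0, 255].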

So, we have to have some scaling operation. In homomorphic filtering, I will have, say, gamma H greater than 1 and gamma L less than 1. This will amplify all the high frequency components, that is, the contribution of the reflectance, and it will attenuate the low frequency components, that is, the contribution due to the illumination. Now, using this type of filtering, the kind of result that we get is something like this.

Here, on the left hand side is the original image and on the right hand side is the enhanced image, and if you look in the boxes, you find that many of the details in the boxes which are not visible in the original image are now visible in the enhanced image. So, using such homomorphic filtering, we can even go for this kind of enhancement: the contribution due to illumination will be reduced, so even in the dark areas we can bring out the details.
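The homomorphic pipeline described above, taking the log, filtering in the frequency domain with gamma L less than 1 and gamma H greater than 1, then exponentiating, can be sketched as follows. This is a minimal sketch: the Gaussian-shaped transfer function and all parameter values are illustrative assumptions, not the lecture's exact filter.

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, d0=10.0, c=1.0):
    """Sketch of homomorphic filtering with an assumed high-emphasis
    transfer function H(u,v) = (gamma_h - gamma_l)*(1 - exp(-c*D^2/d0^2)) + gamma_l,
    so H -> gamma_l (< 1) at low frequencies (illumination) and
       H -> gamma_h (> 1) at high frequencies (reflectance)."""
    rows, cols = img.shape
    # 1. Take the log to separate illumination i and reflectance r:
    #    log f = log i + log r   (log1p avoids log(0) on dark pixels)
    log_img = np.log1p(img.astype(float))
    # 2. Filter in the frequency domain.
    F = np.fft.fftshift(np.fft.fft2(log_img))
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2          # squared distance from centre
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * D2 / d0 ** 2)) + gamma_l
    filtered = np.fft.ifft2(np.fft.ifftshift(H * F)).real
    # 3. Undo the log.
    return np.expm1(filtered)
```

Note the sanity check built into the design: a perfectly uniform image is pure illumination (all DC), so it simply gets attenuated by the factor gamma_l in the log domain.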

So with this, we come to the end of our discussion on image enhancement. Now, let us go to some questions from today's lecture.

1. A digital image contains an unwanted region of size 7 pixels. What should be the smoothing mask size to remove this region?
2. Why is the Laplacian operator normally used for image sharpening operations?
3. What is unsharp masking?
4. Give a 3 by 3 mask for performing unsharp masking in a single pass through an image.
5. State some applications of the first derivative in image processing.
6. What is ringing? Why do ideal low pass and high pass filters lead to ringing effects?
7. How does blurring vary with cut-off frequency? Does a Gaussian filter lead to a ringing effect?
8. Give the transfer function of the filter, and what is the principle of the homomorphic filter?

Thank you.
