Why do edge detection filters sum to 0 whereas blur filters sum to 1?

I am currently learning about filters in computer vision. I can see that the elements of an edge detection kernel sum to 0, whereas those of a blurring kernel sum to 1.

I am wondering, does it have to do with the fact that one is a high-pass filter and the other is a low-pass filter? Is there some kind of rule or explanation?

Thanks in advance!



Solution 1:

Blur filters must preserve the mean image intensity, which is why their kernels sum to 1. If you look at a blur kernel's frequency response, you'll see that its zero-frequency (DC) component is 1. That DC component is simply the sum of the kernel elements, and its being 1 means the DC component of the image is not modified by the convolution. So yes, this is a property of any low-pass filter: if the filter changed the zero frequency, it would not be letting the lowest frequency through unaltered, and it would not be low-pass.
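As a quick sanity check (a minimal sketch assuming NumPy; the 3x3 box blur is just one example of a normalized kernel), the DC bin of the kernel's 2-D DFT is by definition the plain sum of its elements:

```python
import numpy as np

box_blur = np.full((3, 3), 1 / 9)   # classic 3x3 mean filter
print(box_blur.sum())               # ~1.0 -> mean intensity preserved

# The zero-frequency (DC) bin of the DFT equals the sum of the elements:
response = np.fft.fft2(box_blur)
print(response[0, 0].real)          # ~1.0, same as the kernel sum
```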

What you call edge detection filters are really estimators of the derivative. Their kernels sum to zero because of the definition of the derivative: the slope at a point does not depend on how high up that point sits. Adding a constant to the function (or image) does not change its derivative; the derivatives of I and I+1 are identical. A derivative filter therefore cannot preserve the mean image intensity: if its kernel summed to anything other than zero, the constant offset would leak into the output, so dI/dx and d(I+1)/dx would give different results, which would make no sense.
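A sketch of that invariance (assuming NumPy and SciPy; the Sobel kernel is one standard derivative estimator), showing that a zero-sum kernel is blind to constant offsets:

```python
import numpy as np
from scipy.ndimage import convolve

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
print(sobel_x.sum())                         # 0

rng = np.random.default_rng(0)
image = rng.random((16, 16))

dx = convolve(image, sobel_x)
dx_shifted = convolve(image + 1.0, sobel_x)  # same image plus a constant
print(np.allclose(dx, dx_shifted))           # True: dI/dx == d(I+1)/dx
```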

The Laplace filter (not an edge detector) is a generalized second-order derivative, and the same reasoning as above applies: its kernel also sums to zero.
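The same check for the Laplacian (assuming the common 4-neighbour stencil): the elements sum to zero, so flat regions produce no response at all.

```python
import numpy as np
from scipy.ndimage import convolve

laplace = np.array([[0,  1, 0],
                    [1, -4, 1],
                    [0,  1, 0]])
print(laplace.sum())                          # 0

flat = np.full((8, 8), 5.0)                   # constant image
print(np.abs(convolve(flat, laplace)).max())  # 0.0: no edges, no output
```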
