# GiSHEO Processing TOOLS


## Multispectral Image Definition: ETM+

- **band 1**: Blue 0.45 - 0.52 µm
- **band 2**: Green 0.52 - 0.60 µm
- **band 3**: Red 0.63 - 0.69 µm
- **band 4**: Near Infrared 0.76 - 0.90 µm
- **band 5**: Middle Infrared 1.55 - 1.75 µm
- **band 60**: Thermal Infrared low gain 10.4 - 12.5 µm
- **band 61**: Thermal Infrared high gain 10.4 - 12.5 µm
- **band 7**: Middle Infrared 2.08 - 2.35 µm
- **band 8**: Panchromatic 0.52 - 0.90 µm

## Image Conversion

### Grayscale Conversion

- **aim**: convert a color image (RGB) to a gray one
- **tool**: grayscale
- **input image**: color image (3 channels)
- **output image**: gray image
- **parameters**: none
- **method**: Output(i,j)=0.299*InputR(i,j)+0.587*InputG(i,j)+0.114*InputB(i,j)
- **OpenCV function**: cvCvtColor(input,output,CV_RGB2GRAY)
- **implementation**: open the color image in OpenCV with the grayscale flag and save the image
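The weighted sum above can be sketched in plain Python; this is an illustrative per-pixel helper, not the GiSHEO `grayscale` tool itself:

```python
def to_gray(r, g, b):
    """Weighted-sum grayscale conversion: 0.299*R + 0.587*G + 0.114*B."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# The three channels contribute unequally, matching the eye's sensitivity:
print(to_gray(255, 0, 0))  # 76
print(to_gray(0, 255, 0))  # 150
print(to_gray(0, 0, 255))  # 29
```

Because the three weights sum to 1, a white pixel (255, 255, 255) stays at gray tone 255.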

### Grayscale to Color Conversion

- **aim**: convert a gray image to a color one (RGB)
- **tool**: colorate
- **input image**: gray image (it can handle any image; the process enforces grayscale)
- **output image**: color image (3 channels)
- **parameters**: grayscale to color mapping file (input_pal)
- **method**: assign a color to each tone of gray based on a mapping found in the input_pal file

### Binarization

- **aim**: convert a gray image to a black and white (binary) one
- **tool**: thresholding, thresholding_inv
- **input image**: any image; the process enforces grayscale
- **output image**: black and white image
- **parameters**: threshold, max_val
- **parameters range**: threshold and max_val in [0,255]
- **method**:
  - thresholding: any tone of gray higher than threshold takes the max_val value, while all other tones are set to 0
  - thresholding_inv: any tone of gray lower than threshold takes the max_val value, while all other tones are set to 0
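A minimal sketch of the two thresholding rules, with images represented as lists of rows (illustration only, not the GiSHEO implementation):

```python
def thresholding(gray, threshold, max_val):
    """Tones higher than `threshold` become max_val, all others 0."""
    return [[max_val if v > threshold else 0 for v in row] for row in gray]

def thresholding_inv(gray, threshold, max_val):
    """The inverse rule: tones at or below `threshold` become max_val."""
    return [[max_val if v <= threshold else 0 for v in row] for row in gray]

img = [[10, 200], [128, 60]]
print(thresholding(img, 127, 255))      # [[0, 255], [255, 0]]
print(thresholding_inv(img, 127, 255))  # [[255, 0], [0, 255]]
```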

## Image Transformation

### Contrast Enhancement

#### Linear Contrast Stretching

- **aim**: transform the values of each pixel by applying a linear transform (y=a*x+b)
- **tool**: stretch
- **remarks**: useful to bring the pixel values into a desired range (e.g. [0,255]) after a transformation which changes the value range; it enhances the contrast, but less than histogram equalization, and is sensitive to outlier values
- **input image**: gray/color image
- **output image**: transformed gray/color image
- **parameters**: coefficients of the linear transform (slope and intercept)
- **parameters range**: unbounded
- **default values**: a=1, b=0
- **method**: Output(i,j)=a*Input(i,j)+b
- **details**: in order to transform pixel values from the range [umin,umax] to the range [vmin,vmax] the coefficients are:
  - a=(vmax-vmin)/(umax-umin) and b=vmin-a*umin
  - for a color image the same transform can be applied to each channel
- **particularity**: for the range [0,255] the coefficients are:
  - a=255/(umax-umin) and b=-255*umin/(umax-umin)

**OpenCV function**: cvConvertScale(input,output,a,b)
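The coefficient formulas from the details above can be checked with a small Python sketch (illustrative only; no clipping or integer rounding is applied):

```python
def stretch_coefficients(umin, umax, vmin=0, vmax=255):
    """Slope a and intercept b mapping [umin, umax] onto [vmin, vmax]."""
    a = (vmax - vmin) / (umax - umin)
    b = vmin - a * umin
    return a, b

def stretch(gray, a, b):
    """Output(i,j) = a * Input(i,j) + b, applied pixel by pixel."""
    return [[a * v + b for v in row] for row in gray]

a, b = stretch_coefficients(50, 100)   # map [50, 100] onto [0, 255]
out = stretch([[50, 75, 100]], a, b)
print([round(v, 3) for v in out[0]])   # endpoints land on 0 and 255
```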

#### Histogram Equalization

- **aim**: redistribute the values of pixels such that their distribution is closer to a uniform one; this allows areas of lower local contrast to gain a higher contrast without affecting the global contrast
- **tool**: histogram_equalization
- **remarks**: useful to increase the global contrast, especially in images with backgrounds and foregrounds that are both bright or both dark; the main disadvantage is that it may increase the contrast of background noise, while decreasing the useful signal
- **input image**: gray/color image
- **output image**: transformed gray/color image
- **parameters**:
  - **zero**: parameter for eliminating tones from equalization (set to 1 to eliminate black)

- **method**:
  - **step 1**: compute the relative frequencies (pdf, the probability density function) for all possible values of pixels (e.g. in [0,255]): *pdf(v)=size(v)/size(input)*, where size(v) is the number of pixels having the value v and size(input) is the total number of pixels in the image; if the parameter zero is different from 0, only the values that actually occur are taken into consideration
  - **step 2**: compute the cumulative frequencies (cdf, the cumulative distribution function): *cdf(v)=pdf(0)+pdf(1)+…+pdf(v)*; if the parameter zero is different from 0, only the values that actually occur are taken into consideration
  - **step 3**: transform the image such that the cumulative distribution function of the transformed image is as close as possible to a linear function (this would correspond to an almost uniform histogram); this step generates an intermediate image having values in [0,1]: *y(i,j)=cdf(input(i,j))*
  - **step 4**: apply a linear transformation on y such that the values are again in the range of values of the input image: *output(i,j)=y(i,j)\*(max-min)+min*, where min and max are the minimal and maximal values in the input image

**remarks**:
- it can reduce the color depth; better applied to images with a color depth larger than 8 bits
- for a color image the same transformation can be applied to each channel

**OpenCV function**: cvEqualizeHist(input,output)
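The four steps can be traced on a tiny gray image with a pure-Python sketch; this mirrors the method above, including the rescaling of step 4, but is not the cvEqualizeHist implementation:

```python
def histogram_equalization(gray):
    """Steps 1-4 of the method above, on a gray image given as rows."""
    pixels = [v for row in gray for v in row]
    n = len(pixels)
    lo, hi = min(pixels), max(pixels)
    # step 1: relative frequencies (pdf) of the values that occur
    pdf = {}
    for v in pixels:
        pdf[v] = pdf.get(v, 0) + 1 / n
    # step 2: cumulative frequencies (cdf)
    cdf, running = {}, 0.0
    for v in sorted(pdf):
        running += pdf[v]
        cdf[v] = running
    # steps 3-4: y = cdf(input) in [0,1], rescaled back to [lo, hi]
    return [[round(cdf[v] * (hi - lo) + lo) for v in row] for row in gray]

# Three equally frequent tones are spread across the input's value range:
print(histogram_equalization([[0, 0, 2], [2, 4, 4]]))  # [[1, 1, 3], [3, 4, 4]]
```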

### Image Composition

- **aim**: merge three one-banded images into one three-banded geo-tiff
- **tool**: merge
- **input image**: one-banded geo-tiffs
- **output image**: color image (three-banded geo-tiff)
- **parameters**: input_band1 (blue), input_band2 (green), input_band3 (red)

### Image Extraction

- **aim**: extract bands out of multi-banded images
- **input image**: multi-banded geo-tiff
- **output image**: one-banded geo-tiff
- **tools**: extract_band, extract_bands
- **parameters for extract_bands**: bands (values separated by commas)

### Filters

#### Smoothing (Median, Gauss, Bilateral)

- **aim**: smooth (blur) the image in order to eliminate noise and prepare it for further processing (e.g. edge detection, segmentation); the most used is the Gaussian filter, but it does not preserve the edges very well; a variant which preserves the edges is the bilateral filter
- **remarks**: most variants are based on a convolution with a kernel having specific values (the larger the kernel matrix is, the stronger the smoothing effect); the main problem with smoothing is that it can "destroy" the edges
- **input image**: gray/color image
- **output image**: transformed gray/color image
- **parameters**: specific to the filter
- **method**: Output=convolve(Input, Kernel), i.e. Output(i,j) = sum_{k in [-d,d]} sum_{l in [-d,d]} Input(i+k,j+l)*Kernel(k,l)
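The convolution formula can be written directly as nested loops; this sketch computes only the "valid" region where the kernel fits entirely inside the image (border handling differs between implementations):

```python
def convolve(image, kernel):
    """Output(i,j) = sum over k,l in [-d,d] of Input(i+k,j+l)*Kernel(k,l)."""
    d = len(kernel) // 2          # kernel "radius"
    h, w = len(image), len(image[0])
    out = []
    for i in range(d, h - d):
        row = []
        for j in range(d, w - d):
            row.append(sum(image[i + k][j + l] * kernel[k + d][l + d]
                           for k in range(-d, d + 1)
                           for l in range(-d, d + 1)))
        out.append(row)
    return out

# Convolving with an all-ones 3x3 kernel sums the whole neighborhood:
ones = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(convolve([[1, 2, 3], [4, 5, 6], [7, 8, 9]], ones))  # [[45]]
```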

**Details:**

##### Median filter

- each pixel in the output image is the median of the pixel values in the neighborhood defined by the kernel matrix; no convolution operation is involved; it is appropriate for "salt and pepper" noise
- **tool**: median_smoothing
- **parameters**: size of the neighborhood (kernel)
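A sketch of the median rule on the interior of a small image (not the median_smoothing tool itself; borders are simply skipped here):

```python
def median_smoothing(image, size=3):
    """Each output pixel is the median of its size x size neighborhood."""
    d = size // 2
    h, w = len(image), len(image[0])
    out = []
    for i in range(d, h - d):
        row = []
        for j in range(d, w - d):
            neigh = sorted(image[i + k][j + l]
                           for k in range(-d, d + 1)
                           for l in range(-d, d + 1))
            row.append(neigh[len(neigh) // 2])
        out.append(row)
    return out

# A single "salt" pixel (255) in a flat region is removed entirely:
noisy = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
print(median_smoothing(noisy))  # [[10]]
```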

##### Gaussian filter

- the elements of the kernel matrix are values computed using the Gauss function:
K(k,l) = exp(-(k^2+l^2)/(2*sigma^2)) / (2*pi*sigma^2)
thus the value of an output pixel will be a weighted average of the pixels in its neighborhood (unlike the uniform filter, which is based on the arithmetic mean)
- **tool**: gaussian_smoothing
- **parameters**: size of the neighborhood (kernel), parameter sigma
- **remark**: this bidimensional filter can be decomposed into two one-dimensional filters (horizontal and vertical) with different values of the parameter sigma
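The kernel formula can be evaluated numerically; this sketch also renormalizes the sampled weights so they sum to 1, as discrete smoothing kernels require (the constant factor then cancels):

```python
import math

def gaussian_kernel(size, sigma):
    """Sample K(k,l) = exp(-(k^2+l^2)/(2*sigma^2)) / (2*pi*sigma^2)
    on a size x size grid, then renormalize the weights to sum to 1."""
    d = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          / (2 * math.pi * sigma * sigma)
          for x in range(-d, d + 1)] for y in range(-d, d + 1)]
    s = sum(v for row in k for v in row)
    return [[v / s for v in row] for row in k]

kernel = gaussian_kernel(3, 1.0)
print(kernel[1][1] > kernel[0][0])  # True: the center has the largest weight
print(round(sum(v for row in kernel for v in row), 6))  # 1.0
```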

##### Bilateral filter

- similar to the Gaussian filter, but the weights (elements of the kernel matrix) depend not only on the distance between the positions of the pixels but also on the distance between their values (thus neighboring pixels with similar values will have a high weight, while those with values different from the processed pixel will have a small weight); the kernel matrix can be seen as variable:
K(i,j,k,l) = exp(-(k^2+l^2)/(2*sigma_p^2)) / (2*pi*sigma_p^2) * exp(-(Input(i+k,j+l)-Input(i,j))^2/(2*sigma_v^2)) / (2*pi*sigma_v^2)

- **tool**: bilateral_smoothing
- **parameters**: size of the neighborhood (kernel), parameter sigma_p (similar to that of the Gaussian filter), parameter sigma_v (the larger this second parameter is, the broader the range of intensities included in the smoothing, and thus the more extreme a discontinuity must be in order to be preserved)

- **OpenCV function**: cvSmooth(input, output, method, param1, param2, param3, param4)
- **method**: CV_BLUR (uniform), CV_BLUR_NO_SCALE (uniform without normalization), CV_MEDIAN (median), CV_GAUSS (gaussian), CV_BILATERAL (bilateral)

*CV_BLUR, CV_BLUR_NO_SCALE, CV_MEDIAN*: param1 and param2 define the size of the neighborhood (kernel matrix)
- **parameters range**: param1, param2 in {3,5,7,9,11,…}
- **default values**: param1=3, param2=3

*CV_GAUSS*: param1 and param2 define the size of the neighborhood (kernel matrix), param3 defines the value of sigma, param4 is used for asymmetric filters and corresponds to sigma_y, while param3 then corresponds to sigma_x
- **parameters range**:
  - param1, param2: odd natural values {3,5,7,9,11,…}
  - param3, param4: strictly positive values

**default values**: param1=3, param2=3, param3=1, param4=1

*CV_BILATERAL*: param1 corresponds to sigma_v (sigma in color space) while param2 corresponds to sigma_p (sigma in pixel coordinate space); the neighborhood is always 3x3
- **parameters range**: param1 in {1,…,255}, param2 in (1, min(imageWidth, imageHeight))
- **default values**: param1=20, param2=3

**remarks**:
- simple blur and Gaussian blur support 1- or 3-channel, 8-bit and 32-bit floating point images; these two methods can process images in place
- median and bilateral filters work with 1- or 3-channel 8-bit images and cannot process images in place

#### Edge detection (Sobel, Canny)

- **aim**: enhance/identify the edges in an image based on the gradient values; most of the methods are based on a convolution with a kernel having specific values
- **input image**: gray image
- **output image**: gray (Sobel) / black and white (Canny) image
- **parameters**: specific to the filter

**Methods:**

##### Sobel

- **description**: apply a convolution with a kernel matrix whose elements are chosen such that estimates of the local directional gradients are computed; the kernel matrix can be chosen such that specifically oriented edges (e.g. horizontal, vertical) are detected
- **tool**: sobel
- **OpenCV function**: cvSobel(input, output, xorder, yorder, kernelSize)
  - xorder and yorder can take values in {0,1,2} and specify the derivative order for each direction (0 means no derivative at all); kernelSize should be an odd value specifying the number of rows (and columns) of the kernel matrix

- **parameters**: param1 (xorder), param2 (yorder), param3 (kernelSize)
- **parameters range**: param1, param2 in {0,1,2}, param3 in {3,5,7,9,…} (odd natural values)
- **default values**: param1=1, param2=1, param3=3
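The two classic 3x3 Sobel kernels estimate the horizontal and vertical derivatives; this sketch combines them as |Gx|+|Gy|, one common magnitude approximation (not the cvSobel implementation, which returns each derivative separately):

```python
def sobel(image):
    """Approximate gradient magnitude |Gx| + |Gy| on the image interior."""
    gx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal derivative
    gy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical derivative
    h, w = len(image), len(image[0])
    out = []
    for i in range(1, h - 1):
        row = []
        for j in range(1, w - 1):
            sx = sum(image[i + k][j + l] * gx[k + 1][l + 1]
                     for k in (-1, 0, 1) for l in (-1, 0, 1))
            sy = sum(image[i + k][j + l] * gy[k + 1][l + 1]
                     for k in (-1, 0, 1) for l in (-1, 0, 1))
            row.append(abs(sx) + abs(sy))
        out.append(row)
    return out

# A vertical step edge yields a strong response; a flat image yields none:
step = [[0, 0, 9, 9]] * 3
print(sobel(step))  # [[36, 36]]
```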

##### Canny

- **description**: a sequence of operations which produces a black/white image containing the edges (in white) of the original image; it usually consists of several steps:
  - smoothing by a Gaussian filter;
  - computing gradient information in four directions (0, 45, 90 and 135 degrees) by using Sobel filters (this generates a gray image);
  - identifying potential edge pixels (by generating a new black/white edge image) by finding local maxima in each direction; the local maxima are identified in a 3x3 neighbourhood;
  - applying a "hysteresis" thresholding on the gradient image by using the edge image and two threshold values, tmin and tmax: all edge pixels having gradient values larger than tmax are considered to be genuine edge pixels; all edge pixels having gradients smaller than tmin are removed; all other edge pixels are kept only if they are connected to a selected edge pixel; the threshold values depend on the image and on the quality of the edges; large values of tmax will detect only the obvious edges, while small values will identify even edges corresponding to noise; the ratio between tmax and tmin should be 2:1 or 3:1

- **tool**: canny
- **OpenCV function**: cvCanny(input,output,tmin,tmax,kernelSize), where tmin and tmax are the thresholds and kernelSize should be an odd value specifying the number of rows (and columns) of the kernel matrix used by the Sobel filter; cvCanny does not apply the initial Gaussian filter
- **parameters**: param1 (tmin), param2 (tmax), param3 (kernelSize)
- **parameters range**: param1, param2 in {0,…,255} (param1<param2), param3 in {3,5,7,9,…} (odd natural values)
- **default values**: param1=20, param2=50, param3=3

### Emboss

- **aim**: technique where each pixel of an image is replaced either by a highlight or a shadow, depending on light/dark boundaries in the original image
- **input image**: any image; the process enforces grayscale
- **output image**: grayscale image
- **description**: emboss the image
- **tool**: emboss

### Morphological Operations (Dilation, Erosion, Closing, Opening)

#### Dilation

- **aim**: expanding the (dark) shapes contained in the input image
- **input image**: any image
- **output image**: same type as the input image
- **description**: apply a dilation to the image
- **tool**: dilate

#### Erosion

- **aim**: reducing the (dark) shapes contained in the input image
- **input image**: any image
- **output image**: same type as the input image
- **description**: apply an erosion to the image
- **tool**: erode

#### Closing

- **aim**: remove small holes in the foreground, changing small islands of background into foreground
- **input image**: any image
- **output image**: same type as the input image
- **description**: apply an erosion of the dilation of the image
- **tool**: closing

#### Opening

- **aim**: remove small objects from the foreground (usually taken as the dark pixels) of an image, placing them in the background
- **input image**: any image
- **output image**: same type as the input image
- **description**: apply a dilation of the erosion of the image
- **tool**: opening
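The four operations relate exactly as described (opening = dilation of the erosion, closing = erosion of the dilation). A binary sketch with a 3x3 structuring element, taking 1 as foreground for readability (the GiSHEO tools treat the dark pixels as the shapes):

```python
def dilate(img):
    """3x3 dilation: a pixel is set if any neighbor in the window is set.
    Border pixels are left as 0 in this sketch."""
    h, w = len(img), len(img[0])
    return [[int(any(img[i + k][j + l]
                     for k in (-1, 0, 1) for l in (-1, 0, 1)))
             if 0 < i < h - 1 and 0 < j < w - 1 else 0
             for j in range(w)] for i in range(h)]

def erode(img):
    """3x3 erosion: a pixel survives only if the whole window is set."""
    h, w = len(img), len(img[0])
    return [[int(all(img[i + k][j + l]
                     for k in (-1, 0, 1) for l in (-1, 0, 1)))
             if 0 < i < h - 1 and 0 < j < w - 1 else 0
             for j in range(w)] for i in range(h)]

def opening(img):   # removes objects smaller than the structuring element
    return dilate(erode(img))

def closing(img):   # fills holes smaller than the structuring element
    return erode(dilate(img))

# A lone foreground pixel is wiped out by opening:
dot = [[0] * 5 for _ in range(5)]
dot[2][2] = 1
print(opening(dot))  # all zeros
```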

### Custom Filter

- **aim**: apply a custom convolution filter to the image
- **input image**: gray/color image
- **output image**: transformed gray/color image
- **parameters**: height of the kernel, width of the kernel, kernel elements
- **method**: Output=convolve(Input, Kernel), i.e. Output(i,j) = sum_{k in [-d,d]} sum_{l in [-d,d]} Input(i+k,j+l)*Kernel(k,l)
- **tool**: custom_filter

## Image Analysis

### Spectral Indices

#### Ndvi

- **aim**: the Normalized Difference Vegetation Index (NDVI) is a simple numerical indicator that can be used to analyze remote sensing measurements, typically but not necessarily from a space platform, and assess whether the target being observed contains live green vegetation or not
- **input image**: NIR band and Red band
- **output image**: transformed grayscale image
- **method**: NDVI=(NIR-RED)/(NIR+RED)
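A per-pixel sketch of the NDVI formula (illustrative only; the real tool also handles band geo-referencing and rescales the [-1,1] result to gray tones):

```python
def ndvi(nir, red):
    """NDVI = (NIR - RED) / (NIR + RED) per pixel, in [-1, 1].
    Pixels where both bands are 0 are mapped to 0 to avoid division by zero."""
    h, w = len(nir), len(nir[0])
    return [[(nir[i][j] - red[i][j]) / (nir[i][j] + red[i][j])
             if nir[i][j] + red[i][j] else 0.0
             for j in range(w)] for i in range(h)]

# Vegetation reflects strongly in NIR and absorbs red, so NDVI is near 1;
# a pixel with equal reflectance in both bands gives NDVI = 0:
print(ndvi([[200, 50]], [[40, 50]]))  # [[0.666..., 0.0]]
```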

#### Ndwi

- **aim**: the Normalized Difference Water Index (NDWI) is a simple numerical indicator that can be used to analyze remote sensing measurements, typically but not necessarily from a space platform, and assess whether the target being observed contains water or not
- **input image**: Middle infrared band 7 and Blue band
- **output image**: transformed grayscale image
- **method**: if (BLUE-BAND7)/(BLUE+BAND7) > 0.5 then NDWI = 255 else NDWI = 0

#### Ndwi_threshold

- **aim**: the Normalized Difference Water Index (NDWI) is a simple numerical indicator that can be used to analyze remote sensing measurements, typically but not necessarily from a space platform, and assess whether the target being observed contains water or not
- **input image**: NIR band or Middle infrared band 7
- **output image**: transformed grayscale image
- **method**: NDWI=cvThreshold(source, source, 30, 255, CV_THRESH_BINARY_INV)

#### Rvi

- **aim**: the Ratio Vegetation Index (RVI) is a simple numerical indicator that can be used to analyze remote sensing measurements, typically but not necessarily from a space platform, and assess whether the target being observed contains live green vegetation or not
- **input image**: NIR band and Red band
- **output image**: transformed grayscale image
- **method**: RVI=NIR/RED

#### Nrvi

- **aim**: Normalized Ratio Vegetation Index (NRVI)
- **input image**: NIR band and Red band
- **output image**: transformed grayscale image
- **method**: NRVI=(RVI-1)/(RVI+1)

#### Ttvi

- **aim**: Thiam's Transformed Vegetation Index (TTVI)
- **input image**: NIR band and Red band
- **output image**: transformed grayscale image
- **method**: TTVI=(ABS(NDVI+0.50))^(1/2), i.e. the square root of |NDVI+0.50|
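The three ratio-based indices, sketched per pixel. Note that NRVI algebraically reduces to NDVI, since (NIR/RED - 1)/(NIR/RED + 1) = (NIR - RED)/(NIR + RED):

```python
import math

def rvi(nir, red):
    return nir / red                   # RVI = NIR / RED

def nrvi(nir, red):
    r = rvi(nir, red)
    return (r - 1) / (r + 1)           # NRVI = (RVI - 1) / (RVI + 1)

def ttvi(nir, red):
    ndvi = (nir - red) / (nir + red)
    return math.sqrt(abs(ndvi + 0.5))  # TTVI = sqrt(|NDVI + 0.5|)

print(rvi(200, 40))    # 5.0
print(nrvi(200, 40))   # 0.666..., the same value NDVI would give
print(ttvi(200, 40))   # sqrt(2/3 + 1/2) ≈ 1.08
```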

### Structures Detection

#### Contours

- **aim**: find contours in the input image
- **input image**: inverted threshold (threshold value = 70) or NDWI image
- **output image**: color image
- **tool**: contours
- **OpenCV function**: cvFindContours(CvArr* img, CvMemStorage* storage, CvSeq** firstContour, int headerSize=sizeof(CvContour), CvContourRetrievalMode mode=CV_RETR_LIST, CvChainApproxMethod method=CV_CHAIN_APPROX_SIMPLE)
- **parameters**: ndvi file

#### Contours (discrimination between lakes and rivers)

- **aim**: find contours in the input image and divide them into possible lakes and rivers based on area and perimeter
- **input image**: inverted threshold (threshold value = 70) or NDWI image
- **output image**: color image
- **description**: after the contours have been found with cvFindContours, they are divided into possible lakes and rivers based on the ratio of the area and the perimeter of the contour (if it is smaller than the discriminant then it is a river, otherwise it is a lake)
- **tool**: contours
- **OpenCV function**: cvFindContours(CvArr* img, CvMemStorage* storage, CvSeq** firstContour, int headerSize=sizeof(CvContour), CvContourRetrievalMode mode=CV_RETR_LIST, CvChainApproxMethod method=CV_CHAIN_APPROX_SIMPLE)
- **parameters**: ndvi file, discriminant
- **parameters range**: discriminant in {0,…,255}

#### Hough transform

- **aim**: identifies parameterized curves in an image by searching for elements in the parameter space which are associated with a significant number of active pixels in the edge image; most variants are based on a structure called an accumulator, having entries for all possible values in the parameter space; it is a time and memory consuming operation; there are both deterministic and probabilistic variants
- **input image**: black/white edge image (obtained by applying an edge detection filter)
- **output image**: list of elements identifying the curves (depends on the implementation)
- **parameters**: specific to the variant

**Variants:**

**lines detection**
- **tool**: hough_lines
- **OpenCV function**: lines = cvHoughLines2(input, storage, method, dRho, dTheta, threshold, param1, param2)
- **lines**: structure containing information on the detected lines (depending on the method); in the case of the deterministic method each element is a pair containing the distance between the line and the origin (rho) and the angle between the line and Ox (theta); in the case of the probabilistic method each element is a pair of points corresponding to the extremities of the line segment
- **method**: deterministic (CV_HOUGH_STANDARD) or probabilistic (CV_HOUGH_PROBABILISTIC)
- **drho**: step used in scanning the range of values of the parameter rho
- **dtheta**: step used in scanning the range of values of the parameter theta
- **threshold**: minimal value of the accumulator which allows deciding that the corresponding parameters define a line
- **param1**: minimum line length detected in the probabilistic variant
- **param2**: maximal gap between line segments to join, used in the probabilistic variant
- **parameters range**:
  - drho in {1,2,…,min(imWidth,imHeight)}
  - dtheta in [0,Pi]
  - threshold, param1, param2 in {1,2,…,sqrt(imWidth^2+imHeight^2)}

**default values**:- drho=1
- dtheta=PI/180
- threshold=50
- param1=50
- param2=10

**circles detection**
- **tool**: hough_circles
- **OpenCV function**: circles=cvHoughCircles(input, storage, method, dp, min_dist, param1=100, param2=100, min_radius=0, max_radius=0)
- **circles**: structure containing the list of parameters corresponding to all identified circles; each element in the list is a triple containing (x coordinate of the center, y coordinate of the center, radius)
- **storage**: intermediate memory storage
- *dp*: parameter defining the accumulator resolution; the accumulator size is the image size divided by dp
- *min_dist*: the minimal distance between two circles; it should be related to the radii of the circles
- *param1*: the maximal threshold used in the Canny filter (implicitly called by cvHoughCircles); the minimal threshold is set to half of the maximal one
- *param2*: threshold for the accumulator value; if the value in the accumulator is larger than this threshold then the corresponding point is considered to be a candidate circle center; if the value of this threshold is high then some circles could be missed, and if it is small then false circles are detected; it should be related to the perimeter of the circles to be detected
- *min_radius*: the minimal value of the radius
- *max_radius*: the maximal value of the radius

**parameters range**:- dp≥1
- mindist in {2,3,…,min(imWidth,imHeight)/2}
- param1 in {1,…,255}
- param2 in {1,…,min(imWidth,imHeight)/2}
- min_radius in {1,…, min(imWidth,imHeight)/2}
- max_radius in {1,…, min(imWidth,imHeight)/2} (min_radius<max_radius)

**default values**:- dp=1
- param1=50
- param2=200
- min_radius=50
- max_radius=100

### Classification

#### Segmentation (pyramidal, kmeans)

**aim**: identify almost homogeneous regions in an image by assigning a single value to all pixels in such a region; this allows discriminating between regions with different textures and identifying objects in the image; the output image is usually characterized by a smaller number of values.

- **input image**: gray/color image
- **output image**: gray/color image
- **parameters**: specific to the method

**Methods:**

- pyramidal segmentation, based on a sequence of Gaussian filtering + down-sampling (which eliminates all even rows and columns), followed by a sequence of the same length of up-sampling (insertion of 0 rows and columns on even positions) + Gaussian filtering
- **tool**: segmentation
- **OpenCV function**: cvPyrSegmentation(input, output, storage, components, level, threshold1, threshold2)
  - storage: structure used to store intermediate images
  - components: structure containing the "segments" identified in the image
  - level: number of steps in down(up)-sampling; the image width and height should be divisible by 2^level
  - threshold1
  - threshold2

- the k-means clustering algorithm is commonly used in computer vision as a form of image segmentation. The results of the segmentation are used to aid border detection and object recognition. In this context, the standard Euclidean distance is usually insufficient for forming the clusters. Instead, a weighted distance measure utilizing pixel coordinates, RGB pixel color and/or intensity, and image texture is commonly used.
- **tool**: kmeans
- **OpenCV function**: void cvKMeans2(const CvArr* samples, int numClusters, CvArr* clusterIdx, CvTermCriteria termcrit)
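The clustering step itself can be illustrated on bare intensities with plain Lloyd iterations (a hypothetical 1-D sketch; cvKMeans2 works on arbitrary feature vectors, e.g. the weighted coordinate/color/texture features mentioned above):

```python
def kmeans_1d(values, centers, iterations=10):
    """Lloyd iterations on scalar intensities: assign each value to the
    nearest center, then move each center to the mean of its cluster."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda c: abs(v - centers[c]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two obvious intensity groups, around 10 and 200:
pixels = [8, 10, 12, 198, 200, 202]
print(kmeans_1d(pixels, [0, 255]))  # [10.0, 200.0]
```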

### Change Detection

#### Difference

- **aim**: compute the difference between two images
- **input image**: any image; the process enforces grayscale
- **output image**: grayscale image
- **tool**: diff
- **OpenCV function**: cvSub(im1_float, im2_float, diff, NULL)

#### Modification Detection

- **aim**: emphasize modifications in a difference image
- **input image**: grayscale image output by the Difference process
- **output image**: grayscale image
- **tool**: modified
- **OpenCV function**: cvSub(im1_float, im2_float, diff, NULL)