Monday, May 20, 2019

Lab 8: Spectral Signature Analysis & Resource Monitoring

Goal

The goal of this lab is to become familiar with spectral reflectance signatures of different materials on and near the Earth's surface. Health monitoring of vegetation and soil will also be introduced using NDVI and ferrous mineral indices.

Methods

Part 1: Spectral Signature Analysis

Spectral signatures can be collected either in the field or from software like ERDAS Imagine. In this lab, the spectral signatures from the image below of Eau Claire and Chippewa counties will be taken using ERDAS Imagine.


Figure 1: Image from which spectral signatures are taken in this lab

First, the polygon tool from the drawing menu was used to create a small polygon completely within the water of Lake Wissota (not including any land edges). This created an area of interest (AOI) from which a signature could be taken, as shown in Figure 2. 

Figure 2: Polygon created as AOI in open water of Lake Wissota

The "Supervised" window was opened and "Signature Editor" was selected. The "Create New Signature from AOI" was selected and the signature from the polygon above was automatically added to the list. The name and color were changed to reflect the source of the signature.

Figure 3: Signature Editor

This process was repeated for running water, deciduous forest, riparian forest, coniferous forest, crops, dry soil, moist soil, rock, asphalt highway, airport runway, and parking lot, which, together with the standing water of Lake Wissota, gave a total of 12 signatures.

To display a chart, "Display Mean Plot Window" was selected from the Signature Editor. Chart options were changed so that chart background was set to white for better visibility of the signature. 
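Conceptually, the mean signature that the Signature Editor computes for an AOI is just the per-band average of the pixels inside the polygon. A minimal sketch with invented arrays (not the lab's imagery):

```python
import numpy as np

# Toy sketch of an AOI mean signature: the per-band average of the
# pixels inside the polygon. The image and mask here are made up.
def mean_signature(image, aoi_mask):
    # image: (bands, rows, cols); aoi_mask: (rows, cols) boolean array
    return image[:, aoi_mask].mean(axis=1)

image = np.arange(18, dtype=float).reshape(2, 3, 3)  # 2 bands, 3x3 pixels
mask = np.zeros((3, 3), dtype=bool)
mask[:, 0] = True  # pretend the AOI covers the left column
sig = mean_signature(image, mask)  # one mean value per band
```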

Part 2: Resource Monitoring

Section 1: Vegetation Health Monitoring

Vegetation health monitoring is often done using the Normalized Difference Vegetation Index (NDVI). The equation for NDVI is:

NDVI = (NIR − Red) / (NIR + Red)
The NDVI for this lab was performed on the image of Eau Claire and Chippewa Counties shown in Figure 1.

To use NDVI in ERDAS Imagine, it is selected from the dropdown menu for "Unsupervised" in the Raster tab. The specifications for this lab are shown in Figure 4.

Figure 4: Specifications for NDVI

A map was then created and symbolized appropriately in ArcMap, which can be seen in the results section.
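As a rough sketch of the arithmetic ERDAS performs, the NDVI ratio can be computed per pixel with NumPy. The band arrays below are invented, not values from the lab image:

```python
import numpy as np

# Per-pixel NDVI = (NIR - Red) / (NIR + Red), with invented band values.
def ndvi(nir, red):
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    # Guard against division by zero where NIR + Red == 0
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

nir = np.array([[0.5, 0.8], [0.05, 0.0]])
red = np.array([[0.1, 0.1], [0.04, 0.0]])
result = ndvi(nir, red)  # high values = vigorous vegetation
```

Healthy vegetation scores high because it reflects strongly in NIR and absorbs red, exactly the pattern seen in the forest signatures above.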

Section 2: Soil Health Monitoring

Soil health monitoring is often done using the Ferrous Mineral Index (FM). The equation for FM is:

FM = MIR / NIR

The same image was also used for the Ferrous Mineral Index. To use FM in ERDAS Imagine, it is selected from the dropdown menu for "Unsupervised" in the Raster tab. The specifications for this lab are shown in Figure 5.

Figure 5: Specifications for FM

A map was then created and symbolized appropriately in ArcMap, which can be seen in the results section.
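Assuming FM is computed as the ratio of a mid-infrared band to the NIR band (the standard ferrous-minerals band ratio), the calculation can be sketched the same way, again with invented band values:

```python
import numpy as np

# Ferrous mineral ratio sketch: FM = MIR / NIR, with made-up band values.
def ferrous(mir, nir):
    mir = mir.astype(float)
    nir = nir.astype(float)
    out = np.zeros_like(mir)
    # Guard against division by zero where NIR == 0
    np.divide(mir, nir, out=out, where=nir != 0)
    return out

mir = np.array([0.4, 0.2, 0.3])
nir = np.array([0.2, 0.4, 0.0])
fm = ferrous(mir, nir)  # higher ratios suggest more ferrous minerals
```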

Results

The spectral signatures for 12 individual features are shown below. Band 1 corresponds with blue, band 2 with green, band 3 with red, band 4 with NIR, and bands 5 and 6 with MIR.
Figure 6: Spectral signature of standing water

Figure 7: Spectral signature of running water

Figure 8: Spectral signature of deciduous forest

Figure 9: Spectral signature of coniferous forest

Figure 10: Spectral signature of riparian forest

Figure 11: Spectral signature of crops

Figure 12: Spectral signature of dry soil

Figure 13: Spectral signature of moist soil

Figure 14: Spectral signature of rock

Figure 15: Spectral signature of asphalt highway

Figure 16: Spectral signature of airport runway

Figure 17: Spectral signature of parking lot

In order to compare the spectral signatures of moist and dry soil, both were displayed on the same mean plot chart. As expected, because of the high absorbance of water, the dry soil showed higher reflectance.

Figure 18: Comparison of moist and dry soil spectral signatures

An even broader comparison was made of all 12 spectral signatures together, and they follow expected patterns. Standing and running water vary slightly in the visible spectrum but share similar signatures that decrease in reflectance as wavelength increases. Rock, airport runway, and dry soil have similar signatures, probably because they are composed of similar materials. Riparian forest and deciduous forest are similar, likely because the riparian forest contains mostly deciduous trees. Forest in general is unique in that its spectral signature is low in the visible and spikes in the NIR. All other surfaces tend to dip in the NIR, especially dry, rocky surfaces. Water also has a unique spectral signature, with some reflectance in the visible that dips to almost none in the NIR. Other signatures have more ups and downs, especially in the NIR. Although similar, riparian/deciduous forest and coniferous forest differ due to the surface area of the needles/leaves, the shape of the canopy, and the direction of the branches.
Figure 19: All 12 spectral signatures

The NDVI gives insight into areas of high and low vegetation, as seen in the map in Figure 20. Areas of high vegetation are brighter because, compared to other spectral signatures, they have high NIR reflectance and low red reflectance. Areas that are medium gray and black have little or no vegetation. They are mostly water, which absorbs most of the NIR band.

Figure 20: NDVI map of Eau Claire and Chippewa Counties

The FM map shows areas of high and low ferrous mineral content, as seen in Figure 21. Most of the surface ferrous minerals are west of Lake Wissota in Chippewa Falls and lie mainly in what look like uncultivated agricultural fields. Urban areas lack ferrous minerals on the surface, and water shows low values.
Figure 21: Ferrous Mineral map of Eau Claire and Chippewa Counties

Sources

Earth Resources Observation and Science Center, United States Geological Survey. (2019). [Satellite images of Eau Claire and Chippewa counties]. 

Wilson, C. (2019). Spectral signature analysis and resource monitoring. Geog 338: Remote Sensing of the Environment Lab 8.

Lab 7: Photogrammetry


Goal

The purpose of this lab was to become acquainted with the tools used to turn distorted aerial and satellite images into photogrammetrically correct images. Mathematics, including measurements of areas and perimeters as well as relief displacement, will be used. Stereoscopy and orthorectification will also be introduced.

Methods

Part 1: Scales, Measurements, and Relief Displacements

Section 1: Calculating scale of nearly vertical aerial photographs

The image below was used to practice scale calculation. With a known ground distance (8,822.47 ft), the photo distance from A to B can be measured with a ruler to obtain the scale of the photo:
2.625”/8,822.47’ = 2.625”/105,869.64” = 1/40,331.29
S ≈ 1:40,331
Figure 1: Image with known ground distance and photo distance measured with ruler.

Scale can also be determined from the flying altitude above sea level, the elevation of the land above sea level, and the lens focal length. The scale of the image below was found using the following equation:
152 mm/(20,000 − 796) ft = 152 mm/19,204 ft = 152/5,853,379.2 = 1/38,509
S ≈ 1:38,509
Figure 2: Image with known altitude above sea level, elevation of land above sea level, and lens focal length.
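Both scale calculations above reduce to a unit conversion followed by a division. A minimal sketch using the lab's numbers:

```python
# Photo scale two ways, mirroring the arithmetic above.
# Units are converted to a common base before dividing.

def scale_from_ground_distance(photo_in, ground_ft):
    # 12 inches per foot; scale denominator = ground distance / photo distance
    return (ground_ft * 12.0) / photo_in

def scale_from_altitude(focal_mm, altitude_ft, terrain_ft):
    # Flying height above terrain, converted to mm (304.8 mm per foot)
    height_mm = (altitude_ft - terrain_ft) * 304.8
    return height_mm / focal_mm

s1 = scale_from_ground_distance(2.625, 8822.47)   # denominator near 40,331
s2 = scale_from_altitude(152.0, 20000.0, 796.0)   # denominator near 38,509
```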

Section 2: Measurement of areas on aerial photographs

Sometimes, the areas and perimeters of features must be measured. ERDAS Imagine makes this easy by calculating these from polygons drawn on images, with the tools highlighted and measurements recorded as shown in Figure 3. 

Figure 3: Area and perimeter calculation using the measurement tool in ERDAS Imagine

Section 3: Calculating relief displacement from object height

Relief displacement is caused by an object's height and its distance from the principal point of an image. It is calculated as RD = (h × r) / H, where h is the real-world object height, r is the radial distance from the principal point, and H is the flying height. The relief displacement in the image below was calculated as follows: 
Displacement measured on the photo: 0.5”
Real-world height: h = 3209 x 0.5 = 1604.5”
Radial distance from the principal point: r = 10.4375”
RD = (1604.5 x 10.4375) / 3980 = 4.21”

Figure 4: Relief displacement of smokestack (labeled "A"). 
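The relief displacement formula can be expressed compactly; the numbers follow the worked example above:

```python
# Relief displacement sketch: RD = (h * r) / H, where h is the object
# height, r is the radial distance from the principal point, and H is
# the flying height. Values are from the smokestack example.
def relief_displacement(h, r, H):
    return (h * r) / H

rd = relief_displacement(1604.5, 10.4375, 3980.0)  # about 4.21
```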

Part 2: Stereoscopy

Section 1: Creation of anaglyph image with the use of a digital elevation model (DEM)

Anaglyphs allow viewers of an image to see a 3-dimensional scene with the help of anaglyph (red/cyan) glasses. One way of achieving this is by using a DEM along with a 2-dimensional photograph. The Anaglyph tool in the Terrain window was used. The specifications below were used in the dialogue box before running the model:

Figure 5: Specifications for DEM anaglyph

Section 2: Creation of anaglyph image with the use of a LiDAR derived surface model (DSM)

Another way of achieving this is by using a LiDAR-derived digital surface model (DSM) along with a 2-dimensional photograph. The Anaglyph tool in the Terrain window was used again. The specifications below were used in the dialogue box before running the model:

Figure 6: Specifications for LiDAR anaglyph

Part 3: Orthorectification

Section 1: Create a new project

The images below were taken by the SPOT satellite and require orthorectification.

Figure 7: SPOT satellite images in need of orthorectification

From the help menu, a search was done for "photogrammetry" and "Photogrammetric Project" was selected to begin a new block file project.

"Polynomial-based pushbroom" and "SPOT Pushbroom" were selected from the dropdown menu for Geometric Model Category.

In the Block Property Setup, the Horizontal Reference Coordinate System was set to the following specifications:

Projection Type: UTM.
Spheroid Name: Clarke 1866.
Datum Name: NAD27(CONUS).
UTM Zone field: 11.
North or South Field:  North.
Axis Order: E,N.
Horizontal Units: Meters.

Section 2: Add imagery to the block and define sensor model

The top image in Figure 7 was added to the block file by selecting "Add".
The parameters of the satellite were specified as "SPOT PAN" in "Interior Orientation". Specifications are shown below:

Figure 8: Specifications for Interior Orientation

Section 3: Activate point measurement tool and collect GCPs

"Classic Point Measurement" was selected from the "Point Measurement Tool." With the point measurement window opened, as shown below, the reference layer was set to a previously orthorectified image and the checkbox for "Use Viewer as a Reference" was clicked. 
Figure 9: Point measurement window with orthorectified image set as reference layer (left)

At this point, GCPs could begin to be collected. The Select Point tool was used to move the inquire box as appropriate. The "Add" button was selected for point 1 on the reference image and subsequently for point 1 on the distorted image. X and Y reference coordinates as well as X and Y file coordinates were provided by the lab to ensure accuracy. This process was completed for 9 points.

Another horizontal reference source was brought in for the last 2 points by selecting "Reset Horizontal Reference Source." The same GCP selection process was used for this source for points 11 and 12.

The Point Measurement was saved.

The Vertical Reference Source dialogue was opened and set to a DEM of Palm Springs. All the previously gathered points were highlighted and their Z values were updated to the reference DEM, as shown below.

Figure 10: Z values updated from DEM of Palm Springs

Section 4: Set Type and Usage, add a 2nd image to the block & collect its GCPs

The Type and Usage columns (shown also in Figure 10) were right-clicked for each point and updated to "Full" and "Control", respectively, as shown below. The Point Measurements were then saved and closed.

Figure 11: Type and Usage columns updated to "Full" and "Control"

The same GCP collection process was done for the second image to be orthorectified shown in Figure 7 by clicking "Add Images" in the contents window. Interior Orientation was set the same as with the previous image. 

Figure 12: Both SPOT images in Point Measurement viewer

A total of 11 points were collected.

Section 5: Automatic tie point collection, triangulation and ortho resample

The Automatic Tie Point Generation Properties icon was selected from the Point Measurement tool. The specifications were set to the following:
General Tab:
Image used: All Available
Initial Type: Exterior/Header/GCP
Image Layer Used: 1

Distribution Tab:
Intended Number of Points/Image: 40
Keep All Points: unchecked

The model was run and tie point accuracy was checked. The Point Measurements were then saved and closed.

The Block Triangulation Properties window was opened. The specifications were set to the following:
General Tab:
Iterations With Relaxation: 3
Image Coordinate Units for Report: Pixels

Point Tab:
Ground Point Type and Standard Deviations: Same Weighted Values
X, Y, and Z: 15

Advanced Options Tab:
Simple Gross Error Check Using: checked
Times of Unit Weight: 3

The triangulation was run and the Triangulation Summary Dialogue was saved.

Start Ortho Resampling Process was clicked to begin orthorectification. The specifications were set to the following:
General Tab:
DTM source: DEM
Output Cell Sizes: 10

Advanced Tab:
Resampling Method: Bilinear Interpolation

Add Single Output was selected and the second SPOT image was set as the input image. Current cell sizes were used. The ortho resampling process was run and completed.

Results

Part 1: Scales, Measurements, and Relief Displacements

Section 1: Calculating scale of nearly vertical aerial photographs

It is relatively rare to have a ground distance measurement available for determining scale. When one is available, the photo distance is divided by the ground distance to obtain the scale. The scale of the image for this section was S ≈ 1:40,331.

If a ground distance is not known, scale can also be calculated from the flying altitude above sea level, the elevation of the land above sea level, and the lens focal length. The scale from the image was S ≈ 1:38,509.

Section 2: Measurement of areas on aerial photographs

The polygon in Figure 3 was drawn and its perimeter and area were automatically computed. They were (in different units, which could be chosen from a dropdown list):
Area: 38.10 hectares, 94.14 acres
Perimeter: 4116.38 meters, 2.56 miles

Section 3: Calculating relief displacement from object height

The calculation of relief displacement is used to correct distortions caused by object height and distance from the principal point. Using the final RD of 4.21”, it can be determined that a negative correction is needed to account for the positive RD.

Part 2: Stereoscopy

Section 1: Creation of anaglyph image with the use of a digital elevation model (DEM)

This type of anaglyph image requires aerial imagery and a DEM. The resulting anaglyph below can be viewed using anaglyph glasses. 

Figure 13: Anaglyph using DEM

Section 2: Creation of anaglyph image with the use of a LiDAR derived surface model (DSM)

This type of anaglyph image requires aerial imagery and a LiDAR surface model. The resulting anaglyph below can be viewed using anaglyph glasses.

Figure 14: Anaglyph using LiDAR surface model

Part 3: Orthorectification

The orthorectification process for the two images taken by the SPOT satellite was very successful. Below, both images are shown together. Features align well, which can be seen especially clearly using the swipe tool.
Figure 15: Result of orthorectification of two previously distorted images

Sources

Wilson, C. (2019). Photogrammetry. Geog 338: Remote Sensing of the Environment Lab 7. 

Friday, April 5, 2019

Miscellaneous Image Functions Using ERDAS IMAGINE



Goal/background

     The goal of this lab was to practice and become familiar with a variety of miscellaneous image functions: (1) creating subsets from a larger image scene, (2) performing a pan sharpen, (3) performing a haze reduction, (4) linking a satellite image to Google Earth, which can be used as a selective key, (5) resampling images, (6) mosaicking pairs of images using both Mosaic Express and MosaicPro, and (7) using graphical modeling to detect binary changes in images. All functions were done using ERDAS IMAGINE software.


Methods

Part 1: Image subsetting

Section 1: Subsetting with the use of an Inquire box
1.       A large satellite image was opened. An inquire box was first made around the desired area, the city of Eau Claire, by right clicking on the image and selecting “Inquire Box.”
2.       The “create subset image” menu was opened from the “Subset and Chip” dropdown menu. An output folder was created and the inquire box extents were used for the subset as well.

3.       This was run and a new file of the subset image of the City of Eau Claire was created.
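Conceptually, an inquire-box subset is just a rectangular crop of the raster by row/column extents. A toy sketch with an invented array:

```python
import numpy as np

# An inquire-box subset, sketched as a rectangular crop of a raster
# array. The image and extents here are invented.
def subset(image, row_min, row_max, col_min, col_max):
    return image[row_min:row_max, col_min:col_max]

image = np.arange(100).reshape(10, 10)  # toy 10x10 single-band raster
chip = subset(image, 2, 5, 3, 7)        # 3 rows by 4 columns
```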


Section 2: Subsetting with the use of an area of interest shape file
1.       A large satellite image was opened. A shapefile of the desired subset area, Eau Claire and Chippewa counties, was opened in the same layer.
2.       The two objects in the shapefile were activated by holding shift and clicking on them.
3.   An area of interest was created around the selected objects by clicking “paste from selected object.”
4.   The area of interest was saved as a .aoi file. The .aoi file was used in a similar way as the inquire box was used in the previous method. “Create subset image” was selected but this time the extents of the .aoi were used to create the subset.


Part 2: Image fusion (pansharpen)

1.       Two images of the same date were opened. One was multispectral with 30 m spatial resolution, while the other was panchromatic with 15 m spatial resolution.
2.       “Resolution merge” was selected from the Pan Sharpen menu. The panchromatic image was chosen as the high resolution image file and the multispectral file was chosen for the multispectral image file.


3.     The multiplicative method and nearest neighbor resampling technique were selected and the model was run. 
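The multiplicative merge can be sketched on toy arrays: upsample each multispectral band to the panchromatic grid (nearest neighbor, as selected above), then multiply by the pan band. The arrays are invented, not the lab imagery:

```python
import numpy as np

# Multiplicative resolution merge, sketched: nearest-neighbor upsample
# of the multispectral bands, then per-pixel multiplication by the pan
# band. Toy data only.
def multiplicative_merge(ms, pan, factor=2):
    # ms: (bands, rows, cols) at coarse resolution; pan: fine-resolution grid
    up = ms.repeat(factor, axis=1).repeat(factor, axis=2)  # nearest neighbor
    return up * pan  # broadcasts pan across every band

ms = np.ones((3, 2, 2)) * 2.0   # 3 coarse multispectral bands
pan = np.full((4, 4), 3.0)      # fine panchromatic band
fused = multiplicative_merge(ms, pan)
```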




Part 3: Simple radiometric enhancement techniques

1.       “Haze reduction” was selected from the radiometric dropdown menu. A 2007 image of Eau Claire was selected for input and an output location and name were designated. 
2.    All defaults were accepted and the model was run.



Part 4: Linking image viewer to Google Earth

1.       A 2007 image of Eau Claire was added to the image viewer. The “Connect to Google” option was found via the help search. This opened the Google Earth Pro window.
2.       To link views between the Google Earth window and the Erdas Imagine window, “Match GE to View” was selected in the Erdas Imagine window.
3.     Views were then synced by clicking “Sync GE to View.”
4.       At this point, the Google Earth view could be used as a selective interpretation key because of the markers like roads and places as well as very high resolution it provides.


Part 5: Resampling

1.       A 2011 image of Eau Claire was added to the image viewer. Metadata showed that the pixel size of this image was 30 m.
2.       “Resample pixel size” was opened from the “Spatial” dropdown menu. The 2011 image was used as the input file and an output file location and name were designated.
3.       Output cell size was changed from the existing 30 m in both the X Cell and Y Cell to 15 m in both. In addition, the “square cells” option was checked to ensure pixel sizes were square.
4.       Nearest neighbor was used for the resample method and the model was run.

5.       Steps 2 and 3 were repeated for the same image.
6.       Bilinear Interpolation was chosen for resample method instead of Nearest neighbor as in step 4. The model was run and all 3 images were compared.
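The difference between the two resampling methods can be illustrated on a tiny array. This is a sketch of the general techniques, not ERDAS's exact implementation: nearest neighbor copies pixel values, while bilinear interpolation blends neighbors, which is why it looks smoother up close.

```python
import numpy as np

# Doubling resolution two ways on a toy 2x2 array.
def nearest_double(a):
    # Nearest neighbor: each pixel is simply repeated
    return a.repeat(2, axis=0).repeat(2, axis=1)

def bilinear_double(a):
    # Bilinear: interpolate along columns, then along rows
    rows = np.linspace(0, a.shape[0] - 1, a.shape[0] * 2)
    cols = np.linspace(0, a.shape[1] - 1, a.shape[1] * 2)
    tmp = np.array([np.interp(cols, np.arange(a.shape[1]), row) for row in a])
    return np.array([np.interp(rows, np.arange(a.shape[0]), col)
                     for col in tmp.T]).T

a = np.array([[0.0, 10.0], [20.0, 30.0]])
nn = nearest_double(a)   # blocky: values are duplicated
bl = bilinear_double(a)  # smooth: intermediate values appear
```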





Part 6: Image Mosaicking

1.       Two images captured in May 1995 by the Landsat TM satellite were used for this task. One covers path 25, row 29 and the other covers path 26, row 29.
2.       The “Multiple” tab was opened and “Multiple Images in Virtual Mosaic” was selected.

3.      The “Raster Options” tab was opened to ensure that “Background Transparent” and “Fit to Frame” were checked.

4.       One of the two image files was selected and brought into the viewer.
5.       Steps 2 through 4 were repeated for the other image, which was added to the same layer.

Section 1: Image mosaic with the use of Mosaic Express
1.       “Mosaic Express” was selected from the “Mosaic” dropdown menu.
2.       Both image files were entered to be used as the data sources for the mosaic. Because order matters, the image on path 25 was added first so that it would be the top image.

3.       All subsequent defaults were accepted and an output location and name were designated. The model was run by clicking “finish.”

Section 2: Image mosaic with the use of MosaicPro
1.       “MosaicPro” was selected from the “Mosaic” dropdown menu.
2.     The “add images” icon was selected. The “Image Area Option” tab was opened and “Compute Active Area” was selected, activating the “Set” button. Because cropping or reducing the spatial extent was not necessary, the automatic “Active Area Options” were used.

3.       The image taken of path 25 was selected and added to the MosaicPro window.
4.       Steps 2 and 3 were repeated for the image taken of path 26. At this point, both images were together in the MosaicPro window.
5.       The “Color Correction” icon was selected and “Use Histogram Matching” was checked.

6.      In the Histogram Matching dialogue, the Matching Method was switched to “Overlap Areas” so that colors outside the overlapping areas would be preserved.
7.       The “Set Output Options” icon was selected. Because map projection, pixel size, or other parameters didn’t have to be changed, all defaults were accepted.
8.       The “Set Overlap Function” icon was selected and the default “Overlay” function was accepted.
9.       The “Process” menu was opened, an output location and name were designated, and the Mosaic was run.
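Histogram matching itself can be sketched with NumPy: remap one image's values so that their cumulative distribution matches the reference's. MosaicPro restricts this to the overlap area; here the "overlap" samples are invented 1-D arrays:

```python
import numpy as np

# Histogram matching sketch: map each source quantile to the reference
# value at the same quantile. Sample values are made up.
def match_histogram(source, reference):
    src_vals, src_idx, src_counts = np.unique(
        source, return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference, return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(source.shape)

source = np.array([0.0, 0.0, 1.0, 2.0])        # overlap pixels, image A
reference = np.array([10.0, 20.0, 20.0, 30.0])  # overlap pixels, image B
matched = match_histogram(source, reference)
```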
  

Part 7: Binary change detection (image differencing)

Section 1: Creating a difference image
1.       Images of Eau Claire and four neighboring counties in 1991 and 2011 were used to estimate and map brightness value changes in pixels.
2.       These images were synced.
3.       “Two Image Functions” was selected from the “Functions” dropdown menu of the “Raster” toolbar.
4.       In the menu, the 2011 image of Eau Claire was added as “Input File #1” and the 1991 image was added as “Input File #2.” An output location and name were designated.
5.       The additive operator was changed to the subtractive operator by choosing the (-) from the dropdown menu.

6.     Both input files were changed to only use layer 4 in order to simplify the change detection and the model was run.
7.      To find the upper and lower limits of change between the two images, the mean and standard deviation were taken from the general metadata. The mean plus or minus 1.5 times the standard deviation provides a good estimate of these limits. The median was estimated from the histogram so that the lower and upper limits of change could be calculated as shown.
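The threshold arithmetic can be sketched in a few lines; the difference values below are invented:

```python
import numpy as np

# Change limits from a difference image: mean +/- 1.5 standard
# deviations, as described above. Toy data only.
def change_limits(diff, k=1.5):
    mu = diff.mean()
    sigma = diff.std()
    return mu - k * sigma, mu + k * sigma

diff = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
lower, upper = change_limits(diff)  # pixels beyond these limits changed
```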


Section 2: Mapping change pixels in difference image using spatial modeler
1.     This section involves using the difference image from the previous section to map only the areas that changed between 1991 and 2011. To do this, a model had to be created. A new viewer was opened and “Model Maker” was selected from the “Model Maker” dropdown menu in the “Toolbox” toolbar.
2.       The model maker was opened. Three tools were used to outline the model to be used in this section including placing a raster (top), placing function (middle), and connecting those parts (bottom).

3.     Double clicking on the raster objects allowed inputs to be selected. NIR bands from 1991 and 2011 were selected for the raster inputs.

4.    The function used the equation ΔBVijk = BVijk(1) − BVijk(2) + c, where BVijk(1) and BVijk(2) are the two input rasters and c is a constant added to keep brightness values positive. ΔBVijk, the change in pixel values, is the output raster.

5.    Again, the change threshold was calculated using the same method as above, with the numbers below.


6.       Another model was created in which the difference image was the input and the function was a conditional: a pixel receives a value of 1 if it is greater than the change threshold and 0 if it is not.
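That conditional can be sketched as a simple threshold mask (difference values invented):

```python
import numpy as np

# Binary change mask: 1 where the difference exceeds the threshold,
# 0 elsewhere. Toy difference values.
def change_mask(diff, threshold):
    return (diff > threshold).astype(np.uint8)

diff = np.array([[0.0, 5.0], [9.0, 3.0]])
mask = change_mask(diff, 4.0)  # only the 5.0 and 9.0 pixels flag as changed
```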




7.       To view the resulting image more clearly, it was opened in ArcMap and overlaid on the image from 1991. Symbolization was changed as shown in the results.



 Results

Part 1: Image subsetting

Section 1: Subsetting with the use of an Inquire box



Figure 1: Image subset from a simple inquire box. 


Section 2: Subsetting with the use of an area of interest shape file


   
Figure 2: Image subset from an area of interest file. 
This allows more flexibility in the shape of the image subset.

 Part 2: Image fusion (pansharpen)


Figure 3: Panchromatic image (top) and multispectral image (bottom) used to create the pansharpened image (left). Pansharpening allows the contrast and colors from a multispectral image to be combined with the better spatial resolution of the panchromatic image.


Part 3: Simple radiometric enhancement techniques


Figure 4: Haze reduction (right) significantly increased the contrast and sharpness of the original image (left). This is a very simple technique and useful when there are things like excess moisture in the atmosphere.


Part 4: Linking image viewer to Google Earth


Figure 5: Linked views of image file and Google Earth. Google Earth provided an extremely detailed selective key that includes road names, places, and very high spatial resolution.


Part 5: Resampling


Figure 6: Images and magnifications (bottom) comparing the original image (left) with resampling by nearest neighbor (center) and bilinear interpolation (right). Both methods reduced the pixel size; bilinear interpolation also smoothed pixel values, causing differences in the image that could only be seen at high magnification.


Part 6: Image Mosaicking

Section 1: Using Mosaic Express


Figure 7: Mosaicked image from the automated Mosaic Express program. Because there is almost no user input, the output is not seamless.

Section 2: Using MosaicPro


Figure 8: Mosaicked image from MosaicPro. This uses extensive user input. Color correction was done by histogram matching to make the result almost seamless.


Part 7: Binary change detection (image differencing)

Figure 9: Changes between 1991 and 2011 in Eau Claire and four neighboring counties can be clearly seen using the method outlined.


    Sources

  1.   Earth Resources Observation and Science Center, United States Geological Survey. (Satellite images.)
  2.      Price, M. (2014). Mastering ArcGIS (6th ed.). McGraw Hill. (Shapefile.)
