Monday, November 13, 2017

Module 10: Supervised Classification

In this week's lab we learned how to perform a supervised classification in ERDAS Imagine.  We were introduced to two tools (Polygon and Grow) which can be used to create spectral signatures.  Spectral signatures are used in a supervised classification to make the classification as accurate as possible.  I used the Grow tool to create my spectral signatures in an image of Germantown, Maryland.  I examined histograms and mean plots to identify the band combination that best distinguishes the land use classes in the supervised classification output.  I then used the Maximum Likelihood tool to identify areas where the classification may not be accurate and where more spectral signatures may need to be added.  Finally, I merged like classes to narrow the total down to eight classes and calculated the area of each class in acres.  My final output map can be seen below.
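
The per-class acreage calculation boils down to pixel counts times cell area. A minimal sketch, assuming 30 m pixels and hypothetical class counts (both are illustrative assumptions, not values from the lab):

```python
# Sketch: converting per-class pixel counts to acres.
# ASSUMPTION: 30 m x 30 m pixels; use the image's actual cell size in practice.
SQ_M_PER_ACRE = 4046.8564224
PIXEL_AREA_SQ_M = 30 * 30

def class_acres(pixel_counts):
    """Map each class name to its area in acres."""
    return {name: n * PIXEL_AREA_SQ_M / SQ_M_PER_ACRE
            for name, n in pixel_counts.items()}

# Hypothetical counts for two of the eight merged classes:
areas = class_acres({"urban": 120_000, "forest": 250_000})
print(round(areas["urban"]))  # → 26687
```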

Tuesday, November 7, 2017

Module 9: Unsupervised Classification

In this week's lab we learned how to perform an unsupervised classification in both ArcMap and ERDAS Imagine.  An unsupervised classification uses an algorithm to partition an image into separate land cover classes without any training data.  In this type of classification, we must then name the categories by comparing the classified image to the original.
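
The grouping idea can be sketched with a plain k-means on single-band brightness values (ERDAS Imagine's unsupervised tool is ISODATA-based; this simplified 1-D version just illustrates how pixels cluster without labels):

```python
# Minimal 1-D k-means: group pixel brightness values around moving centers,
# the core idea behind unsupervised classification.
def kmeans_1d(values, centers, iters=20):
    centers = list(centers)
    for _ in range(iters):
        # Assign each value to its nearest center.
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Recompute each center as its cluster mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

pixels = [12, 15, 14, 80, 85, 82, 200, 205, 198]  # dark, mid, bright pixels
print(sorted(round(c) for c in kmeans_1d(pixels, centers=[0, 128, 255])))
# → [14, 82, 201]
```

The analyst's remaining job, naming each cluster, is exactly the comparison-to-the-original step described above.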

During the lab assignment, we classified an aerial image of the University of West Florida using ERDAS.  To begin the unsupervised classification, we clicked the Unsupervised button found in the Raster tab.  The output image looked similar to the original but with lower resolution because we analyzed the pixels in groups of two.  We then reclassified the image by zooming in and changing the pixel color of five main types of features.  Using the Recode tool, also found in the Raster tab, we merged each group of classes into one class representing one feature.  We then had an image classified into the following five categories:

  1. Trees
  2. Grass
  3. Buildings
  4. Shadows
  5. Mixed
In ERDAS, I added an area column which calculated the area of each classification.  Finally, I calculated the percentage of permeable and impermeable surfaces in the image.  My final output can be found below.
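
That last percentage is a simple ratio of pixel counts. A sketch using the five classes above, with hypothetical counts and an assumed split of which classes count as permeable:

```python
# Sketch: percent permeable vs. impermeable surface from per-class pixel
# counts.  Counts are hypothetical; treating Trees and Grass as the
# permeable classes is an assumption for illustration.
counts = {"Trees": 4000, "Grass": 3000, "Buildings": 2000,
          "Shadows": 500, "Mixed": 500}
permeable_classes = {"Trees", "Grass"}

total = sum(counts.values())
permeable = sum(n for c, n in counts.items() if c in permeable_classes)
pct_permeable = 100 * permeable / total
print(f"{pct_permeable:.1f}% permeable, {100 - pct_permeable:.1f}% impermeable")
# → 70.0% permeable, 30.0% impermeable
```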

Tuesday, October 31, 2017

Module 8: Thermal and Multispectral Analysis

This week's lab focused on how to interpret thermal imagery using composite multispectral imagery in both ArcMap and ERDAS Imagine.  The first exercise introduced us to Wien's displacement law, which tells us that objects at higher temperatures emit more radiation and that as temperature increases, the wavelength of peak emission decreases.  In the second exercise, I created an eight-band composite image in ArcMap using the Composite Bands tool.  By opening the image properties, it's possible to examine the image as individual bands.  We focused especially on the sixth band because that is the thermal infrared band.  I then examined the same image in ERDAS using Layer Stack, which ended up being informative but more time consuming than using ArcMap.  In exercise three we examined an image of the Pensacola, FL area in two view frames.  It was easy to see the differences in temperature based on land cover: areas close to the coast had darker pixels and were cooler than inland and urban areas.
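
Wien's displacement law can be stated as λ_max = b / T, with b ≈ 2898 µm·K. A quick calculation shows why the thermal band matters for Earth surfaces:

```python
# Wien's displacement law: peak emission wavelength is inversely
# proportional to temperature, lambda_max = b / T.
WIEN_B_UM_K = 2898.0  # displacement constant in micrometer-kelvins

def peak_wavelength_um(temp_k):
    """Wavelength (µm) at which a blackbody at temp_k emits most strongly."""
    return WIEN_B_UM_K / temp_k

print(round(peak_wavelength_um(5778), 2))  # Sun (~5778 K) → 0.5 µm, visible light
print(round(peak_wavelength_um(300), 1))   # Earth surface (~300 K) → 9.7 µm, thermal IR
```

Earth-temperature objects peak around 10 µm, which is why thermal infrared bands are the ones that capture surface temperature differences.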

In exercise four we were asked to identify a feature of interest using the thermal band and emphasize it by changing the band combination.  I examined the ETMcomposite.img file and noticed a thin line of higher pixel values crossing the river.  I hypothesized that this was a bridge and confirmed it by examining the true color image.  The concrete/asphalt of a bridge would likely emit more thermal radiation than the surrounding water, so it makes sense that it would appear brighter in a thermal IR image.  I used the Inquire cursor in ERDAS Imagine to find the coordinates of the bridge, and displayed the image with the band combination Red = 7, Green = 6, Blue = 8 to help the bridge stand out from its surroundings.  The bridge can be seen in the image below.

Tuesday, October 24, 2017

Module 7: Multispectral Analysis

This week's lab really dove into the heart of multispectral analysis and gave me the tools needed to examine and interpret an image at a high level of detail.  I looked at image histograms in both ERDAS Imagine and ArcMap to gain insight into how brightness values and pixel frequencies determine the histogram's shape.  Once I had a good understanding of how a histogram's shape is determined, it was much easier to correlate a specific spike in the histogram to a certain feature in the image.  For example, a large spike on the right side of a histogram corresponds to a large number of bright pixels in the image.  I used a grayscale image to identify the three features listed in the lab because the grayscale helps distinguish the light and dark areas from each other.  After I identified the features, I used different band combinations to highlight them.  Changing the band combinations in a multispectral image made me realize how most images that we see in everyday life are made up of multiple layers, with each layer contributing to the image in its own way.  My three maps with the sought-after features can be found below.
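
A band histogram is nothing more than the frequency of each brightness value. A toy sketch with made-up 8-bit pixel values shows the right-side-spike idea:

```python
from collections import Counter

# Sketch: a band histogram counts how often each brightness value occurs;
# a cluster of counts near 255 means many bright pixels in the scene.
band = [10, 10, 12, 128, 250, 251, 250, 250, 249]  # toy 8-bit pixel values
hist = Counter(band)
print(hist[250])   # → 3 (how many pixels have brightness 250)
print(max(hist))   # → 251 (the brightest value present in the band)
```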

Sunday, October 15, 2017

Module 6: Spatial Enhancement

In this week's lab I downloaded satellite data from an online source (GLOVIS) and performed multiple spatial enhancement techniques in both ArcMap and ERDAS Imagine to better interpret the data.  The data, from a Landsat 7 image, had uniform white stripes of null data, so the goal was to blend these stripes into the background while retaining as much detail as possible.  It was cool to see how ArcMap and Imagine work together to enhance the image.

The first enhancement was done in ERDAS Imagine.  I performed both a 3x3 low pass and a 3x3 high pass to set the stage for later enhancements.  Each filter operates on 3x3 neighborhoods of nine cells: the low pass gave a smoothing effect, while the high pass enhanced the edges in the image.  Next I switched to ArcMap and opened the Focal Statistics tool.  I used the Mean and Range statistics, which have a similar effect to the low and high passes respectively, but in 7x7 blocks.
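
The low-pass idea can be sketched directly: each output cell becomes the mean of its 3x3 neighborhood, which is why isolated bright values (like stripe pixels) get smoothed toward their surroundings. A minimal pure-Python version on a toy grid:

```python
# Sketch of a 3x3 low-pass (mean) filter: each output cell is the average
# of its 3x3 neighborhood (clipped at the edges), smoothing out spikes.
def low_pass_3x3(grid):
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbors = [grid[i][j]
                         for i in range(max(0, r - 1), min(rows, r + 2))
                         for j in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = sum(neighbors) / len(neighbors)
    return out

grid = [[10, 10, 10],
        [10, 100, 10],   # one bright outlier
        [10, 10, 10]]
print(low_pass_3x3(grid)[1][1])  # → 20.0: the spike is pulled toward its neighbors
```

A high pass works in the opposite direction: subtracting the low-pass result from the original leaves the rapid changes, which is what makes edges stand out.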

I then switched back to ERDAS Imagine to perform the rest of my image enhancements.  First I performed a Fourier transformation to blend the white stripes into the background image; this turned the white stripes grey and blended them in.  Next I applied a 3x3 Sharpen and Sharpen 2, which sharpened the edges slightly.  Finally, I applied a 7x7 low pass to blend the lines into the image even more.  This resulted in a slightly darker image, but enough detail was retained to be useful.  My final image can be seen below.

Tuesday, October 3, 2017

Mod 5a: EMR and ERDAS

This week's lecture focused on electromagnetic radiation (EMR), and the lab introduced the basics of geoprocessing with the program ERDAS Imagine.  Learning how EMR waves interact with our atmosphere and with remote-sensing sensors really helped me understand how products such as false color infrared images are produced.  I think this background information will be valuable in the future as I continue to analyze various maps.

In this week's lab, I learned how to navigate the program ERDAS Imagine as well as how to use the viewer.  After getting the basics down, I got down to business working with image data.  First I imported into ERDAS a raster image depicting land classes in a section of forest in Washington State, USA.  I then altered the colors of the map to make certain areas easier to distinguish.  Finally I used the Inquire Box to export a smaller piece of the map as an image file.  To make the final deliverable, I imported the image file into ArcMap and added the essential map elements.  I'm looking forward to learning more ERDAS Imagine features!  My map can be seen below.

Sunday, September 24, 2017

Module 4 - Ground Truthing

This week, we learned about ground truthing techniques, which are used to verify classification schemes; this process is also known as accuracy assessment.  Most ground truthing is done in situ, that is, in the field in person.  This is the most accurate way to ground truth, but it isn't always practical, so supplemental data such as high resolution aerial imagery or Google Street View can be used to verify classifications.  In this lab, I used Google Street View to "travel" to each of my 30 sample points and verify whether the original land use/land cover classification was correct.  If it was not, I identified the correct classification.  Of the 30 sample points I classified, 24 were verified as correct using Street View.  My ground truthing map with 30 sample points can be found below.
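
The overall accuracy figure implied by those numbers is a one-line calculation:

```python
# Overall accuracy from the ground truthing results reported above:
# 24 of 30 sample points confirmed correct.
def overall_accuracy(correct, total):
    """Percentage of sample points whose classification was verified."""
    return 100 * correct / total

print(overall_accuracy(24, 30))  # → 80.0
```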