Thursday, June 21, 2018

Applications in GIS: Crime in DC

This week we began looking at how GIS is used by the Department of Homeland Security and the advantages it provides when it comes to protecting the public.  We started off by setting the environments in our ArcToolbox so that when we ran analysis tools, our data was stored in the correct coordinate system and in the correct folder.  This was a good place to start because it helped me stay organized throughout this lab. Next, we identified where the police stations are in the DC area by geocoding the locations of the stations. This was a flashback to our intro class and a good refresher.  Using the Address Locator and Google Maps, I was able to geocode the police stations and save their locations as a feature class.  Finally, I imported the XY police station data onto my map and was ready to perform a crime analysis.

My first analysis step was to add a bar graph showing the different types of crime. I then used the Multiple Ring Buffer tool and performed a spatial join to join the crime offense locations to the police station locations.  Using the buffer tool, I was able to see that 97% of crime in DC occurred within 2 miles of a police station.  To better understand the spatial distribution of crimes as they relate to police station locations, I used a spatial join (via Joins and Relates) to calculate how many crimes are closest to each police station.  I symbolized these values using graduated symbols so viewers can easily see which stations dealt with the most crime.
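The buffer percentage above boils down to a nearest-distance test. A minimal sketch in plain Python, using made-up projected coordinates (in feet) rather than the actual DC station and crime data:

```python
import math

# Hypothetical station and crime coordinates (projected, in feet).
stations = [(0.0, 0.0), (10000.0, 0.0)]
crimes = [(500.0, 300.0), (9000.0, 400.0), (40000.0, 40000.0)]

MILE_FT = 5280.0

def dist(a, b):
    """Planar distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pct_within(crimes, stations, radius_ft):
    """Share of crime points whose nearest station lies within radius_ft."""
    hits = sum(1 for c in crimes
               if min(dist(c, s) for s in stations) <= radius_ft)
    return 100.0 * hits / len(crimes)

print(round(pct_within(crimes, stations, 2 * MILE_FT)))  # -> 67
```

With the real DC layers, the same logic (a 2-mile buffer plus a point-in-polygon count) yields the 97% figure reported above.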

Next I used the Kernel Density spatial analyst tool to display a density map for homicides, sex abuse, and burglary.  Kernel Density calculates a magnitude per unit area from crime points, using a kernel function (equation) to fit a smoothly tapered surface to each point.  The end result is a smooth density map.  We can adjust the search radius, with a small radius producing a narrow, localized result and a larger radius producing a more generalized but not necessarily more accurate density raster.  I chose a search radius of 1,500 map units because it was a good middle value that produced a representative map when I compared the density surface with the actual crime point locations.  I symbolized the density rasters in red so they would stand out against the census population block data and road layers underneath.  I used a Natural Breaks classification with 5 classes ranging from Low to High crime areas.  I labeled the low areas as "no color" so I didn't overwhelm the map and take away from its intended purpose, which was to identify the higher-crime areas of DC.  This lab was fittingly timed because I'm actually flying up to DC tomorrow!  My finished maps can be found below.
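To make the "smoothly tapered surface" idea concrete, here is a toy kernel density estimate at a single cell using a quartic kernel (a common choice for this kind of tool, though not necessarily Esri's exact implementation; the crime points are invented):

```python
import math

def kernel_density(points, cell, radius):
    """Quartic-kernel density estimate (magnitude per unit area) at one
    cell center. Points beyond the search radius contribute nothing."""
    total = 0.0
    for (px, py) in points:
        d = math.hypot(cell[0] - px, cell[1] - py)
        if d < radius:
            # quartic (biweight) kernel, tapering smoothly to 0 at the radius
            total += (1 - (d / radius) ** 2) ** 2
    # scale by kernel area so the result is a density, not a raw count
    return total * 3 / (math.pi * radius ** 2)

crime_pts = [(100.0, 100.0), (120.0, 90.0), (500.0, 500.0)]  # made-up points
d_near = kernel_density(crime_pts, (110.0, 95.0), 1500.0)
d_far = kernel_density(crime_pts, (5000.0, 5000.0), 1500.0)
print(d_near > d_far)  # -> True: density is higher near the crime cluster
```

A raster tool simply repeats this calculation for every cell, which is why a larger radius smears each point's influence across a wider, more generalized area.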



Saturday, June 16, 2018

Applications in GIS: Hurricanes

This week we concluded the three-part Natural Hazards series of lab assignments and focused on Hurricane Sandy.  Hurricane Sandy developed in the Caribbean Sea and moved due north through Cuba and the Bahamas.  It then took an unusual turn to the northwest toward the United States, making landfall on the New Jersey coast on October 29th, 2012.  Sandy killed a total of 233 people throughout its life cycle and caused 68.7 billion dollars (USD) in damage.  This lab consisted of two parts: the first constructing a map of the path of Sandy, and the second classifying the damage along one street on the New Jersey coast.

In the first part of the lab, I imported X,Y data from Excel and plotted points on a map showing the path of Hurricane Sandy along with the associated sustained wind speeds (mph) and barometric pressures (mb).  Using the "Points To Line" tool in ArcMap, I connected the dots to create a continuous path from the genesis of the tropical depression in the Caribbean, to its landfall as a hurricane, and finally its transition to a cold-core post-tropical cyclone over New Jersey.
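Connecting ordered point fixes into a track is simple to sketch. The snippet below chains illustrative (lat, lon) fixes into segments and sums their great-circle lengths; the coordinates are invented, not the actual Sandy best-track data:

```python
import math

def haversine_mi(p, q):
    """Great-circle distance in miles between two (lat, lon) points."""
    R = 3958.8  # mean Earth radius, miles
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# Illustrative storm fixes (lat, lon), ordered in time.
track = [(14.3, -77.4), (19.0, -76.0), (25.0, -77.0), (39.4, -74.4)]

# "Points To Line" in miniature: pair consecutive fixes into line segments.
segments = list(zip(track, track[1:]))
length = sum(haversine_mi(a, b) for a, b in segments)
print(len(segments), round(length))
```

Each attribute carried by the points (wind speed, pressure) stays attached to its fix, so the line can still be labeled at every position along the path.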

In part two of the lab, I prepared a post-storm damage analysis by categorizing the damage to 34 residence parcels on Fort Avenue in Ocean County, NJ.  I categorized the damage into five categories:

1. Destroyed
2. Major Damage
3. Minor Damage
4. Affected
5. No Damage

I also classified the wind and storm surge inundation damage to the structures.  I conducted the damage analysis by examining 2012 aerial orthophoto images taken before and after Sandy.  Using the "Swipe" tool, I was able to quickly compare what the structures looked like before Sandy made landfall with their condition afterward.  My Sandy path map and damage analysis maps can be seen below.





Sunday, June 10, 2018

Applications in GIS: Tsunami

This week we focused on creating evacuation zones in Japan for a tsunami, as well as for the meltdown of the Fukushima Daiichi Nuclear Power Plant caused by the tsunami.  In March of 2011, a massive magnitude 9.0 earthquake struck off the northeast coast of Japan and generated a tsunami that struck the Japanese coastline, with the Fukushima area especially hard hit.  In disaster situations, it is especially important to keep data files organized so they can be utilized efficiently. For this reason, we kept our data files in a geodatabase and created three Feature Datasets to organize the files.  The three Feature Datasets were as follows:

1. Transportation
2. Damage_Assessment
3. Administration

To create the nuclear radiation evacuation zones surrounding the Fukushima nuclear power plant, we used the Multiple Ring Buffer tool and colored the rings to match their proximity to the plant.  The closer rings (3, 7, and 15 miles) faded from red to yellow, and the rings further from the plant (30, 40, and 50 miles) were symbolized to fade from yellow to green.  This helped give the audience a quick visual of the evacuation zones.
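A multiple ring buffer is just nested distance tests. As a sketch, the function below assigns a point to the smallest evacuation ring that contains it, using the ring distances from the lab (the plant location and test points are arbitrary, and a flat plane stands in for real projected geometry):

```python
import math

RING_MILES = [3, 7, 15, 30, 40, 50]  # evacuation ring distances from the lab
plant = (0.0, 0.0)  # plant location in an arbitrary projected space (miles)

def ring_for(point):
    """Return the smallest ring (in miles) containing the point,
    or None if it lies outside the outermost ring."""
    d = math.hypot(point[0] - plant[0], point[1] - plant[1])
    for r in RING_MILES:
        if d <= r:
            return r
    return None

print(ring_for((2.0, 2.0)), ring_for((20.0, 5.0)), ring_for((60.0, 0.0)))
# -> 3 30 None
```

Symbolizing each ring by its distance value then produces the red-to-green gradient described above.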

The tsunami evacuation zones were created by performing a run-up analysis based on reports of maximum water height above sea level during the tsunami. ModelBuilder was used to help expedite the process by chaining multiple tools together and running them all at once.  ModelBuilder is essentially a way to build a script without actually writing any code.  Via ModelBuilder, three tsunami hazard evacuation zones were created by running the Con (conditional) tool and converting the output rasters to polygons.  The polygons were then symbolized from red, to orange, to yellow to represent the severity of the three zones.  My Japan Tsunami and Radiation Evacuation Zones map can be seen below.
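The Con step is a per-cell conditional. A toy version on a tiny run-up grid (the raster values and thresholds below are illustrative, not the lab's actual inputs):

```python
# Classify a run-up raster into three hazard zones, mimicking the Con
# (conditional) tool's cell-by-cell logic.
runup_raster = [
    [12.0, 9.5, 4.0],
    [7.2, 2.1, 0.5],
]

def con_zones(raster, high=10.0, med=5.0):
    """Zone 1 = highest hazard, 3 = lowest, 0 = outside all zones."""
    def zone(v):
        if v >= high:
            return 1
        if v >= med:
            return 2
        if v > 0:
            return 3
        return 0
    return [[zone(v) for v in row] for row in raster]

print(con_zones(runup_raster))  # -> [[1, 2, 3], [2, 3, 3]]
```

Converting each zone's cells to polygons is what allows the clean red/orange/yellow symbology on the final map.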


Sunday, June 3, 2018

Applications in GIS: Lahars

This week's lab assignment focused on a unique kind of natural disaster known as a lahar.  A lahar is a destructive flow of concrete-like mud and debris that is carried down a volcano's river channels during an eruption.  Lahars are especially hazardous because large ones can travel much faster than humans can run and will completely destroy any infrastructure in their path. For this reason, it's important to plan ahead and identify populated areas that lie in the path of a potential lahar.  During the lab assignment, we focused on Mount Hood, located in the US state of Oregon.  The focus of this week's lab was to identify populated areas in lahar hazard zones downstream of Mt. Hood, and specifically to identify towns, as well as infrastructure such as schools, roads, and railroads, that may be in danger.  To accomplish this, we used a variety of geoprocessing tools and especially focused on tools located within the Spatial Analyst extension.

The first step was to use the "Mosaic To New Raster" tool to combine our two Digital Elevation Model (DEM) layers into one seamless layer.  Next, we opened the Spatial Analyst extension to use tools that would help us map out where a flow of water (in this case, a lahar) would travel based on our elevation layers.  To start, we used the "Fill" tool to fill in any irregularities and low-elevation anomalies that would cause inaccuracies in later flow processes.  We then used the "Flow Direction" tool to assign a direction value to each pixel and the "Flow Accumulation" tool to determine how much flow was directed into each pixel.  Next, we identified the pixels into which at least 1% of the total number of pixels in the entire map flowed.  Based on this 1% rule, those pixels could confidently be identified as streams and were used to identify the lahar paths.  Once the streams were identified, the Buffer tool was used to highlight a half-mile hazard area around the streams, representing the area a lahar could impact during a volcanic eruption.  Census population blocks, schools, towns, roads, and railroads could then be overlaid on the map to show which areas would be in danger of a lahar during an eruption of Mt. Hood.  A map like this would be valuable for city and state governments planning for such a catastrophic event.
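The stream-identification step (the "1% rule") can be illustrated with a toy snippet. The accumulation grid and pixel count below are invented; in the lab, the Flow Accumulation tool produces the real grid:

```python
# Flag as "stream" any cell whose flow accumulation reaches at least 1%
# of the raster's total pixel count.
accum = [
    [0, 1, 0, 2],
    [1, 5, 1, 0],
    [0, 12, 30, 55],
    [0, 0, 1, 90],
]

TOTAL_PIXELS = 1000              # pretend the full DEM had 1,000 cells
threshold = 0.01 * TOTAL_PIXELS  # the 1% rule -> 10 cells

streams = [[1 if v >= threshold else 0 for v in row] for row in accum]
for row in streams:
    print(row)
```

Buffering the flagged cells by half a mile then yields the lahar hazard area described above.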

I thought this was an especially interesting lab because I've always been fascinated by natural disasters.  My parents live in Tacoma, WA, and Mt. Rainier can be seen from the entrance of their neighborhood.  While their house is not in danger from a Mt. Rainier lahar because they're on a hill, there are a lot of people who wouldn't be so lucky.  I visited Mt. St. Helens last summer and was amazed at the destructive power of the lahars in that area.  You can still see how the river channels were carved out and widened downstream of Mt. St. Helens.  It's a reminder of the awesome power of the earth and its processes.  My final Mt. Hood Lahar Hazards map can be found below.




Wednesday, May 2, 2018

Final Project: Poverty and Obesity

For the last class assignment, we were tasked with creating a map that displays two sets of related data using thematic mapping techniques, also known as a bivariate map.  I decided to use ArcMap to create a map displaying U.S. obesity and adult poverty rates in 2016.  I used a choropleth map to display the obesity rates by state, and because the data is unipolar, I chose a sequential blue color scheme.  States with lower obesity rates are a lighter blue, and higher obesity rates are represented with a darker blue.  The obesity data was retrieved from the Centers for Disease Control and Prevention and was already standardized as a percentage of each state's population.  It is important for choropleth map data to be standardized, because the enumeration units (in this case, states) typically vary in size, and mapping raw counts would be misleading.

I represented the poverty rate data with graduated symbols, in this case circles.  The smaller circles represent lower poverty rates and the larger circles represent higher poverty rates. I obtained the poverty data from the U.S. Census Bureau, and just as with the obesity data, it was standardized as a percentage of each state's population.  I used green circles because they stood out against the blue choropleth background while still allowing the audience to read the poverty rate labels inside them.  I classified both data sets into five classes using the Natural Breaks (Jenks) method.  I chose Natural Breaks because it did a good job of grouping similar values together, effectively highlighting areas of high and low poverty/obesity.  For example, it is quite easy to see that there is a corridor of high poverty and obesity in the southern US, extending northeastward into Appalachia.  To help the US state outlines and title stand out against the gray background, I added a drop shadow effect using Adobe Illustrator.  I used a darker gray background with lighter gray text boxes for the subtext and legends in order to bring those text boxes to the foreground, in accordance with visual hierarchy.
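The idea behind Natural Breaks is to place class boundaries where they minimize the variance within each class. A brute-force sketch of that objective (fine for a dataset the size of 50 states, though real tools use a faster dynamic-programming algorithm, and the rates below are made up, not the CDC or Census data):

```python
from itertools import combinations

def sse(values):
    """Sum of squared deviations from the class mean."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values)

def natural_breaks(data, k):
    """Jenks-style classing: try every way to cut the sorted data into k
    contiguous classes and keep the partition with the lowest total SSE."""
    data = sorted(data)
    n = len(data)
    best = None
    for cuts in combinations(range(1, n), k - 1):
        bounds = [0, *cuts, n]
        classes = [data[a:b] for a, b in zip(bounds, bounds[1:])]
        total = sum(sse(c) for c in classes)
        if best is None or total < best[0]:
            best = (total, classes)
    return best[1]

rates = [10, 11, 12, 25, 26, 27, 40, 41]  # made-up rates with obvious clusters
classes = natural_breaks(rates, 3)
print(classes)  # -> [[10, 11, 12], [25, 26, 27], [40, 41]]
```

Because the breaks fall in the natural gaps of the data, similar states land in the same class, which is exactly why this method highlights the high and low clusters so cleanly.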

At first glance, it is clear that a positive correlation exists between obesity and poverty in the United States.  By visually scanning the map, it is evident that some of the poorest states also happen to be some of the most obese.  In most cases, the darker blue states, which signify higher obesity rates, also have larger graduated circles symbolizing higher rates of poverty.  As with most data sets, there are of course exceptions, such as Florida, which has a lower obesity rate but still a relatively high poverty rate. As mentioned above, it is easy to see that the corridor of highest poverty and obesity rates in the country stretches from Louisiana northeastward to West Virginia.  On the flip side, the intermountain West, as well as areas in the Northeast such as Massachusetts, have lower obesity and poverty levels. I believe the map effectively meets the objective of visualizing the positive correlation between obesity and poverty.  My map can be found below.


With this final assignment, the semester has come to a close.  I really have enjoyed learning techniques and gaining knowledge to enhance my map making skills.  I construct maps and figures at work for some of our projects, and I've found myself already improving my maps using knowledge gained from this class.  So with that I'll say so long Computer Cartography, it's been an adventure!

Sunday, April 15, 2018

Module 12: Google Earth

The last lesson and lab for this cartography class focused on the future of mapping in our data-rich society and where we may be heading. We were introduced to the term "neo-cartographer," defined as a person who doesn't come from a traditional mapping background but instead uses advances in technology to create their own unique maps for their intended audience.  Because of the rapid advances in technology, there has also been a sharp increase in the number of civilians able to provide spatial data.  With these changes occurring, new areas of research have been created to take advantage of this large amount of new spatial data.  The following research areas were discussed in this week's lecture:

  • Volunteered Geographic Information (VGI)
  • Public Participation GIS (PPGIS)
  • Collaborative GIS
  • Geotargeting
  • Cloud based GIS

Volunteered Geographic Information, or VGI, is defined as digital spatial data collected by citizens instead of formal geospatial users.  There are many incentives to volunteer your data, including letting people know where a certain service is located, or finding services yourself, such as nearby restaurants.  In this case, people act as sensors, so the data footprint is very large; however, we do have to verify the integrity of the data.  A few examples of VGI platforms include Facebook, WikiMapia, OpenStreetMap, and Twitter.

Public Participation GIS (PPGIS) research is centered on getting the public involved with data creation.  Examples of PPGIS include web mapping, blog and video sharing, and social networking.  Collaborative GIS (geo-collaboration) involves a group of people working in GIS to accomplish a common goal.  Its characteristics include facilitating dialog, allowing private work, allowing saved and shared sessions, accounting for group behavior, and drawing the audience's attention to certain places or objects. Geotargeting is the method of determining the physical location of a user via IP address, the GPS receiver in a phone, or volunteered information. The incentive for obtaining the location is so users can be targeted with ads personally suited to them. Finally, one last area of research is cloud-based GIS.  The cloud has historically been hard to define, but in short, it takes tasks that would normally be handled by individual computers and lets huge computing centers connected over the internet complete them as web-based services.  Examples of cloud-based services include email and firewalls.  An example of a cloud-based GIS service that we have used throughout UWF's certificate program is ArcGIS Online, offered by Esri.

In this week's lab, we used the interactive mapping service Google Earth to share some of the data layers we created in ArcMap.  In the real world, it is advantageous to be able to share data created in ArcMap through Google Earth, because many people do not have ArcMap on their personal computers, while most people have, and are familiar with, Google Earth. In this assignment, we converted south Florida population, city, and surface water data created in Module 10 to KMZ files.  Using the "Map To KML" and "Layer To KML" tools in ArcMap, our south Florida map and population attributes were converted to .kmz files that could be opened and viewed in Google Earth.  Once my data was open in Google Earth, I recorded a tour of the major cities in south Florida with the 3D mapping layer on.  By playing the tour, users see the general location of the major south Florida cities, as well as a zoomed-in 3D view of each city skyline.  This is a really useful application, and it's gotten me excited to explore some other cities around the world!  Below is a screenshot of my south Florida dot density population map opened as a KMZ file in Google Earth.
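Under the hood, a KMZ is just zipped KML, which is plain XML. A minimal sketch of the kind of Placemark structure a "Layer To KML" export produces (the city names and coordinates are illustrative, not the Module 10 data):

```python
from xml.etree import ElementTree as ET

def cities_to_kml(cities):
    """Build a tiny KML document with one Placemark per (name, lon, lat)."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    for name, lon, lat in cities:
        pm = ET.SubElement(doc, "Placemark")
        ET.SubElement(pm, "name").text = name
        point = ET.SubElement(pm, "Point")
        # KML coordinate order is lon,lat[,alt], not lat,lon
        ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

doc = cities_to_kml([("Miami", -80.19, 25.76), ("Naples", -81.79, 26.14)])
print("<coordinates>-80.19,25.76,0</coordinates>" in doc)  # -> True
```

Because the format is this simple and open, any client from Google Earth to a web map can consume the exported layers, which is the whole point of the conversion.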

Sunday, April 8, 2018

Module 11: 3D Mapping

This week we dove into the world of 3D mapping using the Esri applications ArcScene and ArcGlobe. There are a few distinct advantages to 3D mapping over 2D.  We humans naturally see the world in three dimensions, so creating maps that display data the same way allows an even greater understanding of the data being depicted.  3D views allow users to see vertically stacked content, such as the floors of a high-rise building or the depth of a well below the ground surface.  Displaying data on a 3D map invites imagination and understanding by showing the data in a way the user may not have considered before.  Along with the pros, there are certain cons that have to be taken into account when creating or using a 3D map.  These include perspective and distortion problems depending on the angle at which the map is oriented.  Content can be hidden behind 3D features, and it is easy to get disoriented while moving around the map.  A 3D map can also be slow to load because of the large amount of data.

The lab assignment this week consisted of three sections.  The first section was an ESRI online training titled "3D Visualization Techniques."  The training consisted of multiple short videos and the following five exercises using ArcScene:

1. Set base heights for raster and feature data
2. Set Vertical Exaggeration
3. Set Illumination and background color
4. Extrude buildings and wells
5. Extrude parcel values

These five exercises provided us with the basic knowledge to work with 3D data.  First, we learned that there are two types of data used for 3D visualization: surfaces and features.  3D surfaces use rasters, terrain datasets, and Triangulated Irregular Networks (TINs) to model phenomena that vary continuously across an area.  3D features represent entities with distinct boundaries, such as buildings or telephone poles, that are placed on or below the surfaces. 3D data uses specified values in the z-field, while 2D data does not.  Often the z-field represents elevation, but it can also represent other continuous data such as soil pH or temperature.  We can tell that an object is 3D by looking at the shape field in the attribute table: if the shape field ends with a "Z," then the object is 3D.  There are other ways to enhance 3D data, such as vertical exaggeration, sun angle and shadows, background illumination/color, and environmental effects such as fog, just to name a few.

The second part of the lab had us convert 2D polygon data for buildings in Boston to 3D.  We did this by combining elevation data from a raster layer with the polygon building layer. Using the Create Random Points tool, we calculated a mean Z field that we could use to extrude the building polygons upward, thereby creating 3D buildings.  Finally, we viewed the data in Google Earth by converting it to a KMZ file.

The third part of the lab had us compare Charles Minard's famous 2D map depicting Napoleon's ill-fated Russian campaign of 1812 with a 3D version created in CityEngine.  Certain aspects of the map stood out better in 3D, such as the length of time the army stayed in one spot and how closely the temperature drop correlated with the loss of life during the retreat.

Below is a screenshot showing an example of 3D features from the fifth ESRI exercise, "Extrude parcel values."  In this case, there are land parcels with Z values representing the monetary value of each parcel: the taller the parcel, the more expensive it is.  This was done by opening the layer's Extrusion tab and making the changes there.