Tuesday, October 29, 2024

Land Use/Land Cover Classification, Ground Truthing & Accuracy Assessment

This week we focused on land classification by creating our own classified map for a portion of Pascagoula, MS. We were provided an aerial image and had to create a polygon layer representing the different land use/land cover classifications using the Anderson system down to a Level 2 classification. I accomplished this by creating a new feature class and drawing polygons based on the pixels in the image. I used a few extra tools, like the clipping tool, to help ensure each area had its own standalone polygon with no overlapping shapes. I then recorded each polygon's class in the attribute table and visualized the different classes using a color code.
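
For anyone curious what the setup step looks like in code, here's a minimal ArcPy sketch of creating the feature class and the class field. The geodatabase path, feature class name, and field name are my own placeholders, not from the assignment; the actual polygons were digitized by hand in ArcGIS Pro.

```python
import arcpy

# Hypothetical geodatabase; substitute your own workspace path
arcpy.env.workspace = r"C:\GIS\Pascagoula.gdb"

# New polygon feature class to hold the land use/land cover shapes
arcpy.management.CreateFeatureclass(
    arcpy.env.workspace, "LULC_Anderson", geometry_type="POLYGON")

# Field for the Anderson Level 2 code (e.g., 41, 43) noted per polygon
arcpy.management.AddField("LULC_Anderson", "ANDERSON_L2", "SHORT")
```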



Behold: my land classification map in all its glory!


To check the accuracy of our classification, I generated 30 random points across the area of the image. I copied the coordinates for each point and compared these locations in Google Maps. The resolution of the aerial imagery, combined with the street view feature, allowed for a better assessment of these points to see if my classification was correct. I calculated a simple accuracy of 80%, meaning 24 of the 30 points were classed correctly.
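
If you wanted to script the bookkeeping, a quick sketch like this shows the idea: generate the random coordinates, then tally the manual checks into a simple accuracy. The bounding box and the True/False results are made up for illustration, not the real image extent or my actual point-by-point checks.

```python
import random

# Hypothetical bounding box for the image extent (not the real coordinates)
xmin, ymin, xmax, ymax = -88.60, 30.35, -88.50, 30.45

random.seed(42)
points = [(random.uniform(xmin, xmax), random.uniform(ymin, ymax))
          for _ in range(30)]

# After checking each point in Google Maps, record True/False per point.
# Illustrative values; 24 of 30 correct matches the 80% I found.
correct = [True] * 24 + [False] * 6

accuracy = sum(correct) / len(correct)
print(f"Simple accuracy: {accuracy:.0%}")  # Simple accuracy: 80%
```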

The worst class was 43 Mixed Forest Land. All of these points would have been better classed as 41 Deciduous Forest Land. When determining the classes originally, I used the mixed class because I couldn't determine the type of trees. However, Google showed me mostly deciduous trees, so all of these should really be reclassed as such. Many of the other errors were the result of not drawing a boundary in the exact place, or lumping features into the surrounding class (like forest into residential, or residential into commercial). I interpreted the instructions as saying not to get too granular with the classifications, so my lumping proved to cause some errors!
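
One way to see these errors at a glance is a quick cross-tabulation of mapped versus reference classes, where misclassifications like my 43-should-be-41 points show up as off-diagonal counts. The labels below are illustrative only, not my actual results.

```python
import pandas as pd

# Made-up labels for a few check points, just to show the cross-tabulation
mapped    = [43, 43, 41, 11, 12, 41]   # my classification
reference = [41, 41, 41, 11, 11, 41]   # what Google Maps showed

confusion = pd.crosstab(pd.Series(mapped, name="Mapped"),
                        pd.Series(reference, name="Reference"))
print(confusion)
```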

In general, this was a really interesting exercise. When I started, I thought it was going to be the most tedious thing ever - drawing all of these polygons! But the snapping tool made the entire process go really fast once I had a few polygons on the map. The randomly generated points for checking accuracy were interesting, too. Whether a point landed on a misclassified area really came down to the luck of the draw. I can see why folks debate how many points you need to check, and how to distribute them over the project area. And in the end, you may still miss something. But you can't check every pixel and coordinate!

Monday, October 28, 2024

Visual Interpretation

This week kicked off the first of my assignments for Remote Sensing and Aerial Photography. We conducted several exercises to get us thinking about and practicing visual interpretation of aerial imagery. These included assessing the tone and texture of an image, identifying features using various methods, and comparing standard color imagery with false color infrared. Below are two of the maps I made.

In the first exercise, we assessed an image to label various tones from very dark to very light, as well as textures ranging from very coarse to very fine. We selected areas representing each and created polygons to show this.

In the second exercise, we inspected an image to find various features, using their shape/size, shadow, pattern, or association to determine what each object is. We created points for the objects and labelled them to show what they are and how we identified them.


Saturday, October 5, 2024

Scale and Spatial Data Aggregation

In this week's lab, we looked at how scale and resolution can affect your data and analysis. 

In the first part of the lab, we looked at the same hydrological data digitized at different scales. This showed us that the larger the scale, the higher the resolution, which means more data for any given area. For instance, the dataset at a scale of 1:1200 has the most detail for all features and includes both polyline and polygon features that the other two maps do not have at all. Its polylines and polygons also had more detailed geometry than those from the dataset digitized at 1:100000.

Similarly, we saw in the second part of the lab that higher resolutions of raster data carry more detail. We were given a 1m resolution raster of a mountainous area, derived from lidar data, and resampled it to various coarser resolutions. As the cell size grew, the nuance of the landscape became more generalized. Moving from 1m resolution to 50m means that 2,500 pixels (a 50 x 50 block) were combined into one larger pixel and averaged out. This resulted in a smoother raster with lower values in degrees. You can also see this in the imagery itself, going from a very detailed image at 1m to something that looks like it belongs in an 8-bit video game by 50m.
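
The aggregation itself is easy to sketch with NumPy. This is a block mean aggregation on a simulated grid, not the lab's actual lidar raster or the exact tool we used, but it shows how 2,500 fine cells collapse into one coarse cell.

```python
import numpy as np

# Simulated 1 m grid (1000 x 1000 cells) standing in for the lidar raster
fine = np.random.rand(1000, 1000) * 100

def aggregate(grid, factor):
    """Block-average a raster by an integer factor (mean aggregation)."""
    h, w = grid.shape
    h, w = h - h % factor, w - w % factor  # trim edges that don't divide evenly
    blocks = grid[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

coarse = aggregate(fine, 50)  # each output cell averages 50 x 50 = 2,500 cells
print(fine.shape, "->", coarse.shape)  # (1000, 1000) -> (20, 20)
```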

In the last part of the lab, we assessed congressional districts for gerrymandering by examining multipart districts and looking at the compactness of districts. Gerrymandering refers to drawing the boundaries of voting districts to achieve political advantage. In assessing the 14 multipart districts, I found 8 had justification for being multipart: they were all located along shorelines, and the other parts represented nearby islands. The other 6 had no obvious reason for a multipart configuration. In looking at compactness, I used the Polsby-Popper test to calculate a rating based on the ratio of the area of the district to the area of a circle with the same perimeter. I found North Carolina's 12th district had the lowest compactness score. North Carolina actually had two of the five least compact districts.

The least compact district according to the Polsby-Popper score, North Carolina's 12th.
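
The test itself is simple enough to script. The score works out to 4πA/P², which is the district's area divided by the area of a circle with the same perimeter; it ranges from 0 to 1, where 1 is a perfect circle. The numbers below are illustrative, not the actual NC-12 geometry.

```python
import math

def polsby_popper(area, perimeter):
    """Polsby-Popper compactness: 4*pi*A / P**2.
    Equals district area over the area of a circle with the same perimeter."""
    return 4 * math.pi * area / perimeter ** 2

# Illustrative numbers only; a low score like this signals a sprawling shape
print(round(polsby_popper(area=5000.0, perimeter=1200.0), 4))  # 0.0436
```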


GIS Portfolio

To show off all I have learned during my GIS Graduate Certificate program, I created an online portfolio. Click here to check it out.