Sunday, October 27, 2019

Improving Crew Resource Management

 Improving CRM in UAS operations

Crew Resource Management (CRM) is a method used in aviation to increase the situational awareness and flexibility of an aircraft flight crew. This method has been shown to reduce errors and increase the safety of an aircraft in flight. Even though the flight crew has been removed from the aircraft in UAS operations, CRM remains important; I would argue it is more important for that very reason. If the aircraft encounters a problem in flight, the crew is not on board to react and often must respond from a distance that can distort their perception of the situation. For this reason the flight crew must prepare for emergencies preemptively, and maintain effective communication and cooperation during the flight.

Tracking Aircraft Use

The purposeful integration of CRM into UAS operations has begun with our C-Astral Bramor PPX aircraft. The complexity and frequent use of this aircraft made it a natural and important place to begin. The first step was to create a way to track aircraft use. Figure 1 shows the sheet created to track aircraft use. It is a very simple checkout sheet that would commonly be seen in any operation tracking material usage.
Figure 1: Checkout sheet
This simple sheet enables us to quickly determine when the vehicle was last flown, which sensor is equipped, where the vehicle was flown, and who is responsible for the aircraft at that time. This basic level of bookkeeping is the first step toward enabling repeat flights, and allows an individual to officially accept responsibility for the aircraft.
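The checkout record boils down to a handful of fields. The same bookkeeping can be sketched in code; the field names here are my own illustration, not the exact columns on our sheet:

```python
# Hypothetical column layout for the checkout sheet; the actual
# field names on the paper form may differ.
FIELDS = ["date_out", "operator", "sensor", "location", "date_in"]

def log_checkout(rows, **entry):
    """Append a checkout record, defaulting unfilled fields to ''."""
    rows.append({f: entry.get(f, "") for f in FIELDS})

def last_flight(rows):
    """Return the most recent checkout record, or None if unused."""
    return rows[-1] if rows else None

rows = []
log_checkout(rows, date_out="2019-10-20", operator="Z. Miller",
             sensor="Altum", location="Doak Field")
print(last_flight(rows)["sensor"])  # Altum
```

Even this much answers the key questions at a glance: who has the aircraft, what sensor is on it, and when it last flew.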

Standardized Metadata Forms

The second, and larger, step toward enabling repeat flights was to standardize the metadata collected and recorded during flights. Prior to the creation of a standardized form, operators had a general idea of what they needed to collect, but it took a few flights for new operators to figure out what was and was not important. As operations expanded to multiple multirotors, multiple crews, and the Bramor PPX (also with multiple crews), ensuring that each crew knew what data to collect became very cumbersome. Figure 2 shows the sheet that was created.

Figure 2: Metadata form
This sheet was created as a text file so that it could be opened on any computer without risk of formatting changes between software versions. The file leaves little ambiguity about what needs to be collected, and separates the information into sections, each serving a different purpose. The "general" section provides the information required to replicate the flight and informs the crew of battery usage for cycle tracking. The "flight information" and "geolocating" sections provide the information required for data processing and logbook entries. Knowing data collection start and end times, along with the location, allows the data analysts to access the CORS network and increase the accuracy of the PPK GPS equipped on the Bramor PPX. Knowing the coordinate system in use, if a PPK GPS is not on board, allows the analysts to process the EXIF data in the images appropriately. The "weather" section is where the flight crew enters the METAR from the nearest airport; METARs are issued every hour and provide more localized weather than many forecasting services. The "crew" section records the core crew present for the operation and allows for more targeted follow-up questions (asking the sensor operator about sensor-specific issues, or the PIC about flight-specific ones). The "notes" section at the end is very important: it lets the crew record anything unique, positive or negative, that occurred during the flight, providing context not captured in the previous sections and giving every member of the flight crew a place to have their opinion officially written down.
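As a rough illustration, a form like this can be generated as a plain-text template. The section names below follow the form, but the individual fields are placeholders of my own, not a copy of the actual sheet:

```python
# Sketch of the plain-text metadata template. Section names mirror the
# form; the fields under each section are illustrative placeholders.
TEMPLATE = """\
GENERAL
  Aircraft:             Batteries used:
FLIGHT INFORMATION
  Data start (UTC):     Data end (UTC):
GEOLOCATING
  PPK GPS (Y/N):        Coordinate system:
WEATHER
  METAR:
CREW
  PIC:    Sensor operator:    Visual observers:
NOTES
"""

# Written as a .txt file so it opens identically on any machine,
# with no risk of format drift between software versions.
with open("flight_metadata.txt", "w") as f:
    f.write(TEMPLATE)
```

Keeping the template in plain text is the point: any crew member can fill it out on any machine, and the sections double as a prompt for what must be recorded.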

Crew Formation

The Bramor PPX crew was standardized at three people, with additional visual observers optional. Creating a standard crew removed the need to constantly train new crew members and allowed the three crew members to rapidly build experience with the vehicle. This experience allowed the crew to understand which portions of the operation needed improvement and which simply needed additional training. For example, parachute packing and battery charging turned out to be tasks that merely required training to perform adequately. One part of the operation, however, genuinely needed improvement: the checklists.

Checklist Creation

During repeat operations with the same crew we discovered that the provided checklist needed a few edits and clarifications. We began writing these in the margins and jumping around the checklist. Once a flight went by without any new margin edits, we began a full revision of the checklist. The biggest motivation for a full revision was the ordering of the sub-checklists. Figure 3 shows the original and new checklist orders.
Figure 3: Checklist ordering
 
We believed that the original order had the flight crew moving back and forth too often and prevented an operational flow from developing. One benefit of the original checklist was that each item was given a number, with the numbering continuing throughout the checklist. The new checklist includes all items from the original, but now focuses on the stage of the operation being performed. The new checklist is shown in figure 4.

Figure 4: New Checklist
To the right of each heading a series of numbers can be seen; these indicate where the steps were located in the original checklist. All items from the original checklist are present in the new one, as are the items that had been written into the margins. Not yet present is a checklist for the sensor operator; this is currently in progress.

Crew Roles and Responsibilities

During checklist creation, crew roles and responsibilities were defined. Three crew roles were established: pilot in command (PIC), sensor operator, and first officer (FO). The PIC is responsible for the safe operation of the aircraft from arrival at the site to departure. The sensor operator (Sensor) is responsible for ensuring successful operation of the vehicle payload. The FO is responsible for assisting both the PIC and Sensor in their pre-flight preparations, and for acting as an interface between the visual observers (VOs) and the PIC in flight. The purpose of UAS operations is data collection, and this data collection must be conducted safely; for this reason the PIC and Sensor share command of the vehicle. During setup the FO assists both the PIC and Sensor in their preparation for flight. Before vehicle launch, two actions take place. First, the PIC checks with the Sensor to ensure that the payload is ready for flight; the vehicle cannot be launched until it is. Second, the PIC and Sensor each have an opportunity to cancel the flight if they are uncomfortable with sensor operation or flight safety, and during this step the FO is encouraged to voice any concerns as well. If any member of the flight crew flags a portion of the checklist they feel needs review, that portion is reviewed to ensure it was completed adequately. Once the vehicle is in flight, the FO's primary role is as an interface between the PIC and the VOs. During flight the VOs update the FO on which VO has visual contact with the vehicle; if the PIC loses visual contact, the FO reports whether contact is maintained through a VO or has been lost entirely. If continuing the flight is determined to be unsafe, the PIC has final say in ending the flight.

The Effect on Operations

During our first operations it would take the flight crew up to 90 minutes to prepare for a flight once reaching the site. After implementing the methods described above, the flight crew can prepare for a flight in 15-20 minutes. This dramatic improvement is mostly due to changing the flight crew from "whoever is available and wants to fly" to "these are the only people who fly this vehicle," along with the reorganization of the checklist. While the time from site arrival to vehicle launch has decreased, neither flight safety nor the focus on data collection has been compromised.


Friday, October 4, 2019

Assessing the effectiveness of a controlled burn

Controlled Burn at Doak Property

Operations involving multiple UAS are complicated and require a great deal of planning and coordination. A controlled burn at Purdue's Doak Field, shown in figure 1, provided an opportunity to put that coordination into practice. The areas being burned are outlined in orange.
Figure 1: Burn areas
Our goal was to collect pre- and post-burn orthomosaics along with constant EO/IR surveillance of the burn. To accomplish this we brought a Bramor PPX with an Altum multispectral sensor, and a DJI Matrice M600 equipped with a Zenmuse XT2, a Sony Alpha a6000, and a PPK GPS. Menet Aero assisted us with surveillance by providing a Bramor C4EYE platform and operators.

Having these three vehicles in the same operations area required a good amount of coordination between operators. Both sets of operators arrived at the burn site prior to the burn and began to set up the operations area. Figure 2 shows the operations area, which is described further below.
Figure 2: Operations area
Figure 2 shows our operations area set up in a small clearing in one of the fields. We created this clearing with a weed eater in a location that provided good takeoff options (shown in blue) and recovery options (shown in green) in figure 3.
Figure 3: Operations area justification
As can be seen in figure 3, our positioning allowed us to take advantage of a driving path, providing an impressive length of clear area in a thin strip for takeoff and climb. The location also offered a large area of tall grass nearby for recovery, so we would not have to go far to retrieve the vehicle; this close proximity proved beneficial during flights, as discussed later. Most importantly, the area kept us a good distance away from the fires being set. The winds were calm and variable, but generally provided a tailwind on takeoff. While a tailwind on takeoff is undesirable, the conditions in the field made it the best option. Had the winds been stronger, we would have had to relocate our operations area.

The first flight of the day took place about two hours before the burns were set to begin; the Bramor PPX was equipped with the Altum multispectral sensor and launched, with the flight plan shown in figure 4. The PPK GPS system tied to the Altum sensor was instrumental in the mission, because the PPK data would allow the images to be processed accurately enough to create a comparison like the following: Story Map.
Figure 4: Bramor PPX flight plan
This flight plan provides 80% overlap and 80% sidelap for the images gathered, and at 400 feet yields a resolution of 5.2 cm/px. The mission had a duration of a little over 10 minutes and posed very few planning challenges. After the vehicle landed and was recovered, we discovered that no images had been taken during the mission. When this was discovered, I tasked a second flight crew with flying the area using a Matrice M600. This vehicle was equipped with a Sony Alpha a6000 camera tied to a PPK GPS, which would allow RGB data to be gathered with acceptable accuracy. This second crew prepped their vehicle, created their flight plan, and conducted their mission while the Bramor PPX flight crew troubleshot the Altum sensor. Menet Aero provided assistance during this process, and together we were able to solve the problem. Once the problem was solved, the Bramor PPX flight crew re-prepped the vehicle for flight and waited for the M600 crew to return so the airspace would be free. The second Bramor PPX flight collected data without a problem, but experienced frequent loss-of-communication issues. These were solved by mounting the Combox on a tall pole and having a member of the flight crew follow the pilot in command with it. By the time the vehicle was recovered, the controlled burns were about to begin. Due to the close proximity of the landing site, the inclusion of multiple crew members, and the inclusion of data analysis capabilities in the flight planning, we were able to quickly react to a sensor failure and successfully collect the required data.
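The 5.2 cm/px figure follows from basic camera geometry: ground sample distance is altitude times sensor width, divided by focal length times image width in pixels. A quick sketch, using nominal Altum-like parameters; the exact specs assumed here (8 mm focal length, 7.12 mm sensor width, 2064 px across) are illustrative rather than quoted from a datasheet:

```python
def gsd_cm_per_px(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    """Ground sample distance (cm/px) from basic camera geometry."""
    return altitude_m * 100 * sensor_width_mm / (focal_mm * image_width_px)

# Assumed Altum-like parameters: 8 mm lens, 7.12 mm sensor width,
# 2064 px across, flown at 400 ft (~120 m).
print(round(gsd_cm_per_px(120, 8.0, 7.12, 2064), 1))  # 5.2
```

The same function makes the tradeoff explicit: doubling the altitude doubles the GSD (coarser pixels) while covering more ground per image.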

During the Burn

As the controlled burns began, airspace and air traffic considerations had to be made. Menet Aero was tasked with gathering persistent EO and IR data; to accomplish this they brought a Bramor C4EYE. Purdue's flight crews were tasked with gathering spot EO and IR data, and brought a Matrice M600 equipped with a Zenmuse XT2. Prior to the first Bramor PPX flight it was determined that the C4EYE would fly at an altitude of 400 feet AGL and the M600 at 200 feet AGL. These altitude ceilings were chosen to reduce the likelihood of traffic conflicts and to increase the freedom of lateral movement for both vehicles during their flights.

Both the C4EYE and M600 were tasked with gathering EO and IR data during the controlled burns, but only the C4EYE was capable of staying aloft throughout all of the burns. With this in mind, why bring the M600 when it creates traffic concerns? While the sensors on each vehicle gathered the same kinds of data, the payloads were made with different intentions in mind. The C4EYE video, shown in figure 5, is intended to provide surveillance capabilities and can switch between EO and IR in order to more accurately track objects. The Zenmuse XT2 video, shown in figures 6 and 7, is intended to compare EO and IR side by side and records events in both spectra.
Figure 5: C4EYE video

Figure 6: XT2 EO
Figure 7: XT2 IR
Figures 6 and 7 show the video collected by the M600's XT2 sensor over the same area, and demonstrate the benefit that IR data can provide. In figure 6 it is almost impossible to make out what is happening through the smoke, but in figure 7 it is immediately clear where the fires are. The video from the C4EYE, figure 5, also shows this capability, but because of its higher altitude it can see more of the area and provide a large-scale picture of the fire situation. Earlier it was mentioned that the C4EYE is capable of providing target tracking and GPS information on objects within its field of view; figures 8 and 9 show this.
Figure 8: Target
Figure 9: Target GPS information
In figure 8 we can see a person standing just above the targeting square in the image, and in figure 9 we can see the field of view of the sensor (the purple box) and the GPS location of the target. In this scenario the person was responsible for starting the fires in the controlled burn, so they knew exactly where the fires were. In another situation this information could be vital for providing actionable real-time information to firefighters on the ground as fires approach them. If the C4EYE provides so much more information than the M600, the question of why use both resurfaces. The two vehicles represent two different data requirements from different organizations. The M600 provides side-by-side data of the fires and how they spread on a more agile platform, but lacks the duration and target tracking capabilities of the C4EYE; this data is useful for situations that do not require immediate actionable information, and provides very good data sets for later analysis. The C4EYE, on the other hand, sacrifices the amount of data collected to allow for a longer flight time and the capability to provide real-time actionable information in situations that require it.

Post Burn

As the burn neared completion, the Bramor PPX was prepared for a post-burn data collection flight, and the C4EYE was landed in order to open up the airspace at 400 feet. The problem suffered during the first pre-burn flight led us to also land the M600 and prepare it for a potential PPK mapping mission: we were confident that the Bramor PPX issue was resolved, but we wanted to be sure we got some sort of post-burn data for analysis in a worst-case scenario. In order to recreate the pre-burn data set as accurately as possible, the pre-burn flight plan was uploaded to the vehicle unchanged. The flight and recovery were uneventful and the data collection was successful. For a discussion of the data collection see the following link: Zach Miller's Blog.

Final Thoughts

The flight operations performed during the controlled burns at the Doak property were very successful and yielded a vast amount of good data. This data collection was possible due to the constant coordination and communication that took place during the operations between the Purdue, Menet Aero, and burn teams. When reviewing the successes and areas for improvement for the Purdue teams, the following come to mind. On the positive side, having a dedicated flight crew per vehicle, instead of one crew flying different vehicles during the different parts of the burn, allowed our teams to be more agile and respond to unexpected events. While the two flight crews increased agility, two pieces of equipment played an even larger role: a laptop and a generator. Bringing a laptop on a flight sounds like a simple thing, but it proved essential. After the first pre-burn flight we were able to quickly identify that the data was missing and begin prepping the vehicle for another flight. The laptop also had an external hard drive, which allowed easy data archiving after successful flights and made transferring data to servers afterward very quick. We were also able to quickly clear SD cards for their next flight, instead of swapping SD cards and attempting to remember which one held what data. The generator we brought was small, but allowed us to charge the M600 batteries after they were depleted in flight; this let us get four flights out of three battery sets over the course of the burn.

There were two areas where I saw improvement was needed: adequate preparation for full-day missions, and regular flight crews with semi-rigid roles. Adequate preparation for full-day missions was something we attempted, but we missed a few key things. We brought a cooler with food and drinks, but given the weather (~90F and sunny) we did not have nearly enough. We also overlooked bringing a table, chairs, or a pop-up shelter, which led to using the bed of a pickup truck as a table and the cab for chairs. Thankfully Menet Aero, who have more experience in these types of operations, were able to provide us with water and a good example of what to emulate equipment-wise. The second area of improvement, regular flight crews with semi-rigid roles, was again something we thought we had but that quickly broke down with the Bramor PPX. The M600 flight crew was smaller, two people, and had more experience with their vehicle, so they were able to quickly get the vehicle prepped and in the air. Prior to this event the Bramor PPX was not flown as often, due to the large area needed to make flying it "make sense" (justifying flying the Bramor PPX over the M600 for a 2-minute flight is difficult). The flight crew was also made up of three or four people with varying levels of experience on the platform; the PIC for the vehicle had remained constant, but visual observers and other crew roles had not. This led to setup taking longer than necessary and a small amount of bickering during the checklist process. The flights were still conducted safely and effectively, but the issues listed above highlighted the problems with the more casual "who is available and wants to do this" attitude we had toward filling out the flight crew. As the PIC for the Bramor PPX, I looked at this situation and decided to create a more consistent flight crew and to add rigidity to the crew roles, mirroring manned aviation.
This process will be covered in a future blog post.



Tuesday, September 24, 2019

Investigating Infrared with Loc8

During my investigations into Search and Rescue operations infrared sensors were mentioned as an alternative to RGB cameras. The contrast provided by temperature differences, like that of a person juxtaposed to ambient temperature, makes it easier to locate missing persons. I believe that software like Loc8 could help thermal imaging reach its potential in SAR operations. Loc8 was chosen due to my familiarity with it, and the ability to choose specific colors. Since thermal imagery shows temperature in false color based on the output settings desired, a specific temperature color could be selected and searched for. In this case a "white hot" scale was used, where the hottest item is shown in bright white and the coldest in dark black. While collecting images with a FLIR E95 I noticed that the temperature scale was changing with each image. Figures 1 and 2 below show an example of this.
Figure 1: Computer ambient
Figure 2: Computer with hot mug
Figure 1 shows a thermal image of a computer in sleep mode with only ambient temperature surrounding it. On the right side of the image, the temperature scale runs from 66.8F at the coldest to 74.2F at the hottest. In this image the computer is the hottest object and appears white. In figure 2 I have introduced a mug of hot tea into the frame, and the temperature scale has shifted to 67.6F on the low end and 123F on the high end; the tea mug is now the hottest object and appears white. This means that the color used to find 70F in figure 1 would find a much higher temperature in figure 2, making Loc8 unusable in this scenario.

In order for Loc8 to function properly in this situation, the images need to be put on the same temperature scale regardless of the object temperatures present. I found two ways to do this. The first option is to process images through the FLIR Tools software after data collection; this lets you see the temperature range of your data set and adjust the scale accordingly. The second option is to set the scale manually on the sensor, which allows a preset color range to be reused on any data set collected with that preset, though there is a risk that an object will fall outside the chosen temperature range. I took the second approach for this initial investigation and set the range to between 50F and 121F. I then created a temperature color range of ~74F to ~100F in Loc8 using images with the hottest temperature displayed, shown in figures 3 and 4.
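With the scale locked to 50-121F, a given temperature always maps to the same display level, which is what makes a sampled color reusable across images. A small sketch of that fixed "white hot" mapping (an illustration of the idea, not FLIR's actual implementation):

```python
def temp_to_level(temp_f, lo=50.0, hi=121.0):
    """Map a temperature onto a fixed 0-255 'white hot' gray scale.
    Temperatures outside [lo, hi] clip to the ends of the scale."""
    t = min(max(temp_f, lo), hi)
    return round(255 * (t - lo) / (hi - lo))

# With the scale fixed, 70F lands on the same gray level in every image,
# so a color sampled in one frame stays valid in the next.
print(temp_to_level(70))   # 72
print(temp_to_level(150))  # 255 (clipped to the top of the scale)
```

The clipping at the ends is exactly the risk mentioned above: anything hotter than 121F saturates to the same white, so those temperatures become indistinguishable.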

Figure 3: Low end color
Figure 4: High end color
I sampled the images near the points called out as the hottest in each image, and used those samples for the color range. I then tried to automatically detect the residual heat from a hand print, and then asked someone to walk around a hangar while I gathered images. Figure 5 shows the hand print, and figures 6 and 7 show the software locating the person in the hangar.
Figure 5: Detected hand print
Figure 6: Detected person 1
Figure 7: Detected person 2
Figure 5 shows that the software has potential to detect residual heat from objects, but also shows that the color scale on the right of the frame has been falsely detected. I was able to fix this in the other images by adjusting the settings in Loc8. Figures 6 and 7 show a person walking around the aircraft in the hangar. Figure 6 has highlighted the person's face above a wing, though that would likely have been visible without Loc8.


In the future I hope to make my temperature sampling more precise so that I can home in on specific temperatures.




Thursday, August 29, 2019

AT209 First weeks

This semester I was assigned to teach AT209 Civilian Unmanned Systems. I was assigned this fairly late in the summer, one week before classes started, and was very fortunate to have the assistance of Dr. Hupy and Zach Miller for course planning. AT209 is the third course in the UAS major and the first course in the UAS minor. The objectives for this course are as follows:
---------------------------------------------------------------------------------------------------------------------------
     This course is about utilizing Unmanned Aerial Systems (UAS) as an applied tool in civilian market-based applications. Although piloting, mission planning, and crew resource management fundamentals will be integrated into the curriculum, the key focus will be on proper data collection, processing, and analysis. Safety and ‘drone ethics’ will also be stressed throughout the course. The course is not designed to make you an expert in UAS, but for you to have a strong foundation upon which to pursue UAS applications in the workplace, or further graduate research.  

      The course will be taught with a mix of labs and lecture, with hands-on learning applied as much as possible. Students should expect to become familiarized with basic concepts that relate to becoming an FAA Part 107 commercial pilot. Students will also learn core fundamentals of using UAS for applied Geospatial Data applications.  
  
Students should expect to complete a robust series of readings and online tutorials outside of class sessions. These materials will be fundamental for understanding what the weekly material covers. The student should expect to complete pre-class quizzes on these materials, thus ensuring that all students are aware of what will be covered in the weekly class period. Overall, the objective of this course is to instill the following skills:  
  • The ability to think of UAS data collection in a geospatial manner  
  • The ability to critically think of what type of platform is best suited for the given task/goal, and how best to collect that data with the proper sensor.  
  • The fundamental difference between Radio Control aerial platforms, and those that allow the use of autopilot/ground station technology.   
  • How to survey Ground Control Points (GCP) using current GPS technology, as well as understanding the limits of GCPs
  • Construction of technical style reports in a web or blog based format
  • Construction of instructional materials, in both written and video format
--------------------------------------------------------------------------------------------------------------------------
In order to teach these objectives we decided to base the lectures and labs around an in-depth exploration of sensors and FAA Part 107 preparation. Sensors were chosen as the focus of this class to ensure that the students have a comprehensive understanding of sensor operation either early in the major or immediately in the minor.

The first weeks of the course are an investigation of digital photography. We decided to begin here because of the pervasive nature of digital cameras on UAS platforms. The lectures for this section focus on manual settings for digital cameras; while most digital cameras have good automatic settings, it is important for students to understand how to manually correct for bad images during data collection. We designed a lab exercise intended to give students an opportunity to investigate the effects of these settings using a DJI Mavic 2 Pro and DJI Mavic Airs. Using iPads provided by Purdue, the students took a picture at each aperture, ISO, and shutter speed available on the vehicles. The students saved these images to their own student folders and will be using those photos in later labs focusing on image processing.
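The relationship the lab explores can be summarized with the standard exposure value formula, EV = log2(N²/t), adjusted for ISO: a one-stop change in one setting can be offset by an opposite change in another. A brief sketch:

```python
import math

def exposure_value(aperture_f, shutter_s, iso=100):
    """Exposure value referenced to ISO 100:
    EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(aperture_f**2 / shutter_s) - math.log2(iso / 100)

# Stopping down from f/2.8 to f/4 while doubling the exposure time
# leaves the exposure value essentially unchanged.
print(round(exposure_value(2.8, 1/100), 1))  # 9.6
print(round(exposure_value(4.0, 1/50), 1))   # 9.6
```

This is the tradeoff the students see in their test images: the same scene brightness can be reached with different aperture, shutter, and ISO combinations, each with its own side effects (depth of field, motion blur, noise).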






Friday, August 2, 2019

Troubleshooting new equipment

Recently our lab received a Zenmuse XT2 and a PPK unit for one of our M600 Hexacopters. We quickly equipped the M600 with the PPK unit, and an undergraduate flight team began using the vehicle.


Figure 1: M600 with PPK attached

Figure 2: PPK unit

Figure 3: Sony Alpha a6000 camera tied to PPK unit

 We quickly discovered that while the PPK system was logging position data, the attached Sony Alpha a6000 was not capturing images.

After a brief discussion, the flight crew informed me that while they had correctly hooked up the PPK system and powered it on, they had not pressed any of its buttons. We reasoned that the system might not have known when the mission was supposed to begin, because it did not interface with the flight planning software the crew was using.

On the next flight the flight crew pressed the control button before arming the vehicle and beginning the flight. They pressed the control button again once the vehicle had landed, but did not hear a shutter noise from the camera. When they returned to the lab to look at the data, we found that, once again, the position log existed but the only image found had been taken before the vehicle began its mission. This began an in-depth investigation into the problem by myself, the flight crew, and Dr. Hupy. We focused on the camera, since the PPK system was creating the position log, and began by going through as many settings as we could to better understand it. During this process I discovered that the camera has a sleep setting that defaults to one minute. Since the vehicle usually takes over one minute to reach its target area, it made sense that the camera would go to sleep before arriving. I set the sleep timer to 30 minutes, and we got the vehicle ready for another flight the following day.

The next day I accompanied the flight crew as a way to get out of the lab for a bit. The flight crew pressed the capture button at the beginning of the flight and again at the end, hearing the shutter sound each time. After returning to the lab we discovered that the only pictures that existed were the ones taken at the beginning and end of the flight.

This was incredibly frustrating and led to another day of troubleshooting. The first thing we did was ensure that the camera settings were as we had previously set them. We discovered that whenever the vehicle is powered off, the sleep setting resets to one minute. After restoring the settings we wanted, the vehicle was taken outside and walked around. The vehicle was supposed to capture an image every so many meters, but while we walked it around we did not hear shutter sounds unless we pressed the control button. We then reset the camera sleep setting to one minute and walked the vehicle around, pressing the control button every 45 seconds; the camera would not go to sleep as long as the PPK system was sending it triggers. These tests told us that the sleep setting was not the problem, and that the issue more likely lay in the PPK system's communication with the camera.

We returned to the lab to investigate the PPK once again, each of us searching through a different section of the manual. I had taken the PPK file details section, which contained the details of the .txt files associated with the equipment. I found the section shown in figure 4, and it looked like a promising find.
Figure 4: Configuration file
As it turned out, the default setting for the PPK system is to trigger only when the control button is pressed. This was changed to trigger by 2D distance, and the vehicle was walked around again. This time the shutter sound could be heard at regular intervals, and the system was confirmed to work. I look forward to using this system in the future, as PPK is a powerful data collection tool.

Wednesday, June 26, 2019

Finding animals with Loc8

Rapidly finding animals is useful in many fields, but this post will focus on agriculture. Through cooperation with Purdue's Beef Unit, two undergraduates and I were able to acquire data sets of cattle in a field. In this data set there were two populations of cattle that I was interested in: brown and black. I began by creating spectral databases following the rules described in my last post: large numbers of individual color files instead of color ranges or single files of individual colors. The databases I created can be seen in figure 1.
Figure 1: Color databases
Figure 2 and 3 show examples of the data-set collected.
Figure 2: Cattle data-set sample 1

Figure 3: Cattle data-set sample 2
Looking at figures 2 and 3, it is clear that most of the cattle are black and that there are two main groupings: one near the giant puddle and one in separate pens. Each contains sub-groups. The puddle grouping splits into the cattle in the puddle and those trailing away from it; the pen grouping has two clusters somewhat near each other in the pens. Only the puddle grouping contains a brown cow. I performed three tests looking for brown cattle in the data-set, varying the minimum number of pixels required for a positive hit. I searched with a minimum pixel setting of 1, then 2, then 4, and determined that a minimum of 1 was most effective. Figures 4 through 7 show examples of these results.
Figure 4: Brown Cow example 1
Figure 5: Close up of example 1

Figure 6: Brown Cow example 2
Figure 7: Close up of example 2
In both of these examples the desired animal was found. The close-up in figure 5 shows that Loc8 registered multiple positive hits on the animal, while figure 7 shows only one. The difference is interesting but makes no functional difference: the software provided the animal's location in both cases. This test shows Loc8's ability to find and geolocate a specific animal out of a group. It worked here because the animal's coloration differed from the rest of the group, a constraint worth investigating further.
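The minimum-pixel setting tested above can be approximated with a small sketch. This is my own illustration of the idea, not Loc8's implementation: group adjacent matched pixels into blobs and discard blobs smaller than the threshold, so a higher minimum suppresses single-pixel noise but also risks dropping small targets.

```python
def components(grid):
    """Group 4-connected True cells in a boolean match grid into blobs."""
    seen, comps = set(), []
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] and (r, c) not in seen:
                stack, comp = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                                and grid[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                comps.append(comp)
    return comps

def hits(grid, min_pixels):
    """Keep only blobs at least `min_pixels` in size, like a min-pixel filter."""
    return [c for c in components(grid) if len(c) >= min_pixels]

# A lone matched pixel plus a 3-pixel blob (toy match grid):
grid = [[True,  False, False],
        [False, True,  True],
        [False, True,  False]]
print(len(hits(grid, 1)), len(hits(grid, 2)))   # 2 hits at min 1, 1 hit at min 2
```

With a minimum of 1 both blobs count as hits; raising the minimum to 2 drops the single pixel, which mirrors why a stricter setting can miss a small brown cow.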


Locating one unique animal in a group is useful, but can the software also locate the larger groups? The biggest challenge in finding the black cattle, or any black object, is preventing shadows from being located instead of the desired objects. Figures 8 and 9 show examples of the puddle group of cattle.
Figure 8: Puddle group example 1

Figure 9: Puddle group example 2
Figure 8 shows that most of the cattle trailing toward the puddle have been located using this method, and that the cattle in the puddle are all encompassed by a single circle. Figure 9 does not alert to any of the cattle leading up to the puddle, but does alert to the puddle cattle individually. It is interesting that the software sometimes locates cattle as a group and sometimes as individuals. I am unsure why, but both behaviors still provide the location of the animal groups. Figure 10 below shows how Loc8 performed in finding the cattle in separate pens.

Figure 10: Cattle in pens
Figure 10 shows most of the cattle being found in five groups, along with one false positive on a shadow. As I stated earlier, when searching for black objects shadows often show up as false positives. This data-set produced surprisingly few shadow false positives, and I believe that is because of the many shades of black used for the search.
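Why many individual shades beat one broad range can be sketched with a simple nearest-color test. This is an illustration of the principle, not Loc8's matching algorithm, and the sample colors and tolerance are hypothetical: a pixel is a hit only if it sits close to some sampled cattle shade, so a bluish shadow pixel falls outside every sample without the database having to cover one huge dark range.

```python
def matches(pixel, database, tol=20.0):
    """A pixel counts as a hit if it is within `tol` (Euclidean RGB
    distance) of ANY color in the database. Many tight samples cover
    real cattle without also sweeping in every dark shadow."""
    return any(
        sum((p - d) ** 2 for p, d in zip(pixel, color)) ** 0.5 <= tol
        for color in database
    )

# Hypothetical shades sampled from black cattle in the imagery:
cattle_shades = [(18, 16, 15), (32, 30, 28), (45, 40, 38)]

cow = (30, 28, 27)       # dark, neutral pixel on a cow's back
shadow = (70, 72, 80)    # cooler, slightly blue shadow pixel
print(matches(cow, cattle_shades), matches(shadow, cattle_shades))  # True False
```

A single wide black-to-gray range would have to include the shadow pixel to catch all three cattle shades; the discrete-sample approach does not.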


Wednesday, June 19, 2019

Reduced False positives

Figure 1: GCP without false positives


My last Loc8 post focused on reducing false positives while searching for GCPs in a field. That post described the methods by which I reduced the number of incorrectly identified images in a data-set. One problem I described at the end of the post was a large number of false positives within correctly identified images, like the image shown below.
Figure 2: Correct Identification with false positives
One of the circles in figure 2, the bottom rightmost, has correctly circled one of the pink GCPs, so I marked the image as correctly identifying the target. While this works for assessing images in a non-time-critical environment, it would be much less useful when time is of the essence. During this processing I inspected every identified pixel; it quickly became monotonous, and I found myself moving through the hits far too fast. That rapid searching often forced me back through an image for a second look at a point, just to be sure I had correctly classified it as a positive or a false positive. Even though I have worked with this data-set extensively and know where the GCPs are in most images, the searching still greatly increased the time spent inspecting each image.

After many attempts to eliminate or reduce false positives in the data-set, I had a conversation with the Loc8 team and was given the idea of creating multiple discrete colors and searching with those. In my last post I created a spectral range using multiple individual colors, and that did reduce the false positives. This time I tried using multiple "ranges" of individual colors and received zero false positives. Figure 3 below is an example from this run. For this test I focused only on the pink GCPs.


Figure 3: No false positives
Figure 3 is one of the 15 images that Loc8 flagged as containing a GCP, and none of them contained a false positive. While figure 3 shows no false positives, it does show two false negatives: the GCPs at the top and bottom of the image were not identified. There are also many images containing GCPs that were not flagged at all. Finding a balance between false positives and false negatives is the next hurdle to clear.
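The trade-off described above is the standard precision/recall balance: precision penalizes false positives, recall penalizes false negatives. The counts below are illustrative, not exact tallies from this run; they only show how perfect precision can coexist with poor recall.

```python
def precision_recall(true_pos, false_pos, false_neg):
    """Standard detection metrics.
    precision = TP / (TP + FP): fraction of alerts that are real.
    recall    = TP / (TP + FN): fraction of real targets that were found."""
    precision = true_pos / (true_pos + false_pos) if true_pos + false_pos else 0.0
    recall = true_pos / (true_pos + false_neg) if true_pos + false_neg else 0.0
    return precision, recall

# Hypothetical counts: 15 correct flags, 0 false positives,
# 10 GCP occurrences missed entirely.
p, r = precision_recall(15, 0, 10)
print(p, r)   # perfect precision, but only 60% of targets found
```

Tightening the color search drove precision to 1.0 at the cost of recall; the earlier broad-range runs sat at the opposite corner of the same trade-off.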



Tuesday, June 4, 2019

Assessing storm damage with geospatial data


A storm came through West Lafayette, spawning a tornado that destroyed one of Purdue's agricultural barns. We were able to take the class out to the debris field and gather data with two different platforms. One goal of this flight was to experiment with using Loc8 and geospatial video to determine the location of debris along the path. Using these software packages to find debris could assist communities in disaster recovery by decreasing the time required to clean up after a disaster. A student team processed the debris field data in Pix4D to create an orthomosaic and a 3D mesh.
Figure 1: Orthomosaic
This gives us a rough idea of the size of the debris field and the direction it runs. If we were to use this image to collect the debris we would run into a problem: the debris collectors would lose track of where they were in the field because the environment is very uniform. This might prevent them from collecting all of the debris on their first attempt, forcing subsequent flights. Using Loc8 removes this problem.
Figure 2: Loc8ed debris
Figure 3: Debris location
Figure 2 shows that Loc8 found a large number of false positives during its search for the debris in this image. While this isn't perfect, it isn't bad: the software still found the debris and provided a GPS coordinate for everything in the image. If we sent out debris collection teams we could provide them with a list of GPS coordinates to search, as long as the images were reviewed beforehand and appropriately flagged or archived (as suggested in Loc8's tutorial videos). The downside to this approach is that the GPS coordinate is for the entire image, so some searching would still be necessary. This could be easily mitigated by providing the collection teams with the flagged images for reference in the field.
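Turning reviewed, flagged images into a field checklist could look something like the sketch below. The record layout, image names, and coordinates are all hypothetical, not Loc8's export format; the point is simply to deduplicate near-identical image centers and keep the image name so a team can reference the picture on site.

```python
def build_checklist(flagged_images):
    """Collapse flagged images into a deduplicated list of GPS coordinates
    for debris-collection teams, keeping the source image name."""
    checklist, seen = [], set()
    for image in flagged_images:
        # Round to ~1 m so overlapping frames of the same spot merge.
        key = (round(image["lat"], 5), round(image["lon"], 5))
        if key not in seen:
            seen.add(key)
            checklist.append({"lat": image["lat"], "lon": image["lon"],
                              "image": image["name"]})
    return checklist

# Hypothetical flagged images (coordinates are illustrative only):
images = [
    {"name": "IMG_0102.JPG", "lat": 40.42311, "lon": -86.91254},
    {"name": "IMG_0103.JPG", "lat": 40.42311, "lon": -86.91254},  # same spot
    {"name": "IMG_0117.JPG", "lat": 40.42398, "lon": -86.91301},
]
print(len(build_checklist(images)))   # 2 distinct search points
```

A list like this, paired with the flagged images themselves, is essentially the hand-off I describe above.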

The third method that I investigated for debris cleanup was using LineVision geospatial video.
Figure 4: Debris pattern in LineVision

Figure 5: Frame with desired debris



Figure 6: Debris coordinates

Using LineVision I found the same debris seen in figure 2. This software was able to provide the GPS coordinates of the object inside the image, as opposed to the coordinates where the image was taken. The coordinate was provided after I identified the debris and manually marked it. In disaster cleanup this software could be used to provide cleanup teams with checklists for debris removal.

Each software package used for debris analysis is very capable, but for tracking debris LineVision appears to be the best. The ability to remove guesswork from debris removal is very appealing and could reduce cleanup times. Loc8 provided similar capabilities but was not as exact as LineVision in this situation. Where Loc8 shines is in situations where manual tagging is impossible or infeasible, and here the debris was very easy to tag manually. Pix4D was not a good choice for locating the debris: while the orthomosaic does include the debris, it provides little location information for removal. Where Pix4D was useful was in assessing the damage done to the barn that created the debris, shown in figure 7.

Figure 7: Barn
The condition of the damaged barn surprised all of us. We collected data two days after the storm, and by then the barn had been completely torn down. Had the barn still been standing we could have used Pix4D to assess the damage to the building; instead we were able to assess the cleanup efforts.







Thursday, May 23, 2019

Reducing false positives in Loc8

Finding GCPs in Loc8 was the topic of my previous blog post. I was able to find many of the points but was given many false positives. I believe this was due to setting "Min pixel" to 1 and using a large range of colors taken from the target. I also mentioned that the samples I took were affected by the distance from camera to target; the colors blurred together a bit. My first step in reducing false positives was to eliminate this issue, which I hoped would narrow the color range collected.
Figure 1: GCPm2 samples

Figure 2: GCPa2 samples
When comparing the color ranges collected from aerial imagery (figures 3 and 4) to the ranges collected from the ground we can see that the ground collection has more vibrant colors.
Figure 3: GCPm color range
Figure 4: GCPa color range
Since I changed the data sampling method in this test, I kept the settings the same, shown in figure 5. This way I could isolate the effect of collecting data from images taken prior to the search.
Figure 5: Settings used
During this round of processing Loc8 alerted me to 18 images: 14 false positives and 4 positives. In my opinion this is already a massive improvement over the previous run. While most alerts were still false positives, I was given only 14 of them instead of 132. Each of the 14 false-positive images contained only one or two alerts, which reduced the time I had to spend checking each one.
Figure 6: False positive
  
Figure 7: Found Aeropoint
Figure 7 shows a successfully located AeroPoint, but also shows two manual GCPs that were not found. Figure 8 zooms in for a clearer view of all three targets.
Figure 8

Figure 9: Found manual GCP
Figure 9 shows a manual GCP being found while multiple AeroPoints are ignored. The "scents" given to the software were able to find both kinds of GCPs, but still generated a high ratio of false positives to positives. Even so, this method reduced the number of images flagged by false positives by roughly 90%, and the total number of false positives by even more. The method also produced many false negatives, which will need to be corrected. One way to balance the two may be to set a color range based on an image captured before the flight.
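The roughly-90% figure follows directly from the two image counts reported above:

```python
# False-positive image counts from the two runs: 132 before the
# ground-sampling change, 14 after.
before, after = 132, 14
reduction = (before - after) / before
print(f"{reduction:.1%}")   # 89.4%, i.e. roughly a 90% reduction
```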


