Accurate and high-resolution imagery is essential when extracting features. The model will only be able to identify the palm trees if the pixel size is small enough to distinguish palm canopies. Additionally, to calculate tree health, you'll need an image with spectral bands that will enable you to generate a vegetation health index. You'll find and download the imagery for this study from OpenAerialMap, an open-source repository of high-resolution, multispectral imagery.
Explore the data

To begin the classification process, you'll download an ArcGIS Pro project containing a few bookmarks to guide you through the process of creating training samples.
Create training schema

Creating good training samples is essential when training a deep learning model, or any image classification model, and it is often the most time-consuming step in the process. To give your deep learning model the information it needs to extract all the palm trees in the image, you'll create features for a number of palm trees to teach the model what the size, shape, and spectral signature of a coconut palm may be. These training samples are created and managed through the Label Objects for Deep Learning tool. Creating a training dataset entails digitizing hundreds of features and can be time consuming. If you do not want to create the training samples, a dataset has been provided in the Results geodatabase in the Provided Results folder, and you can advance to the Create image chips section.
Create training samples

To make sure you're capturing a representative sample of trees in the area, you'll digitize features throughout the image. These features are read into the deep learning model in a specific format called image chips: small blocks of imagery cut from the source image. Once you've created a sufficient number of features in the Image Classification pane, you'll export them as image chips with metadata.
Create image chips

The last step before training the model is exporting your training samples to the correct format as image chips.
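In ArcGIS Pro, this export is handled by the Export Training Data For Deep Learning tool. Conceptually, chipping an image amounts to slicing the raster into fixed-size, possibly overlapping tiles. The following NumPy sketch is illustrative only; the chip size and stride are arbitrary choices, and the real tool also clips label geometries and writes per-chip metadata:

```python
import numpy as np

def make_chips(image, chip_size=256, stride=128):
    """Cut a (bands, rows, cols) array into overlapping square chips.

    Simplified sketch: a real export tool also writes the label
    features and metadata that accompany each tile.
    """
    bands, rows, cols = image.shape
    chips = []
    for r in range(0, rows - chip_size + 1, stride):
        for c in range(0, cols - chip_size + 1, stride):
            chips.append(image[:, r:r + chip_size, c:c + chip_size])
    return np.stack(chips)

# Example: a 3-band 512x512 image yields a 3x3 grid of 256x256 chips.
demo = np.zeros((3, 512, 512), dtype=np.uint8)
print(make_chips(demo).shape)  # (9, 3, 256, 256)
```

Overlap between chips (stride smaller than chip size) helps ensure trees near tile edges appear whole in at least one chip.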
In this module, you downloaded and added open-source imagery to a project, created training samples using the Training Samples Manager pane, and exported them to a format compatible with a deep learning model for training. Next, you'll create a deep learning model and identify all the trees on the plantation.

Detect palm trees with a deep learning model

Before you can begin to detect palm trees, you need to train a model. Training a model entails running your training sample data through a neural network over and over again. This computationally intensive process is handled by a geoprocessing tool, and it is how the model learns what a palm tree is and is not. Once you have a model, you'll apply it to your imagery to automatically identify trees.

Train a deep learning model

The Train Deep Learning Model geoprocessing tool uses the image chips you labeled to determine what combinations of pixels in a given image represent palm trees. You'll use these training samples to train a single-shot detector (SSD) deep learning model. Depending on your computer's hardware, training the model can take more than an hour, so it's recommended that your computer be equipped with a dedicated graphics processing unit (GPU). If you do not want to train the model, a deep learning model has been provided to you in the project's Provided Results folder, and you can optionally skip ahead to the Palm tree detection section of this tutorial.
Palm tree detection

The bulk of the work in extracting features from imagery is preparing the data, creating training samples, and training the model. Now that these steps are complete, you'll use the trained model to detect palm trees throughout your imagery. Object detection typically requires multiple tests to achieve the best results, and there are several parameters you can adjust to help your model perform well. To test these parameters quickly, you'll first detect trees in a small section of the image. Once you're satisfied with the results, you'll extend the detection to the full image. If you did not train a model in the previous section, a deep learning package has been provided for you in the Provided Results folder. Classifying features is a GPU-intensive process and can take a while to complete depending on your computer's hardware. If you choose not to detect the palm trees, results have been provided and you may skip ahead to the Refine detected features section.
Refine detected features

Ensuring an accurate count of palm trees is important. Since many trees have been counted multiple times, you'll use the Non Maximum Suppression tool to remove the duplicates. However, you have to be careful: palm tree canopies can overlap, so you'll remove features that are clearly duplicates of the same tree while ensuring that separate trees with some overlap are not removed.
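The idea behind non-maximum suppression can be sketched in a few lines of plain Python. This is a conceptual illustration only, not the ArcGIS tool's implementation; the box format and the overlap threshold here are assumptions for the sketch. Each detection is a bounding box with a confidence score, and any box that overlaps a higher-confidence box beyond a threshold is treated as a duplicate:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, max_overlap=0.4):
    """Greedy NMS: visit boxes from highest to lowest confidence, keeping a
    box only if it does not overlap an already-kept box beyond the
    threshold. Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= max_overlap for j in kept):
            kept.append(i)
    return kept

# Two near-identical detections of one tree, plus one separate tree:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # [0, 2]
```

The threshold is the tuning knob the section describes: set it too low and two real trees with touching canopies collapse into one detection; set it too high and duplicates survive.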
You've just trained and used a model to detect palm trees. Next, you'll use raster functions to obtain an estimate of vegetation health for each tree detected in your study area. It is important to realize that your model's results might not be perfect the first time. Training and implementing a deep learning model is an iterative process: better results can often be achieved by refining the training samples and tool parameters and training again.
Estimate vegetation health

In the previous module, you used a deep learning model to extract coconut palm trees from imagery. In this module, you'll use the same imagery to estimate vegetation health by calculating a vegetation health index. To assess vegetation health, you'll calculate the Visible Atmospherically Resistant Index (VARI), which was developed as an indirect measure of leaf area index (LAI) and vegetation fraction (VF) using only reflectance values from the visible wavelengths:

VARI = (Rg - Rr) / (Rg + Rr - Rb)
where Rr, Rg, and Rb are reflectance values for the red, green, and blue bands, respectively (Gitelson et al. 2002). Typically, you would use reflectance values in both the visible and the near infrared (NIR) wavelength bands to estimate vegetation health, as with the normalized difference vegetation index (NDVI). However, the imagery you downloaded from OpenAerialMap has only three bands, all in the visible part of the electromagnetic spectrum, so you'll use VARI instead.

Calculate VARI

The VARI calculation takes the three bands of the OpenAerialMap imagery as input. To calculate VARI, you'll use the Band Arithmetic raster function. Raster functions are quicker than geoprocessing tools because they don't create a new raster dataset; instead, they perform real-time analysis on pixels as you pan and zoom.
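The band arithmetic itself is simple per-pixel math. As a plain NumPy sketch of the same calculation (illustrative only; in the tutorial the Band Arithmetic raster function computes this on the fly):

```python
import numpy as np

def vari(red, green, blue):
    """Visible Atmospherically Resistant Index, per pixel:
    VARI = (Rg - Rr) / (Rg + Rr - Rb).
    The denominator can be zero, so divide safely."""
    red, green, blue = (np.asarray(b, dtype=float) for b in (red, green, blue))
    num = green - red
    den = green + red - blue
    return np.divide(num, den, out=np.zeros_like(num), where=den != 0)

# Healthy vegetation reflects more green than red, giving a positive VARI:
# (0.30 - 0.10) / (0.30 + 0.10 - 0.05) = 0.20 / 0.35
print(vari([0.10], [0.30], [0.05]))  # approximately [0.5714]
```

Because the index uses only visible bands, it works on ordinary RGB drone imagery like the OpenAerialMap scene used here.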
Extract VARI to Coconut Palms

Having a raster layer showing VARI is helpful, but not necessarily actionable. To figure out which trees need attention, you want to know the average VARI for each individual tree. To find the VARI value for each tree, you'll extract the underlying average VARI value and symbolize the features to show which trees are healthy and which need maintenance. First, you'll convert the polygon features to points.
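Extracting a raster value to a point boils down to converting each point's map coordinates into a row and column in the raster grid. The sketch below is a conceptual illustration of that lookup, not the ArcGIS extraction tool; it assumes a north-up raster with square cells whose top-left corner sits at `origin`:

```python
import numpy as np

def sample_raster_at_points(raster, points, origin=(0.0, 0.0), cell_size=1.0):
    """Look up the raster value under each (x, y) point.

    Simplified sketch: `origin` is the map coordinate of the raster's
    top-left corner, and y decreases as row index increases.
    """
    values = []
    for x, y in points:
        col = int((x - origin[0]) / cell_size)
        row = int((origin[1] - y) / cell_size)
        values.append(float(raster[row, col]))
    return values

# A 4x4 grid covering x in [0, 4], y in [0, 4], top-left at (0, 4):
grid = np.arange(16).reshape(4, 4)
print(sample_raster_at_points(grid, [(0.5, 3.5), (2.5, 1.5)], origin=(0.0, 4.0)))
# [0.0, 10.0]
```

In the tutorial workflow, each tree point picks up the VARI value beneath it, which then drives the health symbology.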
Optional: Assign field tasks and monitor project progress

One of the biggest benefits of using ArcGIS Pro for feature extraction and imagery analysis is that it integrates with the entire ArcGIS platform. In the last tutorial, you used the deep learning tools in ArcGIS Pro to identify coconut palm trees from imagery. The palm trees can be stored as features in a feature class that is ready for use in a GIS. To extend the workflow, you can publish your results to the cloud, configure a web application template for quality assurance, assign tree inspection tasks to workers in the field, and monitor the progress of the project using a dashboard.

Publish to ArcGIS Online

To use configurable apps to work with your data, you need to publish the palm trees as a feature service in ArcGIS Online or ArcGIS Enterprise. In ArcGIS Pro, right-click the PalmTreesVARI layer in the Contents pane and select Sharing, then select Share as Web Layer. It will publish to your ArcGIS Online account. Learn more about publishing a feature service.

Use app templates to review deep learning accuracy

Deep learning tools provide results with accuracy that is proportional to the accuracy of the training samples and the quality of the trained model. In other words, the results are not always perfect. You can assess the quality of the model results by reviewing the trees where the Confidence score, stored in the deep learning result, is lower than a given value. Instead of zooming to each record using an attribute filter in ArcGIS Pro, the Image Visit configurable web app template allows you to quickly review the accuracy of your results in a web application. Learn more about the Image Visit app.

Use ArcGIS Workforce to perform field verification

ArcGIS Workforce is a mobile app solution that uses the location of features to coordinate your field workforce.
You can use the Workforce app to assign tasks to members of your organization, so that every tree with a VARI score marked Needs Inspection can be assigned to someone in the field, checked, and tagged with a suggested treatment. Learn more about ArcGIS Workforce.

Use ArcGIS Dashboards to monitor project progress

Finally, you can monitor the progress of the assignments dispatched in your ArcGIS Workforce project using ArcGIS Dashboards. ArcGIS Dashboards is a configurable web app that provides visualization and analytics for a real-time operational view of people, services, and tasks. Learn more about getting started with ArcGIS Dashboards.

In this tutorial, you obtained open-source drone imagery and created training samples of palm trees in the image. Those training samples were exported as image chips and used to train a deep learning model that extracted more than 11,000 palm trees from the image. You learned about deep learning and image analysis, as well as configurable apps across the ArcGIS system. You can use this workflow for many tasks, provided you have the imagery and knowledge of deep learning models. For example, you can use these tools to assess structural damage resulting from natural disasters, count vehicles in an urban area, or find structures near geological danger zones.