Thesis Defense: Saurabh Morchale
Tuesday, April 26, 2016 at 3:30pm in Manchester 017
Deep Convolutional Neural Networks for Classification of Fused Hyperspectral and LiDAR Data
Convolutional neural networks (CNNs) have demonstrated excellent performance on a variety of tasks, including image classification, object and speech recognition, and natural language processing (NLP), and are increasingly gaining popularity for geospatial classification. Hyperspectral imaging (HSI) and Light Detection and Ranging (LiDAR) are complementary modalities that are extensively used together for geospatial data collection in remote sensing: HSI data provides information about the material composition of a scene, while LiDAR data provides information about the geometry of the objects in it. This thesis proposes a fusion model for HSI and LiDAR that produces a new combined feature, and examines its usage and potential for geospatial classification with convolutional neural networks.
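The combined feature can be illustrated with a minimal sketch of pixel-level fusion, assuming co-registered HSI and LiDAR rasters; the function name and the choice of simple channel concatenation are illustrative assumptions, not the thesis's exact fusion model.

```python
import numpy as np

def fuse_pixel_level(hsi, lidar):
    """Sketch of pixel-level fusion: stack a co-registered LiDAR
    elevation channel onto a hyperspectral cube.

    hsi:   (H, W, B) hyperspectral cube with B spectral bands
    lidar: (H, W) LiDAR-derived elevation raster, co-registered with hsi

    Returns an (H, W, B + 1) fused cube: each pixel's feature vector is
    its spectrum concatenated with its elevation value.
    """
    if hsi.shape[:2] != lidar.shape:
        raise ValueError("HSI and LiDAR rasters must be co-registered")
    return np.concatenate([hsi, lidar[..., None]], axis=-1)
```

Each fused pixel then carries both material (spectral) and geometric (elevation) information in a single feature vector.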
Deep convolutional neural networks are applied to pixel-level fused hyperspectral and LiDAR imagery to assess the classification performance and effectiveness of the proposed fusion model. Two key questions relating to classification performance are addressed: the effect of merging multi-modal data, and the effect of uncertainty in the CNN training data. Two recent co-registered HSI and LiDAR datasets are used to characterize performance: one collected over Houston, TX, by the University of Houston National Center for Airborne Laser Mapping with NSF sponsorship, and the other collected over Gulfport, MS, by the Universities of Florida and Missouri with NGA sponsorship. Experimental results on these datasets demonstrate that the proposed method improves classification performance by a margin of over 10% when fused hyperspectral and LiDAR information is used rather than hyperspectral data alone, and that it is significantly robust to uncertainties in the training data.
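A common way to feed pixel-level fused imagery to a CNN is to cut a small spatial patch around each labeled pixel and use the patches as training samples. The sketch below shows this preprocessing step under that assumption; the patch size and edge-padding strategy are illustrative choices, not details from the thesis.

```python
import numpy as np

def extract_patches(cube, pixels, size=5):
    """Cut a size x size spatial patch, centered on each labeled pixel,
    out of a fused (H, W, C) cube; each patch is one CNN input sample.

    The cube is edge-padded so that border pixels also receive full
    patches. Returns an (N, size, size, C) array for N pixels.
    """
    r = size // 2  # patch radius
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="edge")
    # In padded coordinates, pixel (i, j) sits at (i + r, j + r),
    # so its patch occupies rows i..i+size and columns j..j+size.
    return np.stack([padded[i:i + size, j:j + size, :] for i, j in pixels])
```

The resulting patch tensor can then be passed to a standard convolutional classifier, with the per-pixel class labels as targets.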