Fusion of RADARSAT-2 and multispectral optical remote sensing data for LULC extraction in a tropical agricultural area
- Publication Type: Journal Article
- Citation: Geocarto International, 2017, 32 (7), pp. 735-748
- Issue Date: 2017-07-03
Closed Access
Filename | Description | Size
---|---|---
4_16_2018_Fusion of .pdf | Published Version | 8.78 MB
This item is closed access and not available.
© 2016 Informa UK Limited, trading as Taylor & Francis Group. In this study, we investigated the performance of different fusion and classification techniques for land cover mapping in Hilir Perak, Peninsular Malaysia, a predominantly agricultural area, using RADARSAT-2 and Landsat-8 images. The fusion methods used were the Brovey transform, wavelet transform, Ehlers fusion and layer stacking, and their outputs were classified into seven land cover classes using (1) pixel-based classifiers (spectral angle mapper (SAM), maximum likelihood (ML) and support vector machine (SVM)) and (2) object-based classifiers (rule-based and standard nearest neighbour (NN)). Among the pixel-based classifiers, SVM achieved the highest accuracy on the optical (Landsat-8) data, 74.96%, outperforming SAM and ML. For the multisource (fused) data, the highest overall accuracies, all obtained with SVM, were 79.78% for layer stacking, 63.70% for Brovey fusion, 61.16% for wavelet fusion and 45.57% for Ehlers fusion. For the object-based classifiers, the overall accuracy was 95.35% for the rule-based classifier and 76.33% for the NN classifier. Based on these results, the object-based rule-based classifier produced the best classification accuracy from the fused images.
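Of the fusion methods named in the abstract, the Brovey transform is the simplest to illustrate: each multispectral band is scaled by the high-resolution (here, SAR) band, normalised by the per-pixel sum of the multispectral bands. The sketch below is a minimal NumPy illustration of that idea, not the paper's implementation; the array shapes, band choice and the `brovey_fuse` helper are assumptions for demonstration.

```python
import numpy as np

def brovey_fuse(ms, sar, eps=1e-6):
    """Brovey transform sketch: scale each multispectral band by the
    co-registered SAR intensity band, normalised by the per-pixel sum
    of the multispectral bands. `ms` has shape (bands, rows, cols);
    `sar` has shape (rows, cols). `eps` avoids division by zero.
    (Hypothetical helper, not from the paper.)"""
    total = ms.sum(axis=0) + eps      # per-pixel multispectral sum
    return ms * sar / total           # broadcasts SAR over all bands

# Tiny synthetic example: 3 bands of ones over a 2x2 scene,
# fused with a constant SAR intensity of 6.0.
ms = np.ones((3, 2, 2))
sar = np.full((2, 2), 6.0)
fused = brovey_fuse(ms, sar)
print(fused[0])  # each fused pixel ≈ 1 * 6 / 3 = 2
```

In practice the bands would come from co-registered, resampled rasters; the normalisation is what lets the SAR backscatter modulate the optical spectral signature per pixel.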