Dimensionality Reduction And Classification On Hyperspectral Images Using Python

In this repository, you can find the files that implement dimensionality reduction and classification on a hyperspectral image (Indian Pines).

Project README

Dimensionality Reduction and Classification on Hyperspectral Images Using Python

Authors

Prerequisites

The prerequisites for better understanding the code and concepts are:

    * Python
    * MATLAB
    * Linear Algebra

Installation

  • This project is fully based on Python, so the necessary modules for computation are:
    * NumPy
    * scikit-learn
    * Matplotlib
    * Pandas
  • The commands needed to install the above modules on the Windows platform are:

    pip install numpy
    pip install scikit-learn
    pip install matplotlib
    pip install pandas
  • We can verify the installation of the modules by importing them. For example:

    import numpy
    from sklearn.decomposition import PCA 
    import matplotlib.pyplot as plt
    import pandas as pd

Results

  • Here we perform dimensionality reduction on one of the most widely used hyperspectral images, Indian Pines; a consolidated code sketch follows this list.
  1. The results of indian_pines_pca.py are shown below:

    • Its first result is a bar graph of the first 10 principal components according to their explained variance ratios:

    (Figure: indian_pines_varianve_ratio)

    Since the first two principal components have a high explained variance, we select only those two PCs.

    • Its second result is a scatter plot of the dataset projected onto the first two principal components:

    (Figure: indian_pines_after_pca_with_2pc)

    • The above program also produces a dimensionally reduced CSV file.
  2. The results of indian_pines_knnc.py are given below:

    • This program classifies the Indian Pines dataset before Principal Component Analysis (PCA). The classifier used is the K-Nearest Neighbour Classifier (KNNC).
    • The time taken for classification is:

    (Figure: indian_pines_classification_before_pca)

    • The classification accuracy of the Indian Pines dataset before PCA is:

    (Figure: indian_pines_accuracy_before_pca)

  3. The results of indian_pines_knnc_after_pca.py are given below:

    • The resulting classification accuracy of the Indian Pines dataset after PCA is:

      (Figure: indian_pines_accuracy_after_pca)
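
  • The pipeline described above can be reproduced with a short script along the following lines. This is a minimal sketch rather than the repository's exact code: the input file name indian_pines.csv, its layout (200 band columns followed by a class column), the 70/30 train/test split, and k = 5 are illustrative assumptions.

    # Minimal sketch of the PCA + KNN pipeline described above.
    # Assumption: the corrected Indian Pines cube has been flattened to
    # "indian_pines.csv" with 200 band columns and a final "class" column.
    import time

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    df = pd.read_csv("indian_pines.csv")          # assumed 21025 rows x (200 bands + label)
    X, y = df.iloc[:, :-1].values, df.iloc[:, -1].values

    # Bar graph of the explained variance ratios of the first 10 principal components
    pca10 = PCA(n_components=10).fit(X)
    plt.bar(range(1, 11), pca10.explained_variance_ratio_ * 100)
    plt.xlabel("Principal component")
    plt.ylabel("Explained variance (%)")
    plt.show()

    # Keep only the first two components, plot them, and save the reduced data to a CSV
    X_2pc = PCA(n_components=2).fit_transform(X)
    plt.scatter(X_2pc[:, 0], X_2pc[:, 1], c=y, s=2)
    plt.xlabel("PC1")
    plt.ylabel("PC2")
    plt.show()
    pd.DataFrame(np.column_stack([X_2pc, y]),
                 columns=["PC1", "PC2", "class"]).to_csv("indian_pines_after_pca.csv",
                                                         index=False)

    def knn_report(features, labels, k=5):
        """Train/test a KNN classifier and return (accuracy, elapsed seconds)."""
        X_tr, X_te, y_tr, y_te = train_test_split(features, labels,
                                                  test_size=0.3, random_state=0)
        start = time.time()
        knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
        acc = accuracy_score(y_te, knn.predict(X_te))
        return acc, time.time() - start

    print("Before PCA:", knn_report(X, y))       # all 200 bands
    print("After PCA: ", knn_report(X_2pc, y))   # first 2 principal components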

Conclusion

  • Performing PCA on the corrected Indian Pines dataset yields 100 principal components (PCs).

  • Since the first two principal components account for an explained variance ratio of about 92.02%, only those two were selected.

  • The dataset, originally of dimensions 21025 × 200, is drastically reduced to 21025 × 2.

  • The classification accuracy and time taken before and after Principal Component Analysis (PCA) are:

    Dataset      Accuracy (%)   Time Taken (s)
    Before PCA   72.748890      17.6010
    After PCA    60.098187      0.17700982
  • Hence, the classification time is reduced by a large margin, while the classification accuracy (CA) also drops; the CA can be increased slightly by varying the 'k' value, as sketched below.
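
  • As a rough illustration of that last point, the following sketch sweeps a few values of k for the KNN classifier on the PCA-reduced features. It reuses the hypothetical X_2pc and y arrays from the sketch above, and the candidate k values and 70/30 split are assumptions.

    # Minimal sketch: vary k for KNN on the PCA-reduced features
    # (X_2pc and y as defined in the earlier sketch; the k values are arbitrary).
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    X_tr, X_te, y_tr, y_te = train_test_split(X_2pc, y, test_size=0.3, random_state=0)
    for k in (1, 3, 5, 7, 9, 11):
        knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
        print(f"k={k:2d}  accuracy={accuracy_score(y_te, knn.predict(X_te)):.4f}")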

License

This project is licensed under the MIT License - see the LICENSE.md file for details
