Learning to Diversify Deep Belief Networks for Hyperspectral Image Classification
Abstract: In the remote sensing literature, deep models with multiple layers have demonstrated their potential in learning abstract and invariant features for better representation and classification of hyperspectral images. The usual supervised deep models, such as convolutional neural networks, need a large number of labeled training samples to learn their model parameters. However, real-world hyperspectral image classification tasks provide only a limited number of training samples. This paper adopts another popular deep model, i.e., deep belief networks (DBNs), to deal with this problem. DBNs allow unsupervised pretraining over unlabeled samples first, followed by supervised fine-tuning over labeled samples. However, the usual pretraining and fine-tuning procedures tend to make many hidden units in the learned DBNs behave very similarly, or perform as "dead" (never responding) or "potentially over-tolerant" (always responding) latent factors. These effects can negatively affect the description ability, and thus the classification performance, of DBNs. To further improve DBN performance, this paper develops a new diversified DBN by regularizing the pretraining and fine-tuning procedures with a diversity-promoting prior over the latent factors. Moreover, the regularized pretraining and fine-tuning can be efficiently implemented through the usual recursive greedy and back-propagation learning frameworks. The experiments over real-world hyperspectral images
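The abstract describes regularizing DBN training with a diversity-promoting prior so that hidden units do not collapse into near-duplicate, "dead", or always-on latent factors. The exact form of the prior is not given in the abstract; the sketch below shows one common way such a penalty can be instantiated, as a pairwise cosine-similarity penalty over the hidden units' weight vectors (the function name `diversity_penalty` and this particular formulation are illustrative assumptions, not the paper's method).

```python
import numpy as np

def diversity_penalty(W):
    """Pairwise cosine-similarity penalty over hidden units.

    W: (K, D) weight matrix, one row per hidden unit. Returns a scalar
    that is 0 when all units are mutually orthogonal (maximally diverse)
    and grows as units become redundant. This is one illustrative
    instantiation of a diversity-promoting regularizer, not necessarily
    the prior used in the paper.
    """
    # Normalize each hidden unit's weight vector to unit length.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    U = W / np.clip(norms, 1e-12, None)
    # Cosine similarity between every pair of hidden units.
    S = U @ U.T
    K = W.shape[0]
    # Mean of squared off-diagonal similarities: penalizes redundancy.
    off_diag = S - np.eye(K)
    return np.sum(off_diag ** 2) / (K * (K - 1))
```

During training, such a term would be added (scaled by a trade-off coefficient) to the pretraining or fine-tuning objective, so that gradient updates push hidden units toward dissimilar weight directions.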