Adaptive Ensemble Learning With Confidence Bounds

Abstract: Extracting actionable intelligence from distributed, heterogeneous, correlated, and high-dimensional data sources requires run-time processing and learning both locally and globally. In the last decade, a large number of meta-learning techniques have been proposed in which local learners make online predictions based on their locally collected data instances, and feed these predictions to an ensemble learner, which fuses them and issues a global prediction. However, most of these works do not provide performance guarantees or, when they do, these guarantees are asymptotic. None of these existing works provides confidence estimates about the issued predictions or rate-of-learning guarantees for the ensemble learner. In this paper, we provide a systematic ensemble learning method called Hedged Bandits, which comes with both long-run (asymptotic) and short-run (rate-of-learning) performance guarantees. Moreover, our approach yields performance guarantees with respect to the optimal local prediction strategy, and is also able to adapt its predictions in a data-driven manner. We illustrate the performance of Hedged Bandits in the context of medical informatics and show that it outperforms numerous online and offline ensemble learning methods.
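To make the fusion step concrete, the following is a minimal sketch of a Hedge-style (exponentially weighted) ensemble update of the kind the abstract describes: local learners issue online predictions, and the ensemble learner fuses them and re-weights each learner based on its observed loss. The local learners, the loss function, the learning rate, and the horizon here are placeholder assumptions, not the paper's Hedged Bandits algorithm, which additionally maintains confidence bounds for each local learner.

```python
import numpy as np

# Illustrative sketch only: stand-in local learners and losses,
# not the paper's Hedged Bandits algorithm.

rng = np.random.default_rng(0)

K = 3                              # number of local learners (assumed)
T = 1000                           # number of prediction rounds (assumed)
eta = np.sqrt(8 * np.log(K) / T)   # standard Hedge learning rate

weights = np.ones(K)               # one weight per local learner

for t in range(T):
    # Stand-in local predictions in [0, 1]; in the paper these come
    # from learners trained on locally collected data instances.
    local_preds = rng.random(K)

    # Ensemble learner fuses the local predictions by their
    # normalized weights and issues a single global prediction.
    p = weights / weights.sum()
    global_pred = float(p @ local_preds)

    # Stand-in ground truth; any bounded loss works for Hedge.
    y = rng.integers(0, 2)
    losses = (local_preds - y) ** 2

    # Multiplicative-weights update: learners that incurred larger
    # loss lose weight, so the fusion adapts in a data-driven manner.
    weights *= np.exp(-eta * losses)
```

With a bounded loss and this choice of learning rate, the exponentially weighted update is what yields the short-run (rate-of-learning) guarantee relative to the best local learner in hindsight.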

