Optimal Rule Caching and Lossy Compression for Longest Prefix Matching
Abstract: Packet classification is a building block in many network services, such as routing, monitoring, and policy enforcement. In commodity switches, classification is often performed by memory components that support various rule matching patterns (longest prefix match, ternary matches, exact match, and so on). These memory components are fast but expensive and power-hungry, with power consumption proportional to their size. In this paper, we study the applicability of rule caching and lossy compression to create packet classifiers requiring much less memory than the theoretical size limits of their semantically-equivalent representations, enabling a significant reduction in cost and power consumption. This paper focuses on longest prefix matching. Our objective is to find a limited-size longest prefix match classifier that can correctly classify a high portion of the traffic, so that it can be implemented in commodity switches with classification modules of restricted size. While in the lossy compression scheme a small amount of traffic might observe classification errors, in the rule caching scheme a special indication is returned for traffic that cannot be classified. We develop optimal dynamic programming algorithms for both problems and describe how to treat the small amount of traffic that cannot be classified. We generalize our solutions to a wide range of classifiers with different similarity metrics. We evaluate their performance on real classifiers and traffic traces and show that in some cases we
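To make the underlying matching semantics concrete, the following is a minimal illustrative sketch (not taken from the paper) of longest prefix matching over IPv4 rules, assuming a hypothetical rule table that maps prefixes such as "10.0.0.0/8" to action labels; a real classifier would use a compressed trie or dedicated hardware rather than this linear scan.

```python
# Naive longest prefix match: return the action of the most specific
# rule whose prefix contains the given address, or None if no rule matches.
import ipaddress

def longest_prefix_match(rules, address):
    addr = ipaddress.ip_address(address)
    best_len, best_action = -1, None
    for prefix, action in rules.items():
        net = ipaddress.ip_network(prefix)
        # Prefer the matching rule with the longest (most specific) prefix.
        if addr in net and net.prefixlen > best_len:
            best_len, best_action = net.prefixlen, action
    return best_action

# Hypothetical example rule table.
rules = {
    "0.0.0.0/0": "default",
    "10.0.0.0/8": "drop",
    "10.1.0.0/16": "forward-to-port-3",
}

print(longest_prefix_match(rules, "10.1.2.3"))   # forward-to-port-3
print(longest_prefix_match(rules, "192.0.2.1"))  # default
```

In the rule caching setting discussed above, a limited-size table like this would cover only a subset of the rules, with unmatched traffic receiving a special indication; in the lossy compression setting, some addresses may instead be mapped to a slightly wrong action.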