ETPL NW - 001
STAMP: Enabling Privacy-Preserving Location Proofs for Mobile Users
Location-based services are quickly becoming immensely popular. In addition to services based on users' current location, many potential services rely on users' location history, or their spatial-temporal provenance. Without a carefully designed security system that lets users prove their past locations, malicious users may lie about their spatial-temporal provenance. In this paper, we present the Spatial-Temporal provenance Assurance with Mutual Proofs (STAMP) scheme. STAMP is designed for ad hoc mobile users generating location proofs for each other in a distributed setting. However, it can easily accommodate trusted mobile users and wireless access points. STAMP ensures the integrity and non-transferability of the location proofs and protects users' privacy. A semi-trusted Certification Authority is used to distribute cryptographic keys as well as guard users against collusion by a lightweight entropy-based trust evaluation approach. Our prototype implementation on the Android platform shows that STAMP is low-cost in terms of computational and storage resources. Extensive simulation experiments show that our entropy-based trust model is able to achieve high collusion detection accuracy.
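To make the entropy-based trust idea concrete, below is a minimal illustrative sketch in Python. The witness-count representation, normalization, and threshold are assumptions made for illustration only, not STAMP's actual formula: a prover whose proofs come from a diverse set of witnesses yields high normalized entropy, while proofs concentrated on a few (possibly colluding) witnesses yield a low score.

```python
import math
from collections import Counter

def entropy_trust(witness_ids, threshold=0.8):
    """Illustrative entropy-based collusion indicator (not STAMP's exact formula)."""
    counts = Counter(witness_ids)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    h = -sum(p * math.log2(p) for p in probs)            # Shannon entropy of witness distribution
    h_max = math.log2(len(counts)) if len(counts) > 1 else 1.0
    score = h / h_max                                     # normalize to [0, 1]
    return score, score >= threshold                      # (trust score, trusted?)

# Example: proofs dominated by a single witness yield a low score and get flagged
print(entropy_trust(["w1", "w1", "w1", "w2", "w1", "w1"]))
```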
ETPL NW - 002
Optimal Resource Allocation Over Time and Degree Classes for Maximizing Information Dissemination in Social Networks
We study the optimal control problem of allocating campaigning resources over the campaign duration and degree classes in a social network. Information diffusion is modelled as a Susceptible-Infected epidemic, and direct recruitment of susceptible nodes to the infected (informed) class is used as a strategy to accelerate the spread of information. We formulate an optimal control problem for optimizing a net reward function, a linear combination of the reward due to information spread and the cost due to the application of controls. The time-varying resource allocation and seeds for the epidemic are jointly optimized. A problem variation includes a fixed budget constraint. We prove the existence of a solution for the optimal control problem, provide conditions for uniqueness of the solution, and prove some structural results for the controls (e.g., controls are non-increasing functions of time). The solution technique uses Pontryagin's Maximum Principle and the forward-backward sweep algorithm (and its modifications) for numerical computations. Our formulations lead to large optimality systems with up to about 200 differential equations and allow us to study the effect of network topology (Erdős–Rényi/scale-free) on the controls. Results reveal that the allocation of campaigning resources to various degree classes depends not only on the network topology but also on system parameters such as cost/abundance of resources. The optimal strategies lead to significant gains over heuristic strategies for various model parameters. Our modelling approach assumes an uncorrelated network; however, we find the approach useful for real networks as well. This work is useful in product advertising, political, and crowdfunding campaigns in social networks.
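As a rough illustration of the controlled Susceptible-Infected dynamics described above, the following sketch simulates a degree-class mean-field SI model for an uncorrelated network with a direct-recruitment control term. The parameter values and the piecewise-constant control are illustrative assumptions; computing the actual optimal control requires Pontryagin's Maximum Principle and a forward-backward sweep, which this sketch does not implement.

```python
import numpy as np

def simulate_si(degrees, pk, beta, u, T=10.0, dt=0.01, i0=0.01):
    """x[k] = fraction of degree-class k that is infected (informed)."""
    x = np.full(len(degrees), i0)
    mean_deg = np.dot(pk, degrees)
    for step in range(int(T / dt)):
        t = step * dt
        # probability that a random edge points to an informed node (uncorrelated network)
        theta = np.dot(pk * degrees, x) / mean_deg
        dx = beta * degrees * (1 - x) * theta + u(t) * (1 - x)   # epidemic spread + direct recruitment
        x = np.clip(x + dt * dx, 0.0, 1.0)
    return x

degrees = np.array([2, 4, 8, 16], dtype=float)
pk = np.array([0.5, 0.3, 0.15, 0.05])            # assumed degree distribution
u = lambda t: 0.05 if t < 3.0 else 0.0           # illustrative control: recruit early, then stop
print(simulate_si(degrees, pk, beta=0.1, u=u))
```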
ETPL NW - 003
Fast and Scalable Range Query Processing With Strong Privacy Protection for Cloud Computing
Privacy has been the key roadblock to cloud computing, as clouds may not be fully trusted. This paper is concerned with the problem of privacy-preserving range query processing on clouds. Prior schemes are weak in privacy protection as they cannot achieve index indistinguishability, and therefore allow the cloud to statistically estimate the values of data and queries using domain knowledge and history query results. In this paper, we propose the first range query processing scheme that achieves index indistinguishability under the indistinguishability against chosen keyword attack (IND-CKA) model. Our key idea is to organize indexing elements in a complete binary tree called PBtree, which satisfies structure indistinguishability (i.e., two sets of data items have the same PBtree structure if and only if the two sets have the same number of data items) and node indistinguishability (i.e., the values of PBtree nodes are completely random and have no statistical meaning). We prove that our scheme is secure under the widely adopted IND-CKA security model. We propose two algorithms, namely PBtree traversal width minimization and PBtree traversal depth minimization, to improve query processing efficiency. We prove that the worst-case complexity of our query processing algorithm using PBtree is O(|R| log n), where n is the total number of data items and R is the set of data items in the query result. We implemented and evaluated our scheme on a real-world dataset with 5 million items. For example, for a query whose results contain 10 data items, it takes only 0.17 ms.
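The structure-indistinguishability property can be illustrated with a small sketch: the tree shape is derived only from the number of items, so two equal-sized datasets produce identical shapes. The balanced split below is an illustrative stand-in for the paper's complete binary tree construction, and real node values would be pseudorandom rather than omitted.

```python
def tree_shape(items):
    """Shape depends only on len(items); node payloads are omitted for brevity."""
    n = len(items)
    if n <= 1:
        return "leaf"
    mid = (n + 1) // 2                              # balanced split (illustrative)
    return (tree_shape(items[:mid]), tree_shape(items[mid:]))

print(tree_shape([3, 1, 4, 1, 5]) == tree_shape([9, 2, 6, 5, 3]))   # True: same size, same shape
print(tree_shape([3, 1, 4, 1, 5]) == tree_shape([9, 2, 6, 5]))      # False: different sizes
```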
ETPL NW - 004
Privacy Preserving Ranked Multi-Keyword Search for Multiple Data Owners in Cloud Computing
With the advent of cloud computing, it has become increasingly popular for data owners to outsource their data to public cloud servers while allowing data users to retrieve this data. For privacy concerns, secure searches over encrypted cloud data have motivated several research works under the single-owner model. However, most cloud servers in practice do not just serve one owner; instead, they support multiple owners to share the benefits brought by cloud computing. In this paper, we propose schemes to deal with privacy-preserving ranked multi-keyword search in a multi-owner model (PRMSM). To enable cloud servers to perform secure search without knowing the actual data of both keywords and trapdoors, we systematically construct a novel secure search protocol. To rank the search results and preserve the privacy of relevance scores between keywords and files, we propose a novel additive order and privacy preserving function family. To prevent attackers from eavesdropping on secret keys and pretending to be legal data users submitting searches, we propose a novel dynamic secret key generation protocol and a new data user authentication protocol. Furthermore, PRMSM supports efficient data user revocation. Extensive experiments on real-world datasets confirm the efficacy and efficiency of PRMSM.
ETPL NW - 005
Attribute-Based Data Sharing Scheme Revisited in Cloud Computing
Ciphertext-policy attribute-based encryption (CP-ABE) is a very promising encryption technique for secure data sharing in the context of cloud computing. The data owner is allowed to fully control the access policy associated with the data to be shared. However, CP-ABE is limited by a potential security risk known as the key escrow problem, whereby the secret keys of users have to be issued by a trusted key authority. Besides, most of the existing CP-ABE schemes cannot support attributes with arbitrary states. In this paper, we revisit the attribute-based data sharing scheme in order to not only solve the key escrow issue but also improve the expressiveness of attributes, so that the resulting scheme is more friendly to cloud computing applications. We propose an improved two-party key issuing protocol that can guarantee that neither the key authority nor the cloud service provider can compromise the whole secret key of a user individually. Moreover, we introduce the concept of weighted attributes to enhance attribute expression, which can not only extend the expression from binary to an arbitrary state, but also lighten the complexity of the access policy. Therefore, both the storage cost and encryption complexity for a ciphertext are reduced. The performance analysis and the security proof show that the proposed scheme is able to achieve efficient and secure data sharing in cloud computing.
ETPL NW - 006
Optimal Secrecy Capacity-Delay Tradeoff in Large-Scale Mobile Ad Hoc Networks
In this paper, we investigate the impact of an information-theoretic secrecy constraint on the capacity and delay of mobile ad hoc networks (MANETs) with mobile legitimate nodes and static eavesdroppers whose location and channel state information (CSI) are both unknown. We assume n legitimate nodes move according to the fast i.i.d. mobility pattern and each desires to communicate with one randomly selected destination node. There are also n^ν static eavesdroppers located uniformly in the network, and we assume the number of eavesdroppers is much larger than that of legitimate nodes, i.e., ν > 1. We propose a novel simple secure communication model, i.e., the secure protocol model, and prove its equivalence to the widely accepted secure physical model under a few technical assumptions. Based on the proposed model, a framework for analyzing the secrecy capacity and delay in MANETs is established. Given a delay constraint D, we find that the optimal secrecy throughput capacity is ~Θ(W (D/n)^(2/3)), where W is the data rate of each link. We observe that: 1) the capacity-delay tradeoff is independent of the number of eavesdroppers, which indicates that adding more eavesdroppers will not degrade the performance of the legitimate network as long as ν > 1; 2) the capacity-delay tradeoff of our paper outperforms the previously reported result Θ(1/(n ψe)), where ψe = n^(ν−1) = ω(1) is the density of the eavesdroppers. Throughout this paper, for functions f(n) and g(n), we denote f(n) = o(g(n)) if lim_{n→∞} f(n)/g(n) = 0; f(n) = ω(g(n)) if g(n) = o(f(n)); f(n) = O(g(n)) if there is a positive constant c such that f(n) ≤ c·g(n) for sufficiently large n; f(n) = Ω(g(n)) if g(n) = O(f(n)); f(n) = Θ(g(n)) if both f(n) = O(g(n)) and f(n) = Ω(g(n)) hold. Besides, the order notation ~Θ omits polylogarithmic factors for better readability.
ETPL NW -007
BCCC: An Expandable Network for Data Centers
Designing a cost-effective network topology for data centers that can deliver sufficient bandwidth and consistent latency performance to a large number of servers has been an important and challenging problem. Many server-centric data center network topologies have been proposed recently due to their significant advantage in cost efficiency and data center agility, such as BCube, FiConn, and the Bidimensional Compound Network (BCN). However, existing server-centric topologies are either not expandable or demand a prohibitive expansion cost. As the scale of data centers increases rapidly, the lack of expandability in existing server-centric data center networks imposes a severe obstacle for data center upgrades. In this paper, we present a novel server-centric data center network topology called BCube Connected Crossbars (BCCC), which can provide good network performance using inexpensive commodity off-the-shelf switches and commodity servers with only two network interface card (NIC) ports. A significant advantage of BCCC is its good expandability. When there is a need for expansion, we can easily add new servers and switches into the existing BCCC with little alteration of the existing structure. Meanwhile, BCCC can accommodate a large number of servers while keeping a very small network diameter. A desirable property of BCCC is that its diameter increases only linearly with the network order (i.e., the number of dimensions), which is superior to most of the existing server-centric networks, such as FiConn and BCN, whose diameters increase exponentially with network order. In addition, there is a rich set of parallel paths with similar lengths between any pair of servers in BCCC, which enables BCCC to not only deliver sufficient bandwidth capacity and predictable latency to end hosts, but also provide graceful performance degradation in case of component failure. We conduct comprehensive comparisons between BCCC and other popular server-centric network topologies, such as FiConn and BCN. We also propose an effective addressing scheme and routing algorithms for BCCC. We show that BCCC has significant advantages over the existing server-centric topologies in many important metrics, such as expandability, server port utilization, and network diameter.
ETPL NW - 008
Identification of Boolean Networks Using Premined Network Topology Information
This brief aims to reduce the data requirement for the identification of Boolean networks (BNs) by using premined network topology information. First, a matching table is created and used for sifting the true from the false dependences among the nodes in the BNs. Then, a dynamic extension to the matching table is developed so that the locating of matching pairs can start as soon as possible. Next, based on the pseudo-commutative property of the semi-tensor product, a position-transform mining is carried out to further improve data utilization. Combining the above, the topology of the BNs can be premined for the subsequent identification. Examples are given to illustrate the efficiency of reducing the data requirement. Some excellent features, such as the online and parallel processing ability, are also demonstrated.
ETPL NW - 009
An Integrated Systematic Approach to Designing Enterprise Access Control
Today, the network design process remains ad hoc and largely complexity-agnostic, often resulting in suboptimal networks characterized by excessive amounts of dependencies and commands in device configurations. The unnecessarily high configuration complexity can lead to a huge increase in both the amount of manual intervention required for managing the network and the likelihood of configuration errors, and thus must be avoided. In this paper, we present an integrated top-down design approach and show how it can minimize the unnecessary configuration complexity in realizing reachability-based access control, a key network design objective that involves designing three distinct network elements: virtual local-area network (VLAN), IP address, and packet filter. Capitalizing on newly developed abstractions, our approach integrates the design of these three elements into a unified framework by systematically modeling how the design of one element may impact the complexity of other elements. Our approach goes substantially beyond the current divide-and-conquer approach that designs each element in complete isolation, and enables minimizing the combined complexity of all elements. Specifically, two new optimization problems are formulated, and novel algorithms and heuristics are developed to solve the formulated problems. Evaluation on a large campus network shows that our approach can effectively reduce the packet filter complexity and VLAN trunking complexity by more than 85% and 70%, respectively, when compared with the ad hoc approach currently used by the operators.
ETPL NW - 010
A Difference Resolution Approach to Compressing Access Control Lists
Access control lists (ACLs) are the core of many networking and security devices. As new threats and vulnerabilities emerge, ACLs on routers and firewalls are getting larger. Therefore, compressing ACLs is an important problem. In this paper, we propose a new approach, called Diplomat, to ACL compression. The key idea is to transform higher dimensional target patterns into lower dimensional patterns by dividing the original pattern into a series of hyperplanes and then resolving differences between two adjacent hyperplanes by adding rules that specify the differences. This approach is fundamentally different from prior ACL compression algorithms and is shown to be very effective. We implemented Diplomat and conducted side-by-side comparison with the prior Firewall Compressor, TCAM Razor, and ACL Compressor algorithms on real-life classifiers. Our experimental results show that Diplomat outperforms all of them on most of our real-life classifiers, often by a considerable margin, particularly as classifier size and complexity increase. In particular, on our largest ACLs, Diplomat has an average improvement ratio of 34.9% over Firewall Compressor on range-ACLs, of 14.1% over TCAM Razor on prefix-ACLs, and of 8.9% over ACL Compressor on mixed-ACLs.
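As a toy illustration of the difference-resolution idea (not Diplomat's actual algorithm), the sketch below compresses a two-dimensional decision grid by emitting full rules for the first row and, for later rows, rules only where the decision differs from the previous row; at lookup time a later rule is assumed to override earlier ones in the same column.

```python
def difference_rules(grid):
    """grid[y][x] -> decision; emit coarse (y, x, decision) rules row by row."""
    rules, prev = [], None
    for y, row in enumerate(grid):
        for x, d in enumerate(row):
            if prev is None or d != prev[x]:
                rules.append((y, x, d))        # record only the difference w.r.t. the previous row
        prev = row
    return rules

def classify(rules, X, Y):
    best = None
    for y, x, d in rules:
        if x == X and y <= Y:
            best = d                           # later (higher-y) rule wins
    return best

grid = [
    ["accept", "accept", "deny"],
    ["accept", "deny",   "deny"],
    ["accept", "deny",   "deny"],
]
rules = difference_rules(grid)
print(len(rules), classify(rules, 1, 2))       # 4 rules instead of 9 cells; returns 'deny'
```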
ETPL NW - 011
iPath: Path Inference in Wireless Sensor Networks
Recent wireless sensor networks (WSNs) are becoming increasingly complex with the growing network scale and the dynamic nature of wireless communications. Many measurement and diagnostic approaches depend on per-packet routing paths for accurate and fine-grained analysis of the complex network behaviors. In this paper, we propose iPath, a novel path inference approach to reconstructing the per-packet routing paths in dynamic and large-scale networks. The basic idea of iPath is to exploit high path similarity to iteratively infer long paths from short ones. iPath starts with an initial known set of paths and performs path inference iteratively. iPath includes a novel design of a lightweight hash function for verification of the inferred paths. In order to further improve the inference capability as well as the execution efficiency, iPath includes a fast bootstrapping algorithm to reconstruct the initial set of paths. We also implement iPath and evaluate its performance using traces from large-scale WSN deployments as well as extensive simulations. Results show that iPath achieves much higher reconstruction ratios under different network settings compared to other state-of-the-art approaches.
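A minimal sketch of the "infer long paths from short ones" idea follows. The packet fields and the use of SHA-1 here are illustrative assumptions; iPath relies on a purpose-built lightweight hash computed inside the network rather than a cryptographic digest.

```python
import hashlib

def path_hash(path):
    """Illustrative stand-in for iPath's lightweight in-network hash."""
    return hashlib.sha1("-".join(path).encode()).hexdigest()[:8]

def infer_paths(packets, known):
    """packets: list of (origin, parent, reported_hash).
    known: dict origin -> already reconstructed path (list of node ids)."""
    progress = True
    while progress:
        progress = False
        for origin, parent, h in packets:
            if origin in known or parent not in known:
                continue
            candidate = [origin] + known[parent]   # extend a shorter, already known path
            if path_hash(candidate) == h:          # verify against the recorded hash
                known[origin] = candidate
                progress = True
    return known

known = {"sink": ["sink"], "b": ["b", "sink"]}
pkts = [("a", "b", path_hash(["a", "b", "sink"]))]
print(infer_paths(pkts, known))                    # recovers the path of node "a"
```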
ETPL NW - 012
Least Cost Influence Maximization across Multiple Social Networks
Recently, in online social networks (OSNs), the least cost influence (LCI) problem has become one of the central research topics. It aims at identifying a minimum number of seed users who can trigger a wide cascade of information propagation. Most existing literature investigates the LCI problem based only on an individual network. However, nowadays users often join several OSNs, so that information can be spread across different networks simultaneously. Therefore, in order to obtain the best set of seed users, it is crucial to consider the role of overlapping users under these circumstances. In this article, we propose a unified framework to represent and analyze the influence diffusion in multiplex networks. More specifically, we tackle the LCI problem by mapping a set of networks into a single one via lossless and lossy coupling schemes. The lossless coupling scheme preserves all properties of the original networks to achieve high-quality solutions, while the lossy coupling scheme offers an attractive alternative when the running time and memory consumption are of primary concern. Various experiments conducted on both real and synthesized datasets have validated the effectiveness of the coupling schemes, which also provide some interesting insights into the process of influence propagation in multiplex networks.
ETPL NW - 013
Optimal Partial Relaying for Energy-Harvesting Wireless Networks
In this paper, we assess the benefits of using partial relaying in energy-harvesting networks. We consider a system composed of a source, a relay, and a destination. Each of the source and the relay has energy-harvesting capability and generates its own traffic. The source is helped by the relay through a partial relaying network-level cooperation protocol. The relay regulates the arrivals from the source by accepting only a proportion of the packets successfully received at the relay. The relaying parameter, which determines the proportion of packets to be accepted, is selected based on the parameters of the network to ensure the stability of the source and relay data queues. In this work, we provide an exact characterization of the stability region of the network. We derive the optimal value of the relaying parameter to maximize the stable throughput of the source for a given data arrival rate to the relay. Also, we compare the stability region of the proposed strategy with partial relaying to the stability regions of simple transmission strategies. Finally, we consider the problem of network utility optimization in which we optimize over the value of the relaying parameter for a given pair of data arrival rates for the source and the relay.
ETPL NW - 014
GreenCoMP: Energy-Aware Cooperation for Green Cellular Networks
Switching off base stations (BSs) is an effective and efficient energy-saving solution for green cellular networks. Previous works focus mainly on when to switch off BSs without sacrificing the traffic demands of currently active users, and then enlarge the coverage of the stay-on cells to cover as many users as possible. Under this objective, both the constant power and the transmission power of each BS become major energy consumption sources. However, the transmission powers of enlarged cells, which have not been taken into account in previous research, are not negligible compared to other energy consumption sources. To tackle this problem, we observe that the transmission power of a specific BS can be reduced via cooperation among two or more BSs, which is typically used to improve the throughput or enhance the spectrum efficiency in wireless systems. The challenges come mainly from how to jointly decide which BSs to switch off and how to cooperate among active-mode BSs. In this paper, we design energy-aware cooperation strategies that ensure that our system is energy-saving while satisfying user demands. To cope with sleep-mode BSs and perform cooperation among active BSs, we formulate this problem as a binary integer programming problem and prove it is NP-hard. Based on our formulation, we derive a performance lower bound for this problem via Lagrangian relaxation with search enumeration. Furthermore, we propose two heuristic algorithms accounting for the properties of energy savings and the constraints of bandwidth resources. The simulation results show that our algorithms outperform pure power control mechanisms that do not consider the transmission power and pure cooperation without power control in terms of the total consumed energy. We also observe that a larger cooperative size does not imply a better strategy under different scenarios. Compared to the total energy consumed when all BSs are turned on, our algorithms can save up to 60% of the energy. This demonstrates that our methods are indeed efficient energy-saving cooperation strategies for green cellular networks.
ETPL NW - 015
Delay-Constrained Caching in Cognitive Radio Networks
In cognitive radio networks, unlicensed users can use under-utilized licensed spectrum to achieve substantial performance improvement. To avoid interference with licensed users, unlicensed users must vacate the spectrum when it is accessed by licensed (primary) users. Since it takes some time for unlicensed users to switch to other available channels, the ongoing data transmissions may have to be interrupted and the transmission delay can be significantly increased. This makes it hard for cognitive radio networks to meet the delay constraints of many applications. In this paper, we develop caching techniques to address this problem. We formulate the cache placement problem in cognitive radio networks as an optimization problem, where the goal is to minimize the total cost, subject to some delay constraint, i.e., the data access delay can be statistically bounded. To solve this problem, we propose a cost-based approach to minimize the caching cost, and design a delay-based approach to satisfy the delay constraint. Then, we combine them and propose a distributed hybrid approach to minimize the caching cost subject to the delay constraint. Simulation results show that our approaches outperform existing caching solutions in terms of total cost and delay constraint, and the hybrid approach performs the best among the approaches satisfying the delay constraint.
ETPL NW - 016
Performance Analysis of Mobile Data Offloading in Heterogeneous Networks
An unprecedented increase in the mobile data traffic volume has been recently reported due to the extensive use of smartphones, tablets and laptops. This is a major concern for mobile network operators, who are forced to often operate very close to their capacity limits. Recently, different solutions have been proposed to overcome this problem. The deployment of additional infrastructure, the use of more advanced technologies (LTE), or offloading some traffic through Femtocells and WiFi are some of the solutions. Out of these, WiFi presents some key advantages such as its already widespread deployment and low cost. While benefits to operators have already been documented, it is less clear how much and under what conditions the user gains as well. Additionally, the increasingly heterogeneous deployment of cellular networks (partial 4G coverage, small cells, etc.) further complicates the picture regarding both operator- and user-related performance of data offloading. To this end, in this paper we propose a queueing analytic model that can be used to understand the performance improvements achievable by WiFibased data offloading, as a function of WiFi availability and performance, user mobility and traffic load, and the coverage ratio and respective rates of different cellular technologies available. We validate our theory against simulations for realistic scenarios and parameters, and provide some initial insights as to the offloading gains expected in practice.
ETPL NW - 017
QoS-Aware Scheduling of Workflows in Cloud Computing Environments
Cloud Computing has emerged as a service model that enables on-demand network access to a large number of available virtualized resources and applications with minimal management effort and at low cost. The spread of Cloud Computing technologies has made it possible to deal with complex applications such as Scientific Workflows, which consist of a set of intensive computational and data manipulation operations. Cloud Computing helps such Workflows dynamically provision the compute and storage resources necessary for the execution of their tasks, thanks to the elasticity of these resources. However, the dynamic nature of the Cloud incurs new challenges, as some allocated resources may be overloaded or become inaccessible during the execution of the Workflow. Moreover, for data-intensive tasks, the allocation strategy should consider data placement constraints, since data transmission time can increase notably in this case, which in turn increases the overall completion time and cost of the Workflow. Likewise, for computationally intensive tasks, the allocation strategy should consider the type of the allocated virtual machines, more specifically their CPU, memory, and network capacities. Yet, a critical challenge is how to efficiently schedule the Workflow tasks on Cloud resources to optimize the overall quality of service. In this paper, we propose a QoS-aware algorithm for Scientific Workflow scheduling that aims to improve the overall quality of service (QoS) by considering the metrics of execution time, data transmission time, cost, resource availability, and data placement constraints. We extended the Parallel Cat Swarm Optimization (PCSO) algorithm to implement our proposed approach. We tested our algorithm on two sample Workflows of different scales and compared the results to those given by the standard PSO, CSO, and PCSO algorithms. The results show that our proposed algorithm improves the overall quality of service of the tested Workflows.
ETPL NW - 018
Optimality of Fast Matching Algorithms for Random Networks with Applications to Structural Controllability
Network control refers to a very large and diverse set of problems including controllability of linear time-invariant dynamical systems, where the objective is to select an appropriate input to steer the network to a desired state. There are many notions of controllability, one of them being structural controllability, which is intimately connected to finding maximum matchings on the underlying network topology. In this work, we study fast, scalable algorithms for finding maximum matchings for a large class of random networks. First, we illustrate that random networks with prescribed degree distributions are realistic models for real networks in terms of structural controllability. Subsequently, we analyze a popular, fast and practical heuristic due to Karp and Sipser as well as a simplification of it. For both heuristics, we establish asymptotic optimality and provide results concerning the asymptotic size of maximum matchings for an extensive class of random networks.
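For reference, the Karp-Sipser heuristic analyzed in the paper can be sketched as follows: repeatedly match a degree-1 vertex to its unique neighbour, and when no such vertex exists, match the endpoints of a randomly chosen edge. The adjacency-dictionary representation below is an implementation choice for illustration, not the paper's.

```python
import random

def karp_sipser(adj):
    """adj: dict vertex -> set of neighbours (undirected, no self-loops)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    matching = []
    while any(adj.values()):
        degree_one = [v for v, nbrs in adj.items() if len(nbrs) == 1]
        if degree_one:
            u = degree_one[0]
            v = next(iter(adj[u]))                 # forced choice: match pendant vertex
        else:
            u = random.choice([x for x, nbrs in adj.items() if nbrs])
            v = random.choice(list(adj[u]))        # otherwise pick a random edge
        matching.append((u, v))
        for w in (u, v):                           # remove both matched endpoints
            for nbr in adj[w]:
                adj[nbr].discard(w)
            adj[w] = set()
    return matching

print(karp_sipser({1: {2, 3}, 2: {1}, 3: {1, 4}, 4: {3}}))   # e.g. [(2, 1), (3, 4)]
```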
ETPL NW - 019
The Role of Information in Distributed Resource Allocation
The goal in networked control of multiagent systems is to derive desirable collective behavior through the design of local control algorithms. The information available to the individual agents, either attained through communication or sensing, invariably defines the space of admissible control laws. Hence, informational restrictions impose constraints on achievable performance guarantees. This paper provides one such constraint with regard to the efficiency of the resulting stable solutions for a class of networked submodular resource allocation problems with application to covering problems. When the agents have full information regarding the system, the efficiency of the resulting stable solutions is guaranteed to be within 50% of optimal. However, when the agents have only localized information about the system, which is a common feature of many well-studied control designs, the efficiency of the resulting stable solutions can be as low as 1/n of optimal, where n is the number of agents. Consequently, in general such control designs cannot guarantee that systems comprised of n agents can perform any better than a system comprised of a single agent for identical system conditions. The last part of this paper focuses on a specific resource allocation problem, a static sensor coverage problem, and provides an algorithm that overcomes this limitation by allowing the agents to communicate minimally with neighboring agents.
ETPL NW - 020
An Optimized Virtual Load Balanced Call Admission Controller for IMS Cloud Computing
Network functions virtualization provides opportunities to design, deploy, and manage networking services. It utilizes Cloud computing virtualization services that run on high-volume servers, switches and storage hardware to virtualize network functions. Virtualization techniques can be used in IP Multimedia Subsystem (IMS) cloud computing to develop different networking functions (e.g. load balancing and call admission control). IMS network signaling happens through Session Initiation Protocol (SIP). An open issue is the control of overload that occurs when an SIP server lacks sufficient CPU and memory resources to process all messages. This paper proposes a virtual load balanced call admission controller (VLB-CAC) for the cloud-hosted SIP servers. VLB-CAC determines the optimal “call admission rates” and “signaling paths” for admitted calls along with the optimal allocation of CPU and memory resources of the SIP servers. This optimal solution is derived through a new linear programming model. This model requires some critical information of SIP servers as input. Further, VLB-CAC is equipped with an autoscaler to overcome resource limitations. The proposed scheme is implemented in SAVI (Smart Applications on Virtual Infrastructure) which serves as a virtual testbed. An assessment of the numerical and experimental results demonstrates the efficiency of the proposed work.
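A toy linear program in the spirit of VLB-CAC is sketched below using SciPy: choose per-server admission rates to maximize the total admitted calls subject to resource budgets. The cost coefficients, the capacities, and the aggregation of CPU/memory into pool-wide budgets are all illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

cpu_per_call = np.array([0.6, 0.4, 0.5])     # assumed CPU units consumed per admitted call/s, per server
mem_per_call = np.array([0.3, 0.5, 0.4])     # assumed memory units per admitted call/s, per server
cpu_cap, mem_cap = 100.0, 80.0               # assumed pool-wide budgets

c = -np.ones(3)                              # maximize sum(x)  <=>  minimize -sum(x)
A_ub = np.vstack([cpu_per_call, mem_per_call])
b_ub = np.array([cpu_cap, mem_cap])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 120)] * 3, method="highs")
print("admission rates:", res.x, "total admitted:", -res.fun)
```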
ETPL NW - 021
CCDN: Content-Centric Data Center Networks
Data center networks continually seek higher network performance to meet the ever-increasing application demand. Recently, researchers have been exploring methods to enhance data center network performance by intelligent caching and by increasing the access points for hot data chunks. Motivated by this, we come up with a simple yet useful caching mechanism for generic data centers, i.e., a server caches a data chunk after an application on it reads the chunk from the file system, and then uses the cached chunk to serve subsequent chunk requests from nearby servers. To turn the basic idea above into a practical system and address the challenges behind it, we design the content-centric data center network (CCDN), which exploits an innovative combination of content-based forwarding and location [Internet Protocol (IP)]-based forwarding in switches to correctly locate the target server for a data chunk on a fully distributed basis. Furthermore, CCDN enhances traditional content-based forwarding to determine the nearest target server, and enhances traditional location (IP)-based forwarding to make high utilization of the precious memory space in switches. Extensive simulations based on real-world workloads and experiments on a test bed built with NetFPGA prototypes show that, even with a small portion of the server's storage as cache (e.g., 3%) and with a modest content forwarding information base size (e.g., 1000 entries) in switches, CCDN can improve the average throughput to get data chunks by 43% compared with a pure Hadoop File System (HDFS) system in a real data center.
ETPL NW - 022
Adaptive Fault-Tolerant Synchronization Control of a Class of Complex Dynamical Networks with General Input Distribution Matrices and Actuator Faults
This paper is concerned with the problem of adaptive fault-tolerant synchronization control of a class of complex dynamical networks (CDNs) with actuator faults and unknown coupling weights. The considered input distribution matrix is assumed to be an arbitrary matrix, instead of a unit one. Within this framework, an adaptive fault-tolerant controller is designed to achieve synchronization for the CDN. Moreover, a convex combination technique and an important graph theory result are developed, such that the rigorous convergence analysis of synchronization errors can be conducted. In particular, it is shown that the proposed fault-tolerant synchronization control approach is valid for the CDN with both time-invariant and time-varying coupling weights. Finally, two simulation examples are provided to validate the effectiveness of the theoretical results.
ETPL NW - 023
Distributed Long-Term Base Station Clustering in Cellular Networks using Coalition Formation
Interference alignment (IA) is a promising technique for interference mitigation in multicell networks due to its ability to completely cancel the intercell interference through linear precoding and receive filtering. In small networks, the amount of required channel state information (CSI) is modest and IA is therefore typically applied jointly over all base stations. In large networks, where the channel coherence time is short in comparison to the time needed to obtain the required CSI, base station clustering must be applied however. We model such clustered multicell networks as a set of coalitions, where CSI acquisition and IA precoding are performed independently within each coalition. We develop a long-term throughput model which includes both CSI acquisition overhead and the level of interference mitigation ability as a function of the coalition structure. Given the throughput model, we formulate a coalitional game where the involved base stations are the rational players. Allowing for individual deviations by the players, we formulate a distributed coalition formation algorithm with low complexity and low communication overhead that leads to an individually stable coalition structure. The dynamic clustering is performed using only long-term CSI, but we also provide a robust short-term precoding algorithm which accounts for the intercoalition interference when spectrum sharing is applied between coalitions. Numerical simulations show that the distributed coalition formation is generally able to reach long-term sum throughputs within 10% of the global optimum.
ETPL NW - 024
An Enhanced Available Bandwidth Estimation Technique for an End-to-End Network Path
This paper presents a unique probing scheme, a rate adjustment algorithm, and a modified excursion detection algorithm (EDA) for estimating the available bandwidth (ABW) of an end-to-end network path more accurately and less intrusively. The proposed algorithm is based on the well-known concept of self-induced congestion, and it features a unique probing train structure in which there is a region where packets are sampled more frequently than in other regions. This high-density region enables our algorithm to find the turning point more accurately. When the dynamic ABW is outside of this region, we readjust the lower rate and upper rate of the packet stream to fit the dynamic ABW into that region. We appropriately adjust the range between the lower rate and the upper rate using spread factors, which enables us to keep the number of packets low and thus measure the ABW less intrusively. Finally, to detect the ABW from the one-way queuing delay, we present a modified EDA, based on PathChirp's original EDA, to better deal with sudden increases and decreases in queuing delays due to cross-traffic burstiness. For the experiments, an Android OS-based device was used to measure the ABW over a commercial 4G/LTE mobile network of a Japanese mobile operator, and real testbed measurements were conducted over fixed and WLAN networks. Simulations and experimental results show that our algorithm can achieve ABW estimation in real time and outperforms other state-of-the-art measurement algorithms in terms of accuracy, intrusiveness, and convergence time.
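The self-induced-congestion principle behind the scheme can be illustrated with a small sketch that estimates the available bandwidth as the last probing rate before one-way delays start increasing persistently. The run-length threshold and the numbers are illustrative only; the paper's modified EDA is considerably more robust to cross-traffic burstiness than this simplification.

```python
def turning_point(rates, delays, run=3):
    """rates[i]: probing rate of packet i (increasing); delays[i]: one-way delay."""
    increasing = 0
    for i in range(1, len(delays)):
        increasing = increasing + 1 if delays[i] > delays[i - 1] else 0
        if increasing >= run:                  # persistent delay excursion begins
            return rates[i - run]              # last rate before self-induced congestion
    return rates[-1]                           # never congested within this probe train

rates = [10, 20, 30, 40, 50, 60, 70]           # Mb/s (illustrative)
delays = [1.0, 1.0, 1.1, 1.0, 1.4, 1.9, 2.6]   # ms (illustrative)
print(turning_point(rates, delays))            # estimates roughly 40 Mb/s
```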
ETPL NW - 025
Botnet Detection based on Anomaly and Community Detection
We introduce a novel two-stage approach for the important cyber-security problem of detecting the presence of a botnet and identifying the compromised nodes (the bots), ideally before the botnet becomes active. The first stage detects anomalies by leveraging large deviations of an empirical distribution. We propose two approaches to create the empirical distribution: a flow-based approach estimating the histogram of quantized flows, and a graph-based approach estimating the degree distribution of node interaction graphs, encompassing both Erdős–Rényi graphs and scale-free graphs. The second stage detects the bots using ideas from social network community detection in a graph that captures correlations of interactions among nodes over time. Community detection is done by maximizing a modularity measure in this graph. The modularity maximization problem is non-convex. We propose a convex relaxation, an effective randomization algorithm, and establish sharp bounds on the suboptimality gap. We apply our method to real-world botnet traffic and compare its performance with other methods.
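To illustrate the second (community-detection) stage, the sketch below builds a toy interaction graph and scores a partition by modularity using NetworkX. The paper maximizes modularity via a convex relaxation with suboptimality bounds; this example instead uses NetworkX's greedy heuristic purely for illustration, and the graph itself is a stand-in for the correlation graph built from real traffic.

```python
import networkx as nx
from networkx.algorithms import community

# Toy interaction graph: a densely interacting trio (candidate bots) plus background hosts.
G = nx.Graph()
G.add_edges_from([
    ("h1", "h2"), ("h2", "h3"), ("h1", "h3"),   # tightly coordinated hosts
    ("h4", "h5"), ("h5", "h6"),                 # background traffic
    ("h3", "h4"),
])

comms = community.greedy_modularity_communities(G)   # heuristic stand-in for the convex relaxation
q = community.modularity(G, comms)                   # modularity score of the partition
print("communities:", [sorted(c) for c in comms], "modularity:", round(q, 3))
```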
ETPL NW - 026
Packet Classification Using Binary Content Addressable Memory
Packet classification is the core mechanism that enables many networking devices. Although using ternary content addressable memory (TCAM) to perform high-speed packet classification has become the widely adopted solution, TCAM is very expensive, has limited capacity, consumes large amounts of power, and generates tremendous amounts of heat because of its extremely dense and parallel circuitry. In this paper, we propose the first packet classification scheme that uses binary CAM (BCAM). BCAM is similar to TCAM except that in BCAM, every bit has only two possible states: 0 or 1; in contrast, in TCAM, every bit has three possible states: 0, 1, or * (don't care). Because of the high complexity in implementing the extra “don't care” state, TCAM has much higher circuit density than BCAM. As the power consumption, heat generation, and price grow non-linearly with circuit density, BCAM consumes much less power, generates much less heat, and costs much less money than TCAM. Our BCAM-based packet classification scheme is built on two key ideas. First, we break a multi-dimensional lookup into a series of 1-D lookups. Second, for each 1-D lookup, we convert the ternary matching problem into a binary string exact matching problem. To speed up the lookup process, we propose a number of optimization techniques, including skip lists, free expansion, minimizing maximum lookup time, minimizing average lookup time, and lookup short circuiting. We evaluated our BCAM scheme on 17 real-life packet classifiers. On these classifiers, our BCAM scheme requires roughly five times fewer CAM bits than the traditional TCAM-based scheme. The penalty is a throughput that is roughly four times less.
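The second key idea, converting a 1-D ternary (prefix) match into binary exact matches, can be illustrated with the sketch below: prefix rules are stored in per-length exact-match tables, and the query key's prefixes are probed from longest to shortest. The table layout is an illustrative assumption, and the paper's skip lists, free expansion, and lookup short-circuiting optimizations are omitted.

```python
def build_tables(prefix_rules):
    """prefix_rules: dict like {'10': 'rule_A', '1011': 'rule_B'} on bit strings."""
    tables = {}
    for prefix, rule in prefix_rules.items():
        tables.setdefault(len(prefix), {})[prefix] = rule   # one exact-match table per prefix length
    return tables

def lookup(tables, key_bits):
    for length in sorted(tables, reverse=True):             # probe longest prefix first
        hit = tables[length].get(key_bits[:length])         # pure exact-string match
        if hit is not None:
            return hit
    return "default"

tables = build_tables({"10": "rule_A", "1011": "rule_B"})
print(lookup(tables, "10110"))   # -> rule_B
print(lookup(tables, "10010"))   # -> rule_A
```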
ETPL NW - 027
Overlay Automata and Algorithms for Fast and Scalable Regular Expression Matching
ETPL NW - 028
A General Collaborative Framework for Modeling and Perceiving Distributed Network Behaviour
Collaborative Anomaly Detection (CAD) is an emerging field of network security in both academia and industry. It has attracted a lot of attention due to the limitations of traditional fortress-style defense modes. Even though a number of pioneering studies have been conducted in this area, few of them are concerned with the universality issue. This work focuses on two aspects of it. First, a unified collaborative detection framework is developed based on network virtualization technology. Its purpose is to provide a generic approach that can be applied to designing specific schemes for various application scenarios and objectives. Second, a general behavior perception model is proposed for the unified framework based on a hidden Markov random field. Spatial Markovianity is introduced to model the spatial context of distributed network behavior and the stochastic interaction among interconnected nodes. Algorithms are derived for parameter estimation, forward prediction, backward smoothing, and the normality evaluation of both the global network situation and local behavior. Numerical experiments using extensive simulations and several real datasets are presented to validate the proposed solution. Performance-related issues and comparison with related works are discussed.
ETPL NW - 029
FluidNet: A Flexible Cloud-Based Radio Access Network for Small Cells
Cloud-based radio access networks (C-RAN) have been proposed as a cost-efficient way of deploying small cells. Unlike conventional RANs, a C-RAN decouples the baseband processing unit (BBU) from the remote radio head (RRH), allowing for centralized operation of BBUs and scalable deployment of light-weight RRHs as small cells. In this work, we argue that the intelligent configuration of the fronthaul network between the BBUs and RRHs is essential in delivering the performance and energy benefits to the RAN and the BBU pool, respectively. We propose FluidNet, a scalable, light-weight framework for realizing the full potential of C-RAN. FluidNet deploys a logically re-configurable fronthaul to apply appropriate transmission strategies in different parts of the network and hence cater effectively to both heterogeneous user profiles and dynamic traffic load patterns. FluidNet's algorithms determine configurations that maximize the traffic demand satisfied on the RAN, while simultaneously optimizing the compute resource usage in the BBU pool. We prototype FluidNet on a 6-BBU, 6-RRH WiMAX C-RAN testbed. Prototype evaluations and large-scale simulations reveal that FluidNet's ability to re-configure its fronthaul and tailor transmission strategies provides a 50% improvement in satisfying traffic demands, while reducing the compute resource usage in the BBU pool by 50% compared to baseline schemes.
ETPL NW - 030
Multipath TCP: Analysis, Design, and Implementation
Multipath TCP (MP-TCP) has the potential to greatly improve application performance by using multiple paths transparently. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties on the behavior of existing algorithms and motivate our algorithm Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel. We use our prototype to compare the new algorithm to existing MP-TCP algorithms.
ETPL NW - 031
RSkNN: kNN Search on Road Networks by Incorporating Social Influence
Although kNN search on a road network G_r, i.e., finding the k nearest objects to a query user q on G_r, has been extensively studied, existing works neglect the fact that q's social information can play an important role in this kNN query. Many real-world applications, such as location-based social networking services, require such a query. In this paper, we study a new problem: kNN search on road networks by incorporating social influence (RSkNN). Specifically, the state-of-the-art Independent Cascade (IC) model in social networks is applied to define social influence. One critical challenge of the problem is to speed up the computation of the social influence over large road and social networks. To address this challenge, we propose three efficient index-based search algorithms, i.e., road network-based (RN-based), social network-based (SN-based), and hybrid indexing algorithms. In the RN-based algorithm, we employ a filtering-and-verification framework for tackling the hard problem of computing social influence. In the SN-based algorithm, we embed social cuts into the index to speed up the query. In the hybrid algorithm, we propose an index summarizing the road and social networks, based on which we can obtain query answers efficiently. Finally, we use real road and social network data to empirically verify the efficiency and efficacy of our solutions.