Publications
2017
A Byzantine Fault-tolerant Ordering Service for the Hyperledger Fabric Blockchain Platform
João Sousa, Alysson Bessani, Marko Vukolic, SERIAL’17. Las Vegas, USA, December 2017
Abstract: We briefly describe preliminary work on the design, implementation and evaluation of a Byzantine fault-tolerant ordering service for the Hyperledger Fabric blockchain platform using the BFT-SMaRt replication library.
GINJA: One-dollar Cloud-based Disaster Recovery for Databases
Joel Alcântara, Tiago Oliveira, Alysson Bessani, Proceedings of the 2017 ACM/IFIP/USENIX Middleware Conference — Middleware’17. Las Vegas, NV, USA. December 2017.
Abstract: Disaster Recovery (DR) is a crucial feature to ensure availability and data protection in modern information systems. A common DR approach requires the replication of services in a set of virtual machines running in the cloud as backups, which incurs considerable monetary cost and management effort to keep such cloud VMs. We present GINJA, a DR solution for transactional database management systems (DBMS) that uses only cloud storage services such as Amazon S3. GINJA works at the file-system level to efficiently capture and replicate data updates to a remote cloud storage service, achieving three important goals: (1) it reduces the cost of maintaining a cloud-based DR to less than one dollar per month for relevant database sizes and workloads (up to 222× less than the traditional approach of having a DBMS replica in a cloud VM); (2) it allows precise control of the trade-offs between operational cost, durability and performance; and (3) it introduces a small performance overhead to the DBMS (e.g., less than 5% overhead for the TPC-C workload, with ≈ 10 seconds of data loss in case of disasters).
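To make the cost mechanism concrete, here is a minimal sketch (ours, not GINJA's code) of the batching idea the abstract describes: updates captured at the file-system level are grouped before each billed cloud PUT, so the batch size becomes the knob trading monthly cost against data lost on a disaster. All names and the `batch` parameter are illustrative stand-ins for the system's actual cost/durability controls.

```python
# A hedged sketch of file-system-level capture + batched cloud upload.
import zlib

class CloudBackupBatcher:
    def __init__(self, put_object, batch=100):
        self.put_object = put_object   # callable(key, data), e.g. an S3 PUT
        self.batch = batch             # bigger batches => fewer (billed) PUTs,
                                       # but more data at risk on a disaster
        self.buffer, self.seq = [], 0

    def capture(self, update: bytes):
        """Called for every intercepted write to the DBMS's log files."""
        self.buffer.append(update)
        if len(self.buffer) >= self.batch:
            self.flush()

    def flush(self):
        if self.buffer:
            blob = zlib.compress(b"".join(self.buffer))   # cut storage cost
            self.put_object(f"wal/{self.seq:012d}", blob)  # key name hypothetical
            self.seq += 1
            self.buffer.clear()

# Usage with an in-memory stand-in for the cloud store:
store = {}
dr = CloudBackupBatcher(store.__setitem__, batch=2)
dr.capture(b"update-1"); dr.capture(b"update-2")   # second capture triggers a PUT
assert "wal/000000000000" in store
```

Since object stores bill per request, amortizing many updates over one PUT is what pushes the monthly bill toward the dollar range, at the price of losing the still-buffered updates if disaster strikes.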
The KISS principle in Software-Defined Networking: a framework for secure communications
Diego Kreutz, Jiangshan Yu, Paulo Esteves-Verissimo, Catia Magalhaes, Fernando M. V. Ramos, IEEE Security and Privacy. October 2017.
Abstract: Security is an increasingly fundamental requirement in Software-Defined Networking (SDN). However, the pace of adoption of secure mechanisms has been slow, which we estimate to be a consequence of the performance overhead of traditional solutions and of the complexity of their support infrastructure. To address these challenges we propose KISS, a secure SDN control plane communications architecture that includes innovative solutions in the context of key distribution and secure channel support. Core to our contribution is the integrated device verification value (iDVV), a deterministic but indistinguishable-from-random secret code generation protocol that allows local but synchronized generation/verification of keys at both ends of the control channel, even on a per-message basis. We show that our solution, while offering the same security properties, outperforms reference alternatives, with performance improvements up to 30% over OpenSSL, and improvement in robustness based on a code footprint one order of magnitude smaller.
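A minimal sketch of the hash-chained derivation idea behind the iDVV, as we read it from the abstract: two endpoints initialized with the same (seed, key) can each locally generate the next per-message secret and stay synchronized without ever exchanging keys. Class and method names are ours, not the paper's API.

```python
import hashlib, hmac

class IDVVChain:
    """Deterministic, locally computable chain of per-message secrets."""
    def __init__(self, seed: bytes, key: bytes):
        self.idvv = hashlib.sha256(seed + key).digest()
        self.key = key

    def next(self) -> bytes:
        # advance the chain; each output is fresh per-message key material
        self.idvv = hmac.new(self.key, self.idvv, hashlib.sha256).digest()
        return self.idvv

# Both ends, initialized alike, stay synchronized message by message:
a = IDVVChain(b"seed", b"key")
b = IDVVChain(b"seed", b"key")
assert a.next() == b.next() and a.next() == b.next()
```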
On the Design of Resilient Multicloud MapReduce
Pedro Costa, Miguel Correia, Fernando Ramos, IEEE Cloud Computing. October 2017.
Abstract: MapReduce is a popular distributed data-processing system for analyzing big data in cloud environments. This platform is often used for critical data processing, e.g., in the context of scientific or financial simulation. Unfortunately, there is accumulating evidence of severe problems - including arbitrary faults and cloud outages - affecting the services that run atop cloud services. Faced with this challenge, we have recently explored multicloud solutions to increase the resilience and availability of MapReduce. Based on this experience, we present system design guidelines that allow MapReduce computation to scale out to multiple clouds in order to tolerate arbitrary and malicious faults, as well as cloud outages. Crucially, the techniques we introduce have reasonable cost and do not require changes to MapReduce or to the users' code, enabling immediate deployment.
Enabling Trust Assessment In Clouds-of-Clouds: A Similarity-Based Approach
Reda Yaich, Nora Cuppens and Frédéric Cuppens, ARES 2017 (International Conference on Availability, Reliability and Security). August 2017.
Abstract: In the multi-cloud paradigm, cloud providers collaborate to form ad hoc and ephemeral groups to fulfill the request of a single customer. In such settings, malevolent cloud providers may be tempted to provide cloud services that are below the expected quality. This temptation is further exacerbated by the inability of customers to effectively identify the party responsible for a service outage or degradation.
Furthermore, the highly competitive nature of cloud marketplaces leads each provider to regularly propose innovative new services, making the system open and highly dynamic. The introduction of new cloud services into the system challenges the established trust order, as customers and providers must accept the risk of taking decisions under uncertainty. This problem, known as the cold-start problem, has been studied in the literature from the perspective of the individuals (providers/customers) but, to the best of our knowledge, no prior work has tried to address it from the perspective of the exchanged services and resources. To that end, we propose in this paper a similarity-based trust model that tackles both the multi-cloud setting (i.e., group reputation) and the high turnover of services (i.e., cold start). In our model, past similar experiences are transferred to the providers proposing new services in order to enable and boost decision making and collaboration. We also propose a scheme to derive multi-cloud trust using the feedback of both customers and providers. Finally, we present evaluation results showing the benefits of our proposal and its impact on a simulated cloud marketplace.
Firewall Policies Provisioning Through SDN in the Cloud
Nora Cuppens, Salaheddine Zerkane, Yanhuang Li, David Espes, Philippe Laparc, Frédéric Cuppens, 31st Annual IFIP WG 11.3 Conference on Data and Applications Security and Privacy (DBSec'17), Philadelphia, USA, July 2017.
Abstract: The evolution of the digital world drives cloud computing to be a key infrastructure for data and services. This breakthrough is transforming Software Defined Networking into the cloud infrastructure backbone because of its advantages such as programmability, abstraction and flexibility. As a result, many cloud providers select SDN as a cloud network service and offer it to their customers. However, due to the rising number of network cloud providers and their security offers, network cloud customers strive to find the best provider candidate that satisfies their security requirements. In this context, we propose a negotiation and enforcement framework for SDN firewall policy provisioning. Our solution enables customers and SDN providers to express their firewall policies and to negotiate them via an orchestrator. It then enforces these security requirements using the holistic view of the SDN controllers and deploys the generated firewall rules into the network elements. We evaluate the performance of the solution and demonstrate its advantages.
Secure Tera-scale Data Crunching with a Small TCB
Bruno Vavala, Nuno Neves, Peter Steenkiste, Proceedings of the International Conference on Dependable Systems and Networks (DSN), Denver, USA, June 2017.
Abstract: Outsourcing services to third-party providers comes with a high security cost—to fully trust the providers. Using trusted hardware can help, but current trusted execution environments do not adequately support services that process very large scale datasets. We present LASTGT, a system that bridges this gap by supporting the execution of self-contained services over a large state, with a small and generic trusted computing base (TCB). LASTGT uses widely deployed trusted hardware to guarantee integrity and verifiability of the execution on a remote platform, and it securely supplies data to the service through simple techniques based on virtual memory. As a result, LASTGT is general and applicable to many scenarios such as computational genomics and databases, as we show in our experimental evaluation based on an implementation of LASTGT on a secure hypervisor. We also describe a possible implementation on Intel SGX.
Mantus: Putting Aspects to Work for Flexible Multi-Cloud Deployment
Alex Palesandro, Marc Lacoste, Nadia Bennani, Chirine Ghedira Guegan, Denis Bourge, 10th IEEE International Conference on Cloud Computing (CLOUD), Hawaii, USA, June 2017.
Abstract: Cloud provider barriers still stand. After a decade of cloud computing, customers struggle to overcome the challenge of crossing multi-provider clouds to benefit from fine-grained resource distribution, business independence from CSPs, and cost savings. Although increasingly popular, most adopted IaaS intercloud solutions are generally limited to specific public cloud providers or present maintainability issues. Remaining hurdles include the complexity of managing and operating such infrastructures in the presence of per-customer customizations and provider configurations. The Infrastructure as Code (IaC) paradigm is emerging as a key enabler for IaaS multi-clouds, to develop and manage infrastructure configurations. However, due to the complexity of the infrastructure life-cycle, the heterogeneity of the composing resources, and user customizations, this approach is far from being viable. In this paper, we explore an aspect-oriented approach to IaC deployment and management. We propose Mantus, an IaC-based multi-cloud builder composed of an aspect-oriented Domain-Specific Language called TML, the TOSCA Manipulation Language, and a corresponding aspect weaver to flexibly inject non-functional services into TOSCA infrastructure templates. We show the practical feasibility of our approach, along with good results in terms of performance and scalability.
Chrysaor: Fine-Grained, Fault-Tolerant Cloud-of-Clouds MapReduce
Pedro A. R. S. Costa, Fernando M. V. Ramos, Miguel Correia, IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), Madrid, Spain, May 2017
Abstract: MapReduce is a framework for processing large data sets much used in the context of cloud computing. MapReduce implementations like Hadoop can tolerate crashes and file corruptions, but not arbitrary faults. Unfortunately, there is evidence that arbitrary faults do occur and can affect the correctness of MapReduce job executions. Furthermore, many outages of major cloud offerings have been reported, raising concerns about the dependence on a single cloud. In this paper we propose a novel execution system that allows scaling out MapReduce computations to a cloud-of-clouds and tolerating arbitrary faults, malicious faults, and cloud outages. Our system, Chrysaor, is based on a fine-grained replication scheme that tolerates faults at the task level. Our solution has three important properties: it tolerates the above-mentioned classes of faults at reasonable cost; it requires minimal modifications to the users' applications; and it does not involve changes to the Hadoop source code. We performed an extensive evaluation of our system in Amazon EC2, showing that our fine-grained solution is efficient in terms of computation by recovering only faulty tasks. This is achieved without incurring a significant penalty for the baseline case (i.e., without faults) in most workloads.
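The fine-grained scheme can be pictured in a few lines of Python (ours, not Chrysaor's code): run each task on replicas in different clouds, vote on output digests, and re-execute only the tasks whose replicas disagree, rather than the whole job. Executor names and the quorum size are illustrative.

```python
import hashlib
from collections import Counter

def run_with_voting(task, inputs, executors, quorum=2):
    """executors: callables standing in for task slots on distinct clouds."""
    results = {}
    for name, execute in executors.items():
        out = execute(task, inputs)
        results[name] = (hashlib.sha256(repr(out).encode()).hexdigest(), out)
    counts = Counter(d for d, _ in results.values())
    digest, votes = counts.most_common(1)[0]
    if votes >= quorum:   # matching outputs mask a faulty/malicious replica
        return next(o for d, o in results.values() if d == digest)
    raise RuntimeError("no quorum: re-execute this task only, not the whole job")

# Three hypothetical clouds running the same task replica:
executors = {"aws": lambda t, x: t(x), "gcp": lambda t, x: t(x), "azure": lambda t, x: t(x)}
assert run_with_voting(sorted, [3, 1, 2], executors) == [1, 2, 3]
```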
Secure and Dependable Multi-Cloud Network Virtualization
Max Alaluna, Eric Vial, Nuno Neves, Fernando Ramos, EuroSys 1st International Workshop on Security and Dependability of Multi-Domain Infrastructures (XDOM0), Belgrade, Serbia, April 2017
Abstract: Existing multi-tenant network virtualization platforms have so far focused on the offer of conventional networking services by a single cloud provider. As such, they face limitations in terms of security and dependability, both in terms of the infrastructure itself and of the services offered to its customers. To address these challenges we present the design and implementation of Sirius, a network virtualization platform for multi-cloud environments. Contrary to existing solutions, Sirius considers not only connectivity and performance, but also security and dependability as first-class citizens, leveraging a substrate infrastructure composed of both public clouds and private data centers.
SDN-based Dynamic and Adaptive Policy Management System to Mitigate DDoS Attacks
Rishikesh Sahay, Gregory Blanc, Zonghua Zhang, Khalifa Toumi, Hervé Debar, EuroSys 1st International Workshop on Security and Dependability of Multi-Domain Infrastructures (XDOM0), Belgrade, Serbia, April 2017
Abstract: This paper presents a dynamic policy enforcement mechanism that allows ISPs to specify security policies to mitigate the impact of network attacks, taking into account the specific requirements of their customers. The proposed policy-based management framework leverages the recent Software-Defined Networking (SDN) technology to provide a centralized platform that allows network administrators to define global network and security policies, which are then enforced directly on the OpenFlow switches. One of the major objectives of such a framework is to achieve fine-grained and automated attack mitigation in the ISP network, ultimately reducing the impact of attacks and the collateral damage to customer networks. To evaluate the feasibility and effectiveness of the framework, we develop a prototype that serves one ISP and three customers. The experimental results demonstrate that our framework can successfully reduce the collateral damage on a customer network caused by attack traffic targeting another customer network. More interestingly, the framework provides rapid response and can mitigate an attack in a very short time.
Somewhat/Fully Homomorphic Encryption: implementation progress and challenges
Guillaume Bonnoron, Caroline Fontaine, Guy Gogniat, Vincent Herbert, Vianney Lapôtre, Vincent Migliore, Adeline Roux-Langlois, 2nd International Conference in honor of Professor Claude Carlet, Rabat, Morocco, April 2017
Abstract: This article aims to let readers learn about the existing efforts to secure and implement Somewhat/Fully Homomorphic Encryption ((S/F)HE) schemes, and about the problems to be tackled in order to progress toward their adoption. For that purpose, the article first provides a brief introduction to (S/F)HE. Then, it focuses on some practical issues related to the adoption of (S/F)HE schemes, i.e., the security parameters, the existing implementations and their limitations, and the management of the huge complexity caused by homomorphic calculation. These issues are analyzed with the help of recent related work published in the literature, and with the experience gained by the authors through their experiments.
Rethinking Permissioned Blockchains
M. Vukolic, BCC 2017 : The First ACM Workshop on Blockchain, Cryptocurrencies and Contracts (BCC’17), Abu Dhabi, UAE, April 2017
Abstract: Current blockchain platforms, especially the recent permissioned systems, have architectural limitations: smart contracts run sequentially, every node executes all smart contracts, consensus protocols are hard-coded, the trust model is static and not flexible, and non-determinism in smart-contract execution poses serious problems. Overcoming these limitations is critical for improving both functional properties of blockchains, such as confidentiality and consistency, and their non-functional properties, such as performance and scalability. We discuss these limitations in the context of permissioned blockchains, including an early version of the Hyperledger Fabric blockchain platform, and how a re-design of Hyperledger Fabric's architecture addresses them.
Secure Virtual Network Embedding in a Multi-Cloud Environment
Max Alaluna, Luís Ferrolho, Jose Rui Figueira, Nuno Neves, Fernando M. V. Ramos, arXiv.org, 03 March 2017
Abstract: Recently-proposed virtualization platforms give cloud users the freedom to specify their network topologies and addressing schemes. These platforms have, however, been targeting a single datacenter of a cloud provider, which is insufficient to support (critical) applications that need to be deployed across multiple trust domains while enforcing diverse security requirements. This paper addresses this problem by presenting a novel solution for a central component of network virtualization – the online network embedding, which finds efficient mappings of virtual network requests onto the substrate network. Our solution considers security as a first-class citizen, enabling the definition of flexible policies in three central areas: on the communications, where alternative security compromises can be explored (e.g., encryption); on the computations, supporting redundancy if necessary while capitalizing on hardware-assisted trusted executions; and across multiple clouds, including public and private facilities, with the associated trust levels. We formulate the solution as a Mixed Integer Linear Program (MILP), and evaluate our proposal against the most commonly used alternative. Our analysis gives insight into the trade-offs involved with the inclusion of security and trust into network virtualization, providing evidence that this notion may enhance profits under the appropriate cost model.
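For intuition, here is a much-simplified flavor of such an embedding MILP (our illustration only; the paper's formulation is richer, covering link mapping and per-cloud trust levels). Binary variable x_{iu} places virtual node i on substrate node u; sec, dem, cpu, cap are our shorthand for security level, security demand, CPU demand and capacity.

```latex
% Simplified node-embedding MILP (illustrative, not the paper's full model)
\begin{align*}
\min \quad & \sum_{i \in V_v} \sum_{u \in V_s} c_u \, x_{iu}
  && \text{(total placement cost)} \\
\text{s.t.} \quad
 & \sum_{u \in V_s} x_{iu} = 1 \quad \forall i \in V_v
  && \text{(each virtual node placed once)} \\
 & \mathit{sec}(u)\, x_{iu} \ge \mathit{dem}(i)\, x_{iu} \quad \forall i, u
  && \text{(host's security level suffices)} \\
 & \sum_{i \in V_v} \mathit{cpu}(i)\, x_{iu} \le \mathit{cap}(u) \quad \forall u \in V_s
  && \text{(substrate capacity)} \\
 & x_{iu} \in \{0, 1\}
\end{align*}
```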
Elastic State Machine Replication
Andre Nogueira, Antonio Casimiro, Alysson Bessani, IEEE Transactions on Parallel and Distributed Systems (IEEE T PARALL DISTR), March 2017
2016
Non-determinism in Byzantine Fault-Tolerant Replication
Christian Cachin, Simon Schubert, Marko Vukolic, 20th International Conference on Principles of Distributed Systems (OPODIS'16), Madrid, Spain, December 2016
Abstract: Service replication distributes an application over many processes for tolerating faults, attacks, and misbehavior among a subset of the processes. With the recent interest in blockchain technologies, distributed execution of one logical application has become a prominent topic. The established state-machine replication paradigm inherently requires the application to be deterministic. This paper distinguishes three models for dealing with non-determinism in replicated services, where some processes are subject to faults and arbitrary behavior (so-called Byzantine faults): first, the modular case that does not require any changes to the potentially non-deterministic application (nor access to its internal data); second, master-slave solutions, where ties are broken by a leader and the other processes validate the choices of the leader; and finally, applications that use cryptography and secret keys. Cryptographic operations and secrets must be treated specially because they require strong randomness to satisfy their goals. The paper also introduces two new protocols. First, Protocol Sieve uses the modular approach and filters out non-deterministic operations in an application. It ensures that all correct processes produce the same outputs and that their internal states do not diverge. A second protocol, called Mastercrypt, implements cryptographically secure randomness generation with a verifiable random function and is appropriate for most situations in which cryptographic secrets are involved. All protocols are described in a generic way and do not assume a particular implementation of the underlying consensus primitive.
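The modular "sieve" idea can be sketched in a few lines (ours, loosely following the abstract's description; the real Protocol Sieve runs atop an actual consensus primitive rather than a single process): execute an operation speculatively on every replica, agree on the output digests, and filter out any operation whose results diverge.

```python
import copy, hashlib, random
from collections import Counter

def digest(state):
    return hashlib.sha256(repr(state).encode()).hexdigest()

def sieve_step(states, op, threshold):
    """states: the replicas' states, simulated here in one process."""
    speculative = [op(copy.deepcopy(s)) for s in states]   # execute speculatively
    counts = Counter(digest(s) for s in speculative)
    d, votes = counts.most_common(1)[0]
    if votes >= threshold:
        # enough replicas computed the same result: commit it everywhere
        committed = next(s for s in speculative if digest(s) == d)
        return [copy.deepcopy(committed) for _ in states]
    return states   # divergent (non-deterministic) op: sieve it out, roll back

states = [0, 0, 0, 0]
states = sieve_step(states, lambda s: s + 1, threshold=3)   # deterministic: commits
assert states == [1, 1, 1, 1]
states = sieve_step(states, lambda s: s + random.choice([0, 1]), threshold=3)
# if the replicas' results diverged, the operation was filtered out unchanged
```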
Usage Control Policy Enforcement in SDN-based Clouds: A Dynamic Availability Service Use Case
Khalifa Toumi, Muhammad Idrees Sabir, Fabien Charmet, Reda Yaich, Gregory Blanc, 18th IEEE International Conference on High Performance Computing and Communications (HPCC 2016), Sydney, Australia
Abstract: With the growing interest in Software Defined Networking (SDN), and thanks to the programmability provided by SDN protocols like OpenFlow, network application developers have started implementing solutions to fit corporate needs, such as firewalls, load balancers and security services. In this paper, we present a novel solution to answer those needs with usage control policies. We design a policy-based management framework offering SDN network security policies. This approach is used to enforce performance requirements (e.g., to ensure a certain level of network connectivity). A top-down approach is proposed to refine the policies into the appropriate network rules via the OpenFlow protocol. Finally, we implement the solution with an availability service use case and provide a set of experiments to evaluate its efficiency.
Hardware/Software co-Design of an Accelerator for FV Homomorphic Encryption Scheme using Karatsuba Algorithm
Vincent Migliore, Maria Méndez Real, Vianney Lapotre, Arnaud Tisserand, Caroline Fontaine, Guy Gogniat, IEEE Transactions on Computers, 2016
Abstract: Somewhat Homomorphic Encryption (SHE) schemes allow operations to be carried out on data in the cipher domain. In a cloud computing scenario, personal information can be processed secretly, ensuring a high level of confidentiality. For many years, practical parameters of SHE schemes were overestimated, leading to only the FFT algorithm being considered to accelerate SHE in hardware. Nevertheless, recent work demonstrates that parameters can be lowered without compromising security [1]. Following this trend, this work investigates the benefits of using the Karatsuba algorithm instead of the FFT for the Fan-Vercauteren (FV) homomorphic encryption scheme. The proposed accelerator relies on a hardware/software co-design approach and is designed to perform fast arithmetic operations on degree-2560 polynomials with 135-bit coefficients, making it possible to compute small algorithms homomorphically. Compared to a functionally equivalent design using the FFT, our accelerator performs a homomorphic multiplication in 11.9 ms instead of 15.46 ms, and halves logic and register utilization on the FPGA.
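For readers unfamiliar with the algorithmic trade-off, here is textbook Karatsuba on polynomial coefficient lists (lowest degree first): one degree-n product costs three half-size products instead of four. This shows only the recursion the accelerator builds on; the real design works in hardware on 2560-coefficient polynomials with reduction mod (x^n + 1, q), which we omit.

```python
def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) for i in range(n)]

def poly_sub(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0) for i in range(n)]

def karatsuba(a, b):
    n = max(len(a), len(b), 1)
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    if n <= 2:  # tiny cutoff so the example below recurses; real code uses a larger one
        res = [0] * (2 * n - 1)
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                res[i + j] += x * y
        return res
    m = n // 2
    z0 = karatsuba(a[:m], b[:m])                                   # low  * low
    z2 = karatsuba(a[m:], b[m:])                                   # high * high
    z1 = karatsuba(poly_add(a[:m], a[m:]), poly_add(b[:m], b[m:]))  # (a0+a1)(b0+b1)
    mid = poly_sub(poly_sub(z1, z0), z2)                           # cross terms
    res = [0] * (2 * n - 1)
    for i, co in enumerate(z0):  res[i] += co
    for i, co in enumerate(mid): res[i + m] += co
    for i, co in enumerate(z2):  res[i + 2 * m] += co
    return res

assert karatsuba([1, 2, 3], [4, 5, 6]) == [4, 13, 28, 27, 18]
```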
Overcoming Barriers for Ubiquitous User- Centric Healthcare Services
A. Palesandro, C. G. Guegan, M. Lacoste, N. Bennani, IEEE Cloud Computing, Special Issue on Cloud Computing for Enhanced Living Environments, 2016.
Abstract: The cloud model is rapidly evolving, with maturing intercloud architectures and progressive integration of sparse, geodistributed resources into large datacenters. The single-provider administrative barrier is also increasingly crossed by applications, allowing new verticals to benefit from the multicloud model. For instance, in home healthcare systems, transparent usage of resources from multiple providers enables "follow-me" scenarios, where healthcare services are accessible anywhere, anytime, with quality-of-service (QoS) guarantees. However, transparency might be at odds with security and jurisdictions, imposing restrictions on where data and applications might be stored and run. Existing intercloud approaches either disrupt application deployment mechanisms or compromise infrastructure homogeneity, making enforcing a uniform QoS level more complex, notably for protection. This article introduces Orchestration for beyond Intercloud Security (Orbits), an infrastructure-as-a-service-level architecture that enables flexible and legacy intercloud application deployment for mobile remote healing, while providing a homogeneous service abstraction across multiple clouds. The authors also present a work-in-progress prototype and several benchmarks to demonstrate the viability of the approach and highlight key implementation choices.
Exploring Key-Value Stores in Multi-Writer Byzantine-Resilient Register Emulations
T. Oliveira, R. Mendes and A. Bessani, 20th International Conference on Principles of Distributed Systems (OPODIS'16), Madrid, Spain, December 2016.
Abstract: Resilient register emulation is a fundamental technique to implement dependable storage and distributed systems. In data-centric models, where servers are modeled as fail-prone base objects, classical solutions achieve resilience by using fault-tolerant quorums of read-write registers or read-modify-write objects. Recently, this model has attracted renewed interest due to the popularity of cloud storage providers (e.g., Amazon S3, Google Storage, Microsoft Azure Storage), that can be modeled as key-value stores (KVSs) and combined for providing secure and dependable multi-cloud storage services. In this paper we present three novel wait-free multi-writer multi-reader regular register emulations on top of Byzantine-prone KVSs. We implemented and evaluated these constructions using five existing cloud storage services and show that their performance matches or surpasses existing data-centric register emulations.
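The versioning skeleton underlying such register emulations looks roughly as follows (our crash-fault-flavored sketch; the paper's Byzantine-resilient constructions add integrity verification and precise quorum reasoning on top of this): every write stores its value under a totally ordered (timestamp, writer_id) version in each KVS, and a read returns the value of the highest version it sees.

```python
class MWRegister:
    def __init__(self, kvss, writer_id):
        self.kvss = kvss      # dicts standing in for cloud KVSs (S3, Azure, ...)
        self.wid = writer_id

    def _latest(self, kvs):
        return max(kvs, default=(0, 0))   # highest (timestamp, writer) key

    def write(self, value):
        # read the highest timestamp, then write the next version everywhere
        ts = max(self._latest(kvs)[0] for kvs in self.kvss) + 1
        for kvs in self.kvss:             # a real system waits for a quorum of acks
            kvs[(ts, self.wid)] = value

    def read(self):
        top = max(self._latest(kvs) for kvs in self.kvss)
        return next((kvs[top] for kvs in self.kvss if top in kvs), None)

clouds = [{}, {}, {}]
w1, w2 = MWRegister(clouds, 1), MWRegister(clouds, 2)
w1.write("a"); w2.write("b")
assert w2.read() == "b"    # (2, 2) > (1, 1) in version order
```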
Constant-Size Ciphertext Attribute-based Encryption from Multi-Channel Broadcast Encryption
S. Canard, C. Trinh, Twelfth International Conference on Information Systems Security (ICISS 2016), Jaipur, India, December 2016.
Abstract: Attribute-based encryption (ABE) is an extension of traditional public key encryption in which the encryption and decryption phases are based on the user's attributes. More precisely, we focus on ciphertext-policy ABE (CP-ABE), where the secret key is associated with a set of attributes and the ciphertext is generated with an access policy. It then becomes feasible to decrypt a ciphertext only if one's attributes satisfy the used access policy. CP-ABE schemes with constant-size ciphertext supporting fine-grained access control have been investigated at AsiaCrypt'15 and then at TCC'16. The former makes use of the conversion technique between ABE and spatial encryption, and the latter studies the pair encodings framework. In this paper, we give a new approach to construct such CP-ABE schemes. More precisely, we propose private CP-ABE schemes with constant-size ciphertext, supporting CNF (Conjunctive Normal Form) access policies, with the simple restriction that each attribute can only appear kmax times in the access formula. Our two constructions are based on the BGW scheme from Crypto'05. The first scheme is selectively secure (in the standard model), while the second reaches selective CCA security (in the random oracle model).
Verifiable Message-Locked Encryption
S. Canard, F. Laguillaumie, M. Paindavoine, 15th International Conference on Cryptology and Network Security (CANS 2016), Milan, Italy, November 2016.
Abstract: One of today's main challenges related to cloud storage is to maintain the functionalities and the efficiency of customers' and service providers' usual environments, while protecting the confidentiality of sensitive data. Deduplication is one of those functionalities: it enables cloud storage providers to save a lot of memory by storing only once a file uploaded several times. But classical encryption blocks deduplication. One needs to use a "message-locked encryption" (MLE), which allows the detection of duplicates and the storage of only one encrypted file on the server, which can be decrypted by any owner of the file. However, in most existing schemes, a user can bypass this deduplication protocol. In this article, we provide server verifiability for MLE schemes: the servers can verify that the ciphertexts are well-formed. This property, which we formally define, forces a customer to prove that she complied with the deduplication protocol, thus preventing her from deviating from the prescribed functionality of MLE. We call it deduplication consistency. To achieve this deduplication consistency, we provide (i) a generic transformation that applies to any MLE scheme and (ii) an ElGamal-based deduplication-consistent MLE scheme, which is secure in the random oracle model.
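For context, the textbook MLE instance, convergent encryption, fits in a few lines: the key is derived from the message itself, so identical files produce identical ciphertexts and the server can deduplicate without reading plaintext. This sketch (ours) shows only the baseline; the paper's contribution is making such schemes verifiable, which is not attempted here. SHA-256 keystreaming is a dependency-free stand-in for a real cipher.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def mle_encrypt(message: bytes):
    key = hashlib.sha256(message).digest()          # message-locked key
    ct = bytes(m ^ k for m, k in zip(message, _keystream(key, len(message))))
    tag = hashlib.sha256(ct).digest()               # duplicate-detection tag
    return key, ct, tag

def mle_decrypt(key: bytes, ct: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, len(ct))))

# Two owners of the same file derive the same ciphertext => dedupable.
k1, c1, t1 = mle_encrypt(b"same file")
k2, c2, t2 = mle_encrypt(b"same file")
assert c1 == c2 and t1 == t2 and mle_decrypt(k1, c1) == b"same file"
```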
User-Centric Security and Dependability in Cloud of Clouds
M. Lacoste, M. Miettinen, N. Neves, F. Ramos, M. Vukolic, F. Charmet, R. Yaich, K. Obornzynski, G. Vernekar, P. Sousa, IEEE Cloud Computing, Special Issue on Cloud Security, 2016.
Abstract: A promising vision of distributed cloud computing is a unified world of multiple clouds, with business benefits at hand. In practice, lack of interoperability among clouds and management complexity raise many security and dependability concerns. The authors propose secure Supercloud computing as a new paradigm for security and dependability management of distributed clouds. Supercloud follows a user-centric and self-managed approach to avoid technology and vendor lock-ins. In Supercloud, users define U-Clouds, which are isolated sets of computation, data, and networking services run over both private and public clouds operated by multiple providers, with customized security requirements as well as self-management for reducing administration complexity. The article presents the Supercloud architecture with a focus on its security infrastructure. The authors illustrate through several use cases the practical applicability of the Supercloud paradigm.
XFT: Practical Fault Tolerance Beyond Crashes
S. Liu, P. Viotti, C. Cachin, V. Quéma, M. Vukolic, 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 2016), Savannah, GA, USA, November 2016.
Abstract: Despite years of intensive research, Byzantine fault-tolerant (BFT) systems have not yet been adopted in practice. This is due to the additional cost of BFT in terms of resources, protocol complexity and performance, compared with crash fault-tolerance (CFT). This overhead of BFT comes from the assumption of a powerful adversary that can fully control not only the Byzantine faulty machines, but at the same time also the message delivery schedule across the entire network, effectively inducing communication asynchrony and partitioning otherwise correct machines at will. To many practitioners, however, such strong attacks appear irrelevant. In this paper, we introduce cross fault tolerance (XFT), a novel approach to building reliable and secure distributed systems, and apply it to the classical state-machine replication (SMR) problem. In short, an XFT SMR protocol provides the reliability guarantees of widely used asynchronous CFT SMR protocols such as Paxos and Raft, but also tolerates Byzantine faults in combination with network asynchrony, as long as a majority of replicas are correct and communicate synchronously. This allows the development of XFT systems at the price of CFT (already paid for in practice), yet with strictly stronger resilience than CFT, sometimes even stronger than BFT itself. As a showcase for XFT, we present XPaxos, the first XFT SMR protocol. Although it offers much stronger resilience than CFT SMR at no extra resource cost, the performance of XPaxos matches that of state-of-the-art CFT protocols.
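The XFT guarantee quoted above reduces to a one-line predicate (our paraphrase of the abstract, not code from the paper): the Byzantine, crashed, and network-partitioned replicas together must remain a minority.

```python
# Sketch of the XFT fault model as we read it from the abstract.
def xft_tolerates(n, byzantine, crashed, partitioned):
    return byzantine + crashed + partitioned <= (n - 1) // 2

assert xft_tolerates(5, byzantine=1, crashed=1, partitioned=0)      # safe with n = 5
assert not xft_tolerates(3, byzantine=1, crashed=1, partitioned=0)  # minority violated
```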
PhishEye: Live Monitoring of Sandboxed Phishing Kits
X. Han, N. Kheir, D. Balzarotti, 23rd ACM conference on Computer and Communications Security (CCS), Vienna, Austria, October 2016.
Abstract: Phishing is a form of online identity theft that deceives unaware users into disclosing their confidential information. While significant effort has been devoted to the mitigation of phishing attacks, much less is known about the entire life-cycle of these attacks in the wild, which constitutes, however, a main step toward devising comprehensive anti-phishing techniques. In this paper, we present a novel approach to sandbox live phishing kits that completely protects the privacy of victims. By using this technique, we perform a comprehensive real-world assessment of phishing attacks, their mechanisms, and the behavior of the criminals, their victims, and the security community involved in the process based on data collected over a period of five months. Our infrastructure allowed us to draw the first comprehensive picture of a phishing attack, from the time in which the attacker installs and tests the phishing pages on a compromised host, until the last interaction with real victims and with security researchers. Our study presents accurate measurements of the duration and effectiveness of this popular threat, and discusses many new and interesting aspects we observed by monitoring hundreds of phishing campaigns.
A Novel Proof of Data Possession Scheme based on Set-Homomorphic Operations
N. Kaaniche, M. Laurent, S. Canard, 2nd Workshop on Security in Clouds (SEC2 2016), Lorient, France, July 2016.
Abstract: The prospect of outsourcing an increasing amount of data to a third party and the abstract nature of the cloud promote the proliferation of security and privacy challenges, namely remote data possession checking. This work addresses this security concern, while supporting the verification of several data blocks outsourced across multiple storage nodes. We propose a set-homomorphic proof of data possession, called SHoPS, supporting the verification of aggregated proofs. It is a deterministic Proof of Data Possession (PDP) scheme based on interactive proof protocols. Our approach has several advantages. First, it supports public verifiability, where the data owner delegates the verification process to another entity, thus releasing him from the burden of periodic verifications. Second, it allows the aggregation of several proofs and the verification of a subset of data files' proofs while providing an attractive communication overhead.
Verifiable Message-Locked Encryption
S. Canard, F. Laguillaumie, M. Paindavoine, 2nd Workshop on Security in Clouds (SEC2 2016), Lorient, France, July 2016.
Abstract: One of today's main challenges related to cloud storage is to maintain the functionalities and the efficiency of customers' and service providers' usual environments while protecting the confidentiality of sensitive data. Deduplication is one of those functionalities: it enables cloud storage providers to save a lot of memory by storing only once a file uploaded several times. However, classical encryption schemes block deduplication. One needs to use a "message-locked encryption" scheme (MLE), which allows the detection of duplicates and the storage of only one encrypted file on the server, which can be decrypted by any owner of the file. However, in most existing schemes, a user can bypass this deduplication protocol. In this article, we provide server verifiability for MLE schemes: the servers can verify that the ciphertexts are well-formed. This property forces a customer to prove that she complied with the deduplication protocol, thus preventing her from deviating from the prescribed functionality of MLE. Then, we provide an MLE scheme satisfying this new security property. To achieve deduplication consistency, our construction primarily relies on zero-knowledge proofs. Unlike Abadi et al.'s MLE, we instantiate those proofs, so that we obtain a more efficient scheme, secure in the random oracle model.
Towards Management of Chains of Trust for Multi-Clouds with Intel SGX
H. Kanzari, Marc Lacoste, 2nd Workshop on Security in Clouds (SEC2 2016), Lorient, France, July 2016.
Abstract: In multi-cloud infrastructures, despite the great diversity of current isolation technologies, a federating model to manage trust across layers or domains is still missing. Attempts to formalize trust establishment through horizontal and vertical Chains of Trust (CoTs) still lack a precise supporting technology. This paper is a first step towards reconciling the two standpoints into a broader trust management framework. We consider the horizontal, single-layer case, focusing on Intel SGX as a promising isolation technology. We propose a protocol for establishing trust along a chain of Intel SGX enclaves, both when they are located on the same platform and when they reside on remote platforms. A preliminary evaluation of an OpenSGX implementation shows that our protocols exhibit encouraging scalability.
Expression and Enforcement of Security Policy for Virtual Resource Allocation in IaaS Cloud
Y. Li, N. Cuppens-Boulahia, J.M. Crom, F.Cuppens, V.Frey, 31st International Conference on ICT Systems Security and Privacy Protection (IFIP SEC), Ghent, Belgium, May 2016.
Abstract: Many research works focus on the adoption of cloud infrastructure as a service (IaaS), where virtual machines (VMs) are deployed on multiple cloud service providers (CSPs). In terms of virtual resource allocation driven by security requirements, most proposals take the cloud client's perspective into account but do not address such requirements from the CSP's side. Besides, it is a shared understanding that using a formal policy model to support the expression of security requirements can drastically ease cloud resource management and conflict resolution. To address these limitations, our work is based on a formal model that applies organization-based access control (OrBAC) policy to IaaS resource allocation. In this paper, we first integrate the attribute-based security requirements into the service level agreement (SLA) contract. After transformation, the security requirements are expressed as OrBAC rules, and these rules are considered together with other, non-security demands during the enforcement of resource allocation. We have implemented a prototype for VM scheduling in an OpenStack-based multi-cloud environment and evaluated its performance.
Certificate Validation in Secure Computation and Its Use in Verifiable Linear Programming
S. de Hoogh, B. Schoenmakers, M. Veeningen, Proceedings of AFRICACRYPT 2016 (International Conference on Cryptology in Africa), Morocco, April 2016.
Abstract: For many applications of secure multiparty computation it is natural to demand that the output of the protocol is verifiable. Verifiability should ensure that incorrect outputs are always rejected, even if all parties executing the secure computation collude. Since the inputs to a secure computation are private, and potentially the outputs are private as well, adding verifiability is in general hard and costly. In this paper we focus on privacy-preserving linear programming as a typical and practically relevant case for verifiable secure multiparty computation. We introduce certificate validation as an effective technique for achieving verifiable linear programming. Rather than verifying the computation proper, which involves many iterations of the simplex algorithm, we extend the output of the secure computation with a certificate. The certificate allows for efficient and direct validation of the correctness of the output. The overhead incurred by the computation of the certificate is marginal. For the validation of a certificate we design particularly efficient distributed-prover zero-knowledge proofs, fully exploiting the fact that we can use ElGamal encryption for this purpose, hence avoiding the use of more elaborate cryptosystems such as Paillier encryption. We also formulate appropriate security definitions for our approach, and prove security for our protocols in this model, paying special attention to ensuring properties such as input independence. By means of several experiments performed in a real multi-cloud-provider environment, we show that the overall performance for verifiable linear programming is very competitive, incurring minimal overhead compared to protocols providing no correctness guarantees at all.
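Stripped of the secure-computation machinery, the certificate idea is plain linear-programming duality: instead of re-running the simplex iterations, the verifier checks primal feasibility, dual feasibility, and matching objective values. A sketch in the clear, with a hypothetical toy instance (the paper performs the analogous checks inside distributed-prover zero-knowledge proofs):

```python
import numpy as np

def validate_lp_certificate(A, b, c, x, y, tol=1e-9):
    """For  max c^T x  s.t.  Ax <= b, x >= 0:  (x, y) certify optimality."""
    primal_feasible = np.all(A @ x <= b + tol) and np.all(x >= -tol)
    dual_feasible = np.all(A.T @ y >= c - tol) and np.all(y >= -tol)
    objectives_match = abs(c @ x - b @ y) <= tol    # strong duality
    return primal_feasible and dual_feasible and objectives_match

# Toy instance:  max x1 + x2  s.t.  x1 + 2*x2 <= 4,  3*x1 + x2 <= 6
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0]); c = np.array([1.0, 1.0])
x = np.array([1.6, 1.2])     # primal optimum
y = np.array([0.4, 0.2])     # dual certificate
assert validate_lp_certificate(A, b, c, x, y)
```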
Knowledge Connectivity Requirements for Solving Byzantine Consensus with Unknown Participants
E.A.P. Alchieri, A. Bessani, F. Greve, J. da Silva Fraga, IEEE Transactions on Dependable and Secure Computing, March 2016.
Abstract: Consensus is a fundamental building block used to solve many practical problems that appear in reliable distributed systems. Although consensus has been widely studied in the context of standard networks, few studies have addressed it in dynamic, self-organizing systems characterized by unknown networks. While in a standard network the set of participants is static and known, in an unknown network the set and number of participants are not known in advance. This work studies the problem of Byzantine Fault-Tolerant Consensus with Unknown Participants (BFT-CUP). This new problem aims at solving consensus in unknown networks with the additional requirement that participants in the system may behave maliciously. The work presents the necessary and sufficient knowledge connectivity conditions for solving BFT-CUP under minimal synchrony requirements, and proposes algorithms that are shown to be optimal in terms of synchrony and knowledge connectivity among participants in the system.
(Literally) above the clouds: virtualizing the network over multiple clouds
M. Alaluna, F. M. V. Ramos, N. Neves, IEEE Conference on Network Softwarization (NetSoft), Seoul, Korea, March 2016.
Abstract: Recent SDN-based solutions give cloud providers the opportunity to extend their "as-a-service" model with the offer of complete network virtualization. They provide tenants with the freedom to specify the network topologies and addressing schemes of their choosing, while guaranteeing the required level of isolation among them. These platforms, however, have been targeting the datacenter of a single cloud provider with full control over the infrastructure.
Consensus in a Box: Inexpensive Coordination in Hardware
Z. István, D. Sidler, G. Alonso, M. Vukolić, NSDI 16: 13th USENIX Symposium on Networked Systems Design and Implementation, Santa Clara, CA, USA, March 2016.
Abstract: Consensus mechanisms for ensuring consistency are some of the most expensive operations in managing large amounts of data. Often, there is a trade-off that involves reducing the coordination overhead at the price of accepting possible data loss or inconsistencies. As the demand for more efficient data centers increases, it is important to provide better ways of ensuring consistency without affecting performance. In this paper we show that consensus (atomic broadcast) can be removed from the critical path of performance by moving it to hardware. As a proof of concept, we implement ZooKeeper's atomic broadcast at the network level using an FPGA. Our design uses both TCP and an application-specific network protocol. The design can be used to push more value into the network, e.g., by extending the functionality of middleboxes or adding inexpensive consensus to in-network processing nodes. To illustrate how this hardware consensus can be used in practical systems, we have combined it with a main-memory key-value store running on specialized microservers (also built on FPGAs). This results in a distributed service similar to ZooKeeper that exhibits high and stable performance. This work can be used as a blueprint for further specialized designs.
Trinocchio: Privacy-Preserving Outsourcing by Distributed Verifiable Computation
B. Schoenmakers, M. Veeningen, N. de Vreede, Proceedings ACNS 2016, London, UK, January 2016.
Abstract: Verifiable computation allows a client to outsource computations to a worker with a cryptographic proof of correctness of the result that can be verified faster than performing the computation. Recently, the Pinocchio system achieved faster verification than computation in practice for the first time. Unfortunately, Pinocchio and other efficient verifiable computation systems require the client to disclose the inputs to the worker, which is undesirable for sensitive inputs. To solve this problem, we propose Trinocchio: a system that distributes Pinocchio to three (or more) workers, none of which individually learns which inputs it is computing on. Each worker essentially performs the work for a single Pinocchio proof; verification by the client remains the same. Moreover, we extend Trinocchio to enable joint computation with multiple mutually distrusting inputters and outputters and still very fast verification. We show the feasibility of our approach by analysing the performance of an implementation in two case studies.
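The privacy half of the design rests on secret-sharing the client's inputs across the workers. A minimal additive-sharing sketch (ours; the real system uses Shamir sharing so the workers can also compute on the shares and jointly produce a Pinocchio proof, none of which is shown here). The field modulus is chosen purely for illustration.

```python
import secrets

P = 2**61 - 1   # a Mersenne prime, used here only as an example field

def share(x, n=3):
    """Additive secret sharing: any n-1 shares reveal nothing about x."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

assert reconstruct(share(42)) == 42   # each worker alone sees a random-looking value
```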
2015
Similarity Measure for Security Policies in Service Provider Selection
Y. Li, N. Cuppens-Boulahia, J.-M. Crom, F. Cuppens, V. Frey, X. Ji, 11th International Conference on Information Systems Security (ICISS2015), Kolkata, India, December 2015.
Abstract: The interaction between different applications and services requires expressing their security properties. These are typically defined as security policies, which aim at specifying the diverse privileges of different actors. Similarity measures for comparing security policies have become a crucial technique in a variety of scenarios, such as finding the cloud service providers that satisfy clients' security concerns. Existing approaches range from semantic to numerical dimensions, and most work focuses on XACML policies. However, few efforts have been made to extend the measurement approach to multiple policy models and apply it to concrete scenarios. In this paper, we propose a generic and lightweight method to compare and evaluate security policies belonging to different models. Our technique enables clients to quickly locate service providers with potentially similar policies. Compared with other works, our approach takes the logical relationships among policy elements into account.
Towards User-Centric Management of Security and Dependability in Clouds of Clouds
M. Lacoste, F. Charmet, 6th International Conference on E-Democracy, Athens, Greece, December 2015.
Abstract: SUPERCLOUD aims to fulfil the vision of user-centric, secure and dependable clouds of clouds through a new security management architecture and infrastructure. It will support user-centric deployments across multi-clouds, enabling the composition of innovative trustworthy services and thus uplifting Europe's innovation capacity and competitiveness.
Separating the WHEAT from the Chaff: An Empirical Design for Georeplicated State Machines
J. Sousa, A. Bessani, The International Symposium on Reliable Distributed Systems, Montreal, Canada, September 2015.
Abstract: State machine replication is a fundamental technique for implementing consistent fault-tolerant services. In recent years, several protocols have been proposed for improving the latency of this technique when the replicas are deployed in geographically-dispersed locations. In this work we evaluate some representative optimizations proposed in the literature by implementing them on an open-source state machine replication library and running experiments on geographically-diverse PlanetLab nodes and Amazon EC2 regions. Interestingly, our results show that some optimizations widely used for improving the latency of geo-replicated state machines do not bring significant benefits, while others – not yet considered in this context – are very effective. Based on this evaluation, we propose WHEAT, a configurable crash and Byzantine fault-tolerant state machine replication library that uses the optimizations we observed as most effective in reducing SMR latency. WHEAT employs novel voting assignment schemes that, by using a few additional spare replicas, enable the system to make progress without needing to access a majority of replicas. Our evaluation shows that a WHEAT system deployed in several Amazon EC2 regions presents a median latency up to 56% lower than a "normal" SMR protocol.
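A toy illustration (ours, not WHEAT's actual assignment) of why weighted voting helps: with one spare replica (n = 4 for one crash fault) and extra weight on the fastest replica, the quorum weight threshold becomes reachable with just two replicas instead of the usual majority of three.

```python
from itertools import combinations

def quorums(weights, threshold):
    ids = range(len(weights))
    return [set(q) for k in range(1, len(weights) + 1)
            for q in combinations(ids, k)
            if sum(weights[i] for i in q) >= threshold]

weights, threshold = [2, 1, 1, 1], 3        # total weight 5; threshold > 5/2
qs = quorums(weights, threshold)
assert all(a & b for a in qs for b in qs)   # safety: any two quorums intersect
assert {0, 1} in qs                         # two replicas already form a quorum
# liveness: after any single crash, some quorum is still fully available
assert all(any(q <= set(range(4)) - {crash} for q in qs) for crash in range(4))
```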
The role of cloud services in malicious software: trends and insights
X. Han, N. Kheir, D. Balzarotti, Proceedings of DIMVA 2015, Milan, Italy, July 2015.
Abstract: In this paper we investigate the way cyber-criminals abuse public cloud services to host part of their malicious infrastructures, including exploit servers to distribute malware, C&C servers to manage infected terminals, redirectors to increase anonymity, and drop zones to host stolen data. We conduct a large-scale analysis of all the malware samples submitted to the Anubis malware analysis system between 2008 and 2014. For each sample, we extracted and analyzed all malware interactions with Amazon EC2, a major public cloud service provider, in order to better understand the malicious activities that involve public cloud services. In our experiments, we distinguish between benign cloud services that are passively used by malware (such as file sharing, URL shortening, and pay-per-install services) and dedicated machines that play a key role in the malware infrastructure. Our results reveal that cyber-criminals sustain long-lived operations through the use of public cloud resources, either as a redundant or a major component of their malware infrastructures. We also observe that the number of malicious and dedicated cloud-based domains increased almost fourfold between 2010 and 2013. To understand the reasons behind this trend, we also present a detailed analysis using public DNS records. For instance, we observe that certain dedicated malicious domains hosted on the cloud remain active for an average of 110 days after they are first observed in the wild.
How many planet-wide leaders should there be?
S. Liu, M. Vukolic, Distributed Cloud Computing Workshop, Portland, USA, June 2015.
Abstract: Geo-replication is becoming increasingly important for modern planetary-scale distributed systems, yet it comes with a specific challenge: latency, bounded by the speed of light. In particular, clients of a geo-replicated system must communicate with a leader, which must in turn communicate with other replicas: the wrong selection of a leader may result in unnecessary round-trips across the globe. Classical protocols, such as the celebrated Paxos, have a single leader, making them unsuitable for serving widely dispersed clients. To address this issue, several all-leader geo-replication protocols have been proposed recently, in which every replica acts as a leader. However, because these protocols require coordination among all replicas, committing a client's request at some replica may incur the so-called "delayed commit" problem, which can introduce an even higher latency than a classical single-leader majority-based protocol such as Paxos. In this paper, we argue that the "right" choice of the number of leaders in a geo-replication protocol depends on the given replica configuration, and we propose Droopy, an optimization for state machine replication protocols that explores the space between single-leader and all-leader by dynamically reconfiguring the leader set. We implement Droopy on top of Clock-RSM, a state-of-the-art all-leader protocol. Our evaluation on Amazon EC2 shows that, under typical imbalanced workloads, Droopy-enabled Clock-RSM efficiently reduces latency compared to native Clock-RSM, whereas in other cases the latency is the same as that of native Clock-RSM.
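The leader-set selection problem Droopy tackles can be prototyped by brute force (our toy cost model of the "delayed commit" effect: a request contacts its closest leader, which must then coordinate with every other leader before committing; real protocols reconfigure dynamically rather than enumerating subsets):

```python
from itertools import combinations

def expected_latency(leaders, lat, load):
    total = sum(w * min(lat[s][l] + max(lat[l][m] for m in leaders)
                        for l in leaders)
                for s, w in load.items())
    return total / sum(load.values())

def best_leader_set(sites, lat, load):
    return min((expected_latency(ls, lat, load), ls)
               for k in range(1, len(sites) + 1)
               for ls in combinations(sites, k))

# Hypothetical one-way latencies (ms) and an imbalanced workload:
lat = {"us": {"us": 0, "eu": 80, "ap": 150},
       "eu": {"us": 80, "eu": 0, "ap": 120},
       "ap": {"us": 150, "eu": 120, "ap": 0}}
load = {"us": 90, "eu": 5, "ap": 5}
cost, leaders = best_leader_set(["us", "eu", "ap"], lat, load)
assert leaders == ("us",)   # a single leader near the hot site beats all-leader here
```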
Nested Virtualization meets Micro-Hypervisors: Towards a Virtualization Architecture for User-Centric Multi-Clouds
A. Palesandro, M. Lacoste, C. Ghedira-Guegan, N. Bennani, SEC2 2015 (First ComPAS Workshop on Cloud Security), June 2015.
Abstract: After a decade of cloud computing, the user-centric, fully interoperable, multi-provider cloud remains a mirage. In currently deployed architectures, "horizontal" multi-cloud interoperability limitations come on top of "vertical" multi-layer security concerns. In this paper, we argue that an architecture with a hybrid design could be a viable solution. Indeed, we present a new virtualization architecture combining the micro-hypervisor (MH), nested virtualization (NV) and component-based hypervisor (CBH) paradigms. Leveraging NV interoperability and legacy support, the architecture provides users with a transparent federation of multi-provider resources. We also adopt an MH including CBH-like modules as the NV lower-layer hypervisor, both to achieve a minimal TCB and to enable users to directly control the hypervisor components managing their resources.
Software-Defined Networks: On the Road to the Softwarization of Networking
F. M. V. Ramos, D. Kreutz, P. Veríssimo, Cutter IT Journal, May 2015.
Abstract: Traditional IP networks are complex and hard to manage. The vertical integration of the infrastructure, with the control and data planes tightly coupled in network equipment, makes it a challenging task to build and maintain efficient networks in an era of cloud computing. Software-Defined Networking (SDN) breaks this coupling by segregating network control from routers and switches and by logically centralizing it in an external entity that resides in commodity servers. This way, SDN provides the flexibility required to dynamically program the network, promoting the “softwarization” of networking.
In this article we introduce this new paradigm and show how it breaks the status quo in networking. We present the most relevant building blocks of the infrastructure and discuss how SDN is leading to a horizontal industry based on programmable and open components. We pay particular attention to use cases that demonstrate how IT companies such as Google, Microsoft, and VMware are embracing SDN to operate efficient networks and offer innovative networking services.
On the Consistency of Heterogeneous Composite Objects
A. Bessani, R. Mendes, T. Oliveira, Proceedings of the First Workshop on Principles and Practice of Consistency for Distributed Data, March 2015.
Abstract: Several recent cloud-backed storage systems advocate the composition of a number of cloud services for improving performance and fault tolerance (e.g., [1, 3, 4]). An interesting aspect of these compositions is that the consistency guarantees they provide depend on the consistency of the base services, which are normally different. In this short paper we discuss two ways in which these services can be composed and the implications in terms of the consistency of the composed object. Although these techniques were devised (or observed) when solving practical problems in dealing with the eventual consistency guarantees of current cloud storage services (e.g., Amazon S3 [6], Windows Azure Blob Storage [7]), we believe they might be of general interest and deserve the attention of the community. In particular, we want to discuss some initial ideas about the theoretical underpinnings of object compositions in which base objects provide different consistency guarantees.