Secure Computing using Certified Software and Trusted Hardware

Rohit Sinha

Electrical Engineering and Computer Sciences
University of California at Berkeley

Technical Report No. UCB/EECS-2017-216
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-216.html

December 14, 2017

Copyright © 2017, by the author(s). All rights reserved.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

Secure Computing using Certified Software and Trusted Hardware

By Rohit Sinha

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences in the Graduate Division of the University of California, Berkeley.

Committee in charge:
Professor Sanjit A. Seshia, Chair
Professor David Wagner
Professor George Necula
Associate Professor Antonio Montalban

Fall 2017

The dissertation of Rohit Sinha, titled Secure Computing using Certified Software and Trusted Hardware, is approved.

Copyright 2017 by Rohit Sinha

Abstract

Secure Computing using Certified Software and Trusted Hardware

by Rohit Sinha

Doctor of Philosophy in Engineering – Electrical Engineering and Computer Sciences

University of California, Berkeley

Professor Sanjit A. Seshia, Chair

Building applications that ensure confidentiality of sensitive data is a non-trivial task.
Such applications constantly face threats due to vulnerabilities in the application's code, as well as infrastructure attacks mounted by malicious datacenter insiders and exploits in the lower computing layers (i.e., OS, hypervisor, BIOS, firmware) that the application relies upon. This dissertation presents a novel approach for developing and verifying applications with provable confidentiality guarantees, even in the presence of such privileged adversaries.

Our primary defense against infrastructure attacks is the use of trusted primitives, such as Intel SGX enclaves, for isolating sensitive code and data within protected memory regions; enclaves are inaccessible to all other software running on the machine (i.e., the OS, hypervisor, etc.), thus removing these large software layers from the trusted computing base (TCB). A central question addressed by this thesis is how trusted hardware primitives can be used safely to build the trusted components of modern applications with provable guarantees. Prior experience suggests that even expert developers write unsafe programs that leak sensitive data due to programming errors and side-channel attacks. To address this problem, this thesis makes contributions in formal threat models, modeling and specification of trusted platforms, and techniques to verify confidentiality properties of enclave programs.

First, this thesis formalizes adversary models, an abstract, interface-level model of trusted platforms (including Intel SGX and MIT Sanctum), and formal semantics of enclave execution. This formal framework is required for reasoning about a program's behavior in the presence of a privileged adversary.

Next, this thesis presents tools and techniques for certifying confidentiality (at the binary level), a property that we decompose into the following desiderata: 1) absence of explicit leaks of secrets via the enclave's outputs, and 2) protection against certain side-channel leaks; specifically, we remove leaks via the page-level memory access pattern, which is a new channel available to privileged adversaries. For both desiderata, we develop verification tools and evaluate them on application binaries, including Map-Reduce programs from the Microsoft VC3 system, SPEC benchmarks, and several machine learning algorithms.

Dedicated to Mummy and Papa

Contents

List of Figures
List of Tables
Acknowledgments

1 Introduction
1.1 Thesis Statement
1.2 Thesis Contributions
1.2.1 Modeling and Verification of Enclave Programs, Platforms, and Adversaries
1.2.2 Design and Verification Techniques for Enclave Programs
1.3 Related Work
1.4 Thesis Organization

2 Background
2.1 Attacks from a Privileged Software Adversary
2.2 Enclaves using Trusted Hardware Primitives
2.2.1 Intel SGX Enclaves
2.2.2 RISC-V Sanctum Enclaves
2.3 Sample Applications of Enclaves
2.3.1 One Time Password Service
2.3.2 VC3: Trustworthy Data Analytics using SGX
2.4 Challenges of Trusted Computing using Enclaves
I Trusted Platforms: Modeling and Verification

3 Formal Semantics of Enclave Execution
3.1 Enclave Program Representation
3.2 Enclave's State
3.3 Enclave's Inputs and Outputs
3.4 Syntax of Enclave Code
3.5 Semantics of Enclave Code
3.6 Model of Execution within Enclaves
3.7 Summary

4 Formal Modeling of Trusted Platforms and Privileged Adversaries
4.1 The Trusted Abstract Platform
4.1.1 TAP State Variables
4.1.2 TAP Operations
4.1.3 Enclave State, Inputs, and Outputs and the Adversary's State
4.2 Formal Model of a Privileged Adversary
4.2.1 Operations of a TAP Adversary
4.2.2 Observations of a TAP Adversary
4.3 Refinements of the TAP
4.3.1 Refinement Methodology
4.4 Refinement of the TAP: Intel SGX
4.4.1 SGX Overview
4.4.2 SGX Model
4.4.3 SGX Model Refines TAP
4.5 Refinement of the TAP: Sanctum Processor
4.5.1 Sanctum Overview
4.5.2 Sanctum Model
4.5.3 Sanctum Model Refines TAP
4.6 Summary

5 Formal Verification of Secure Remote Execution on Enclave Platforms
5.1 Secure Remote Execution of Enclaves
5.2 Proof Decomposition of SRE
5.2.1 Secure Measurement
5.2.2 Integrity
5.2.3 Confidentiality
5.3 Soundness of SRE Decomposition
5.4 Application of Secure Remote Execution
5.5 Proof of Secure Remote Execution for TAP
5.6 Verification Results
5.6.1 BoogiePL Model Construction
5.6.2 Verification Results
5.7 Summary

II Secure Enclaves: Design and Verification

6 Formalizing Confidentiality
6.1 Modeling Adversary's Effect on Enclave Execution
6.2 Confidentiality
6.3 Page Access Obliviousness
6.4 Summary

7 Moat: Verifying Confidentiality of Enclave's Outputs
7.1 Overview of Moat
7.1.1 Declassification
7.1.2 Assumptions and Limitations
7.2 Proving Confidentiality
7.3 Evaluation
7.3.1 Optimizations
7.3.2 Case Studies
7.4 Related Work
7.5 Summary

8 /Confidential: Scalably Verifying Confidentiality of Enclave's Outputs
8.1 Overview of /Confidential
8.1.1 Verifying Confidentiality
8.1.2 Restricted Interface
8.1.3 Checking IRC
8.2 Decomposing Proof of Confidentiality
8.2.1 WCFI-RW Property of U_M
8.2.2 Correctness of L's API Implementation
8.2.3 Soundness
8.3 Verifying WCFI-RW
8.3.1 Runtime Checks
8.3.2 Static Verifier for WCFI-RW
8.3.3 Optimization to the Proof Obligations
8.3.4 Soundness
8.4 Implementation
8.5 Evaluation
8.6 Related Work
8.7 Summary

9 Ensuring Page Access Obliviousness
9.1 Overview
9.1.1 Threat Model
9.1.2 Challenges in Guaranteeing Page Access Obliviousness
9.1.3 Compilation for Page Access Obliviousness
9.2 PAO-Enforcing Compilation
9.2.1 Obliviating Data Accesses
9.2.2 Stochastic Optimization of Dummy Accesses