Doctor of Philosophy (PhD)
Neural networks enable many exciting technologies and products that analyze and process our data. This data is often privacy-sensitive, and we grant companies access to it because we want to use their services. We have little to no control over what happens to our data after that. Privacy-preserving machine learning provides solutions that allow us to use these services while maintaining the privacy of our data. Homomorphic encryption (HE) is one of the techniques that powers privacy-preserving machine learning. It allows us to perform computation on encrypted data without revealing the input, intermediate, or final results. However, HE comes with several limitations and significant resource overhead.
The most important limitations are a reduced set of operations (HE supports only addition and multiplication) and a bound on the depth of the computation, i.e., the number of consecutive operations needed to complete it. This means we cannot evaluate arbitrary neural networks over encrypted data, since they often rely on unsupported operations or are too deep. Recurrent neural networks (RNNs) suffer especially from the depth limitation: due to their structure, they produce comparatively deep computations.
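To illustrate why recurrence is so costly, the toy model below tracks only multiplicative depth: addition is free, while each multiplication consumes one level, as in leveled HE schemes. This is a hypothetical sketch, not the dissertation's implementation; the class name and recurrence `h_t = W_x * x_t + W_h * h_{t-1}` are illustrative assumptions.

```python
class Ciphertext:
    """Toy stand-in for an HE ciphertext that tracks only multiplicative depth."""
    def __init__(self, depth=0):
        self.depth = depth

    def __add__(self, other):
        # Addition does not consume a level in leveled HE schemes.
        return Ciphertext(max(self.depth, other.depth))

    def __mul__(self, other):
        # Each multiplication consumes one level.
        return Ciphertext(max(self.depth, other.depth) + 1)

def mlp_final_depth(layers):
    # Feedforward net: one multiplication per layer, so depth
    # grows with the number of layers, independent of input length.
    h = Ciphertext(0)
    for _ in range(layers):
        h = h * Ciphertext(0)  # W_i * h
    return h.depth

def rnn_final_depth(timesteps):
    # Recurrent layer: h_t = W_x * x_t + W_h * h_{t-1}. The chain through
    # the hidden state accumulates one multiplication per timestep, so
    # depth grows with the sequence length.
    h = Ciphertext(0)
    for _ in range(timesteps):
        x_term = Ciphertext(0) * Ciphertext(0)  # W_x * x_t
        h = x_term + h * Ciphertext(0)          # + W_h * h_{t-1}
    return h.depth
```

A three-layer feedforward network stays at depth 3, while a single recurrent layer over a 50-step sequence already reaches depth 50, well beyond typical HE parameter budgets.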
In this work, we focus on multiple challenges of neural networks on encrypted data. One of the main challenges common to most architectures is the resource overhead introduced by HE. Since encrypted data is often orders of magnitude larger than plain data, memory becomes a significant bottleneck. We analyze the computation of neural network layers and develop a caching and swapping scheme that allows us to dynamically load and unload data from memory while sacrificing as little time as possible. We further study the challenges specific to RNNs, mainly the computational depth. We find that naïve approaches to RNNs over encrypted data are too deep. To address this, we design and evaluate multiple architectures that feature recurrent components, capturing their strength on sequential data while having a lower depth. To evaluate these models, we design and implement an encrypted neural network runtime based on TensorFlow's XLA (accelerated linear algebra) compiler that allows us to run neural networks over encrypted data with very few extra steps.
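The caching-and-swapping idea can be sketched as a bounded in-memory cache backed by slower storage. This is a minimal LRU-style sketch under stated assumptions; the dissertation's actual scheme exploits the access patterns of neural network layers rather than a generic eviction policy, and the class and parameter names here are illustrative.

```python
from collections import OrderedDict

class CiphertextCache:
    """Keep at most `capacity` ciphertexts in memory; swap the least
    recently used one out to a backing store when full.
    Hypothetical sketch, not the dissertation's implementation."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.store = backing_store   # e.g. a dict standing in for disk
        self.memory = OrderedDict()  # insertion order doubles as LRU order

    def get(self, key):
        if key in self.memory:
            self.memory.move_to_end(key)  # mark as most recently used
            return self.memory[key]
        value = self.store[key]           # swap in from the backing store
        self._insert(key, value)
        return value

    def _insert(self, key, value):
        if len(self.memory) >= self.capacity:
            # Evict the least recently used entry back to the store.
            old_key, old_val = self.memory.popitem(last=False)
            self.store[old_key] = old_val
        self.memory[key] = value
```

With capacity 2, accessing three ciphertexts in a row evicts the oldest one, which is transparently swapped back in on its next access; the real scheme's goal is to schedule these swaps so they overlap with computation as much as possible.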
Podschwadt, Robert, "Privacy-Preserving Deep Learning with Homomorphic Encryption: Addressing Challenges Related to Usability, Memory, and Recurrent Neural Networks." Dissertation, Georgia State University, 2023.
Available for download on Thursday, May 30, 2024