Author ORCID Identifier

0000-0002-9128-8454

Date of Award

12-11-2023

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Computer Science

First Advisor

Zhipeng Cai

Second Advisor

Wei Li

Third Advisor

Yingshu Li

Fourth Advisor

Yan Huang

Abstract

With the proliferation of multimedia data and the advent of the deep learning era, many multimedia-oriented applications have emerged, including face recognition, automated retail, autonomous driving, intelligent medical healthcare, and audio-visual speech recognition. However, these deep learning models face a serious risk of data privacy leakage when processing multimedia data. For example, malicious attackers can exploit deep learning techniques to infer sensitive information from eavesdropped multimedia data, or steal historical training data through membership inference attacks. Although some privacy-preserving deep learning approaches have been investigated, many limitations remain to be overcome. It is still an open problem to design privacy-preserving deep learning mechanisms for different application scenarios that protect individuals' privacy while maintaining the performance of deep learning models.

In this dissertation, we investigate a series of mechanisms for multimedia data privacy protection in deep learning applications. First, we propose an audio-visual autoencoding scheme that achieves visual privacy protection, visual quality preservation, and efficient video transmission. Second, we propose a differentially private deep learning model that balances data privacy against the utility (e.g., accuracy) of multi-label image recognition by leveraging a differential privacy mechanism with bounded global sensitivity and incorporating a regularization term into the loss function. Third, we propose a differentially private correlated representation learning model that accomplishes privacy-preserving multimodal sentiment analysis by combining a correlated representation learning scheme with a differential privacy protection scheme. In particular, a predetermined correlation factor flexibly adjusts the expected correlation among the learned representations.
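The idea of a differential privacy mechanism with bounded global sensitivity can be illustrated with a standard DP-SGD-style update: per-example gradients are clipped to a fixed norm (bounding sensitivity), then calibrated Gaussian noise is added. This is a minimal generic sketch of that well-known technique, not the dissertation's actual model; the function name and parameters here are illustrative assumptions.

```python
import numpy as np

def dp_gaussian_update(per_example_grads, clip_norm=1.0, sigma=1.0, rng=None):
    """Sketch of a differentially private gradient step.

    Each per-example gradient is clipped to L2 norm <= clip_norm,
    which bounds the global sensitivity of the averaged gradient;
    Gaussian noise scaled to that sensitivity is then added.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation is calibrated to the per-example
    # sensitivity clip_norm / batch_size.
    noise = rng.normal(0.0, sigma * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return avg + noise
```

Larger `sigma` yields stronger privacy but noisier updates, which is exactly the privacy-utility tradeoff the abstract refers to.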

Finally, we propose future research topics to complete the dissertation. The first topic focuses on multi-sensor data privacy protection while considering certified deep learning performance. The second topic studies model privacy protection to prevent side-channel attacks from inferring the architecture of deep neural networks.

DOI

https://doi.org/10.57709/36372075
