Doctor of Philosophy (PhD)
In this dissertation, we investigate privacy protection schemes for visual data against deep-learning-based computer vision models. Neural-network image classifiers have recently outperformed most traditional models. However, as a side effect of this rise in capability, malicious use of artificial intelligence models facilitates the leakage of sensitive information from private data. We propose a series of mechanisms to protect the privacy and sensitive information contained in visual data. These privacy protection algorithms encrypt information in raw image data with only a trivial sacrifice in image quality to human observers.

First, we propose an information encryption model for general image data against state-of-the-art image classification models. The proposed model, the Pivot Pixel Noise Generator (PPNG) with Particle Swarm Optimization (PSO), generates noise at a small number of pixel locations in the original image, providing privacy protection by preventing the target machine-learning-powered computer vision models from classifying the data into its true label category.

Furthermore, we propose a privacy protection model specialized for face image data. To protect identity information in images, we propose a Sensitivity Map Noise-Adding (SMNA) model based on generative adversarial networks (GANs) that protects face photos against malicious use of face recognition models.

Finally, we propose FaceAdvGAN, an identity privacy protection model for face data with better effectiveness and higher efficiency. We use a dataset of adversarial examples to train a more advanced model suitable for industrial use. The generator of FaceAdvGAN learns to transform a sample from the distribution of real face images to its corresponding point in the distribution of adversarial examples.
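As a rough illustration of the PPNG idea, searching for a small set of pixel perturbations that degrades a model's confidence in the true label, the following sketch runs a basic particle swarm over k pixel (row, column, value) triples. The toy classifier, the fitness function, and all hyperparameters here are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def toy_classifier(img):
    # Stand-in for a target vision model (an assumption for illustration):
    # confidence that the image belongs to the "bright" class,
    # driven entirely by mean pixel intensity.
    return 1.0 / (1.0 + np.exp(-20.0 * (img.mean() - 0.5)))

def pso_sparse_attack(img, model, k=4, n_particles=20, iters=40, seed=0):
    # Search for k pixel (row, col, new_value) triples that minimize the
    # model's confidence in the true label, via a basic particle swarm.
    rng = np.random.default_rng(seed)
    h, w = img.shape
    dim = 3 * k                          # each pixel: row, col, value, all in [0, 1)
    pos = rng.random((n_particles, dim))
    vel = np.zeros_like(pos)

    def apply_noise(p):
        out = img.copy()
        for i in range(k):
            r = int(p[3 * i] * h)        # decode location from [0, 1) coordinates
            c = int(p[3 * i + 1] * w)
            out[r, c] = p[3 * i + 2]     # overwrite the chosen pixel
        return out

    def fitness(p):
        return model(apply_noise(p))     # lower confidence in true label is better

    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 0.999)
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return apply_noise(gbest)

img = np.full((8, 8), 0.7)               # a uniformly "bright" toy image
adv = pso_sparse_attack(img, toy_classifier)
# At most k pixels differ from img, yet the model's confidence drops.
```

Against a real classifier, the fitness would query the probability of the true label, and image-quality constraints would bound the number and magnitude of the perturbed pixels.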
FaceAdvGAN can generate noise from knowledge of the image to be protected alone. The generated noise achieves effectiveness similar to that of ordinary adversarial examples and thus provides effective and efficient protection against state-of-the-art face recognition models.
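The amortization idea behind FaceAdvGAN (train once on precomputed adversarial examples, then produce protective noise for a new image in a single forward pass) can be illustrated with a deliberately simplified stand-in: a linear "generator" fit by gradient descent to map images to their adversarial noise. The synthetic data, the linear model, and the hidden "attack" matrix are all assumptions for illustration; the dissertation's actual model is a GAN.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 16, 400                           # toy flattened-image dimension and dataset size
X = rng.random((n, d))                   # stand-in "face images" (assumption)
T = 0.05 * rng.standard_normal((d, d))   # hidden process that produced the dataset's noise
N = X @ T                                # precomputed adversarial noise: adv_example = X + N

# A one-layer linear "generator" trained by gradient descent to map an
# image directly to its adversarial noise (the real model is a GAN).
W = np.zeros((d, d))
lr = 0.2
for _ in range(500):
    grad = X.T @ (X @ W - N) / n         # gradient of the mean squared error
    W -= lr * grad

def generate_noise(x):
    # Deployment: the generator needs only the image itself,
    # with no per-image iterative attack.
    return x @ W

x_new = rng.random(d)                    # an unseen image
err = np.linalg.norm(generate_noise(x_new) - x_new @ T)
```

The design point this mirrors is efficiency: iterative attacks pay an optimization cost per image, whereas a trained generator moves that cost to training time, which is what makes the approach attractive for industrial use.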
Yang, Jishen, "Privacy Protection For Visual Data Against Deep Learning Based Computer Vision Models." Dissertation, Georgia State University, 2021.
Available for download on Saturday, December 03, 2022