Doctor of Philosophy in Business Administration (PhD)
Recent progress in deep learning has enabled applications in many areas, such as business, security, and science, that could impact our lives. Despite these advances, deep neural network models have been shown to be vulnerable to adversarial attacks and to lack interpretability in their predictions. It is therefore crucial to develop robust and interpretable deep learning models and algorithms that address these issues.
In this dissertation, we propose a series of algorithms for delivering robust and interpretable deep learning methods. First, we study how to defend against adversarial attacks with a purification-based algorithm called Defense-VAE. Second, we propose GDPA, a patch attack algorithm that can be readily used in adversarial training; with it, we can train deep learning models that are robust to patch attacks. Third, we propose NICE, an interpretation algorithm that learns sparse masks on input images, and we show how this interpretation algorithm can be used for semantic compression of images. Fourth, we apply NICE to brain MRI data for schizophrenia discrimination, detecting the regions of the brain that are important for this task. Lastly, we propose the PSP algorithm, which applies a parameter-wise smooth policy to the PPO algorithm to improve the performance and robustness of reinforcement learning (RL) agents.
Li, Xiang, "Towards Robust and Interpretable Deep Learning." Dissertation, Georgia State University, 2022.