Split Without a Leak: Reducing Privacy Leakage in Split Learning
Nguyen, K., Khan, T. and Michalas, A.
The popularity of Deep Learning (DL) makes protecting the privacy of sensitive data more imperative than ever. As a result, various privacy-preserving techniques have been applied to protect user data in DL. Among them, collaborative learning techniques such as Split Learning (SL) have been used to accelerate the learning and prediction process. Initially, SL was considered a promising approach to data privacy. However, subsequent research has demonstrated that SL is susceptible to many types of attacks and, therefore, cannot serve as a privacy-preserving technique on its own. Meanwhile, countermeasures combining SL with encryption have also been introduced to achieve privacy-preserving deep learning. In this work, we propose a hybrid approach using SL and Homomorphic Encryption (HE). The idea behind it is that the client encrypts the activation map (the output of the split layer between the client and the server) before sending it to the server. Hence, during both forward and backward propagation, the server cannot reconstruct the client’s input data from the intermediate activation map. This improvement is important because it reduces privacy leakage compared to other SL-based works, in which the server can gain valuable information about the client’s input. In addition, on the MIT-BIH dataset, our proposed hybrid approach using SL and HE yields faster training time (about 6 times) and significantly reduced communication overhead (almost 160 times) compared to other HE-based approaches, thereby offering improved privacy protection for sensitive data in DL.
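The abstract does not specify which HE scheme the paper uses or what the server computes on the ciphertexts, so the following is only a minimal sketch of the core idea under stated assumptions: a toy additively homomorphic (Paillier-style) scheme, a linear server layer with integer weights, and illustrative function names (`keygen`, `encrypt`, `he_add`, `he_scale`) that are not taken from the paper. It shows the data flow the abstract describes: the client encrypts the activation map, the server evaluates its layer on ciphertexts without seeing the plaintext, and only the client can decrypt.

```python
import math
import random

# Toy Paillier cryptosystem (demo-sized primes, NOT secure) -- additively
# homomorphic, so a server can evaluate a linear split-layer on the
# encrypted activation map without ever decrypting it.

def keygen(p=5003, q=5009):
    n = p * q
    lam = math.lcm(p - 1, q - 1)      # Carmichael's lambda for n = p*q
    mu = pow(lam, -1, n)              # modular inverse of lambda mod n
    return (n,), (lam, mu, n)         # public key, secret key

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    # Standard Paillier with generator g = n + 1.
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

def he_add(pk, c1, c2):
    (n,) = pk
    return (c1 * c2) % (n * n)        # Enc(m1) * Enc(m2) = Enc(m1 + m2)

def he_scale(pk, c, k):
    (n,) = pk
    return pow(c, k, n * n)           # Enc(m)^k = Enc(k * m)

pk, sk = keygen()

# Client side: activation map from the split layer, encrypted before upload.
activation = [3, 1, 4]                # illustrative values
enc_activation = [encrypt(pk, a) for a in activation]

# Server side: integer weights/bias of the first server layer.  The server
# computes Enc(w . a + b) homomorphically -- it never sees `activation`.
w, b = [2, 5, 7], 10
acc = encrypt(pk, b)
for c, wi in zip(enc_activation, w):
    acc = he_add(pk, acc, he_scale(pk, c, wi))

# Only the client (key holder) can decrypt the server's result.
print(decrypt(sk, acc))               # 2*3 + 5*1 + 7*4 + 10 = 49
```

A real deployment would use a scheme with packed fixed-point arithmetic (e.g. CKKS) to handle floating-point activations efficiently; the toy integer scheme above only illustrates why the server learns nothing about the activation map during its computation.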
19th EAI International Conference on Security and Privacy in Communication Networks (SecureComm’23)
Accepted author manuscript
File Access Level: Open (open metadata and files)
19 Oct 2023