Deep learning and security have made remarkable progress in recent years. On the one hand, neural networks have been recognized as a powerful tool for security in academia and industry. On the other hand, the security and privacy of deep learning have gained growing attention, since deep learning has become a new attack surface. The security, privacy, fairness, and interpretability of neural networks have been called into question.
This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security and privacy of deep learning.
DLSP seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):
Deep Learning
Computer Security
We accept two types of submissions:
Submissions in both tracks should be unpublished work. Papers must be formatted for US letter (not A4) paper size. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly encouraged to use the latest IEEE S&P Conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be in English and properly anonymized.
For any questions, contact the workshop organizers at
All accepted submissions will be presented at the workshop. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.
One author of each accepted paper is required to attend the workshop and present the paper.