07:30–09:00 | Morning Coffee |
08:45–09:00 | Opening and Welcome |
09:00–09:45 | Keynote I: David Wagner (UC Berkeley) |
Research Opportunities in AI Security |
|
09:45–10:15 | Morning Coffee/Break + Poster Session |
(Extended Abstract) GRACE: Secure Graph Convolutional Network on Vertically Split Data from Sparse Matrix Decomposition
Yu Zheng (University of California, Irvine); Qizhi Zhang (ByteDance); Lichun Li (Ant Group); Kai Zhou (The Hong Kong Polytechnic University); Shan Yin (Ant Group)
(Extended Abstract) Adversarial ML Problems Are Getting Harder to Solve and to Evaluate
Javier Rando (ETH Zurich), Jie Zhang (ETH Zurich); Nicholas Carlini (Google DeepMind); Florian Tramèr (ETH Zurich)
(Extended Abstract) OPA: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning
Harish Karthikeyan (JPMorgan AI Research), Antigoni Polychroniadou (JPMorgan AI Research) | Best Extended Abstract Award
(Extended Abstract) HashMark: Watermarking Tabular/Synthetic Data for Machine Learning via Cryptographic Hash Functions
Harish Karthikeyan (JPMorgan AI Research), Leo DeCastro (JPMorgan AI Research), Antigoni Polychroniadou (JPMorgan AI Research), Manuela Veloso (JPMorgan AI Research); Tucker Balch (Emory University)
(Extended Abstract) Priv-PFL: A Privacy-Preserving and Efficient Personalized Federated Learning Approach
Alireza Aghabagherloo, Roozbeh Sarenche (COSIC, Department of Electrical Engineering, KU Leuven, Belgium); Maryam Zarezadeh (Barkhausen Institut, Dresden, Germany); Bart Preneel (COSIC, Department of Electrical Engineering, KU Leuven, Belgium); Stefan Köpsell (Barkhausen Institut, Dresden, Germany)
(Extended Abstract) ExpProof: Operationalizing Explanations for Confidential Models with ZKPs
Chhavi Yadav (University of California, San Diego); Evan Monroe Laufer, Dan Boneh (Stanford University); Kamalika Chaudhuri (University of California, San Diego)
(Full-length Papers) All full-length papers
10:15–11:00 | Keynote II: Yuan Tian (UCLA) |
Rethinking AI Security from the System Point of View |
|
11:00–11:45 | Keynote III: Anoop Singhal (NIST) |
Modeling and Security Analysis of Attacks on Machine Learning Systems |
|
11:45–13:15 | Lunch |
13:30–14:00 | Session I: GenAI Security (Session Chair: Binghui Zhang) |
13:30: Membership Inference Attacks on Sequence Models
Lorenzo Rossi (CISPA Helmholtz Center for Information Security); Michael Aerni (ETH Zurich), Jie Zhang (ETH Zurich), Florian Tramèr (ETH Zurich) | Best Paper Award
13:40: Enhancing Jailbreak Resistance in Large Language Models Using Model Merge
Saki Hiromi (NTT), Hiroki Kinoshita (NTT), Masanori Yamada (NTT), Takayuki Miura (NTT)
13:50: Blind Baselines Beat Membership Inference Attacks for Foundation Models
Debeshee Das (ETH Zurich), Jie Zhang (ETH Zurich), Florian Tramèr (ETH Zurich)
14:00–15:00 | Afternoon Coffee/Break + Poster Session |
Extended abstracts and full-length papers
15:00–15:40 | Session II: Machine Learning for Security (Session Chair: Yanxue Jia) |
15:00: LAMD: Context-driven Android Malware Detection and Classification with LLMs
Xingzhi Qian (University College London), Xinran Zheng (University College London), Yiling He (University College London); Shuo Yang (University of Hong Kong); Lorenzo Cavallaro (University College London) | Best Paper Award
15:10: Using LLMs to Identify Personal Data Processing in Source Code
Immanuel Kunz (Fraunhofer AISEC), Ching-Yu Kao (Fraunhofer AISEC), Daniel Kowatsch (Fraunhofer AISEC); Jens Hiller (Google Germany), Julian Schütte (Google Germany); Dmitry Prokhorenkov (Technical University Munich); Konstantin Böttinger (Fraunhofer AISEC)
15:20: Which Instructions Matter the Most: A Saliency Analysis of Binary Function Embedding Models
Moritz Dannehl (LMU Munich), Samuel Valenzuela (LMU Munich), Johannes Kinder (LMU Munich)
15:30: Inference Attacks for X-Vector Speaker Anonymization
Luke A. Bauer (University of Florida), Wenxuan Bao (University of Florida), Malvika Jadhav (University of Florida), Vincent Bindschaedler (University of Florida)
15:40–15:50 | Short Break |
15:50–16:20 | Session III: Security for Machine Learning (Session Chair: Guangyu Shen) |
15:50: FedTilt: Towards Multi-Level Fairness-Preserving and Robust Federated Learning
Binghui Zhang (Illinois Institute of Technology), Luis Mares De La Cruz (Illinois Institute of Technology), Binghui Wang (Illinois Institute of Technology)
16:00: Composite Explanation-Aware Attacks
Maximilian Noppel (Karlsruhe Institute of Technology), Christian Wressnegger (Karlsruhe Institute of Technology)
16:10: Impact of Data Duplication on Deep Neural Network-Based Image Classifiers: Robust vs. Standard Models
Alireza Aghabagherloo (COSIC, Department of Electrical Engineering, KU Leuven); Aydin Abadi (Newcastle University); Sumanta Sarkar (University of Essex); Vishnu Asutosh Dasu (Pennsylvania State University); Bart Preneel (COSIC, Department of Electrical Engineering, KU Leuven)
16:20–16:25 | Closing Remarks |
16:25–17:00 | Social Hour |
Deep learning and security have made remarkable progress in recent years. On the one hand, neural networks have been recognized in academia and industry as a powerful tool for security. On the other hand, the security and privacy of deep learning have drawn growing attention, as deep learning has itself become a new attack surface: the security, privacy, fairness, and interpretability of neural networks have all been called into question.
This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security and privacy of deep learning.
DLSP seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):
Deep Learning
Computer Security
We accept two types of submissions:
Submissions in both tracks must be unpublished work. Papers must be formatted for US letter (not A4) paper in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly encouraged to use the latest IEEE S&P conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be written in English and properly anonymized.
For any questions, contact the workshop organizers at dlsp2025@ieee-security.org
All accepted submissions will be presented at the workshop. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.
At least one author of each accepted paper is required to attend the workshop and present the paper.