8th Deep Learning Security
and Privacy Workshop
co-located with the 46th IEEE Symposium on Security and Privacy
May 15, 2025

Research Opportunities in AI Security
David Wagner, University of California at Berkeley

Abstract: Generative AI is opening new opportunities for making computing more accessible and more efficient.  How can the security community get ahead of the risks?  I will describe some security challenges where there might be opportunities for the research community to make contributions: prompt injection, system-message following, secure agents and workflows, defense against poisoning, using LLMs for security, and more.

Bio: David Wagner is Professor of Computer Science at the University of California at Berkeley, working in the area of computer security.  He has published over 100 peer-reviewed papers in the scientific literature.  His research has analyzed and contributed to the security of cellular networks, 802.11 wireless networks, electronic voting systems, and other widely deployed systems.  He currently serves as a member of the Election Assistance Commission's Technical Guidance Development Committee, the federal advisory board charged with helping to draft future voting standards.

Rethinking AI Security from the System Point of View
Yuan Tian, University of California, Los Angeles

Abstract: Artificial intelligence applications are no longer confined to discrete models running in isolation. With the advancement of GenAI, AI applications have become end-to-end systems that ingest live data, invoke complex toolchains, and operate inside intertwined virtual and physical environments. Yet most security guidance still targets individual components, leaving systemic blind spots that adversaries eagerly exploit. This talk focuses on AI system security beyond model-centric hardening. In particular, I will discuss two main points: securing the entire AI supply-chain pipeline, and securing AI’s interactions with cyber and physical environments.

Bio: Yuan Tian is an Associate Professor of Electrical and Computer Engineering, Computer Science, and the Institute for Technology, Law and Policy (ITLP) at the University of California, Los Angeles. Her research focuses on security and privacy, with an emphasis on their interactions with computer systems, machine learning, and human-computer interaction. Her research has real-world impact: her countermeasures and design changes have been integrated into widely used platforms (such as Android, Chrome, and Azure), and her findings have informed security recommendations by standard organizations like the Internet Engineering Task Force (IETF).

Modeling and Security Analysis of Attacks on Machine Learning Systems
Anoop Singhal, National Institute of Standards and Technology

Abstract: The last several years have witnessed a rapidly increasing use of machine learning (ML) systems across multiple industry sectors (e.g., self-driving cars, text translation, and speech understanding). Evasion attacks and data poisoning attacks can make ML systems misbehave. Evasion attacks craft adversarial examples after the training phase so that models produce incorrect outputs. Data poisoning attacks modify the training data so that the trained model is maliciously altered. However, it is widely recognized that existing security analysis frameworks and techniques, which were developed to analyze enterprise (software) systems and networks, are not well suited to analyzing ML systems: ML systems exhibit new kinds of causality relationships that current approaches for security analysis cannot handle.  In this presentation, we will present new techniques for modeling attacks on ML systems using causality graphs. These graphs capture the data, model, and library dependencies in a specific ML system. We will also discuss some of the challenges in designing mitigation strategies against attacks on ML systems.
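
To make the distinction concrete, here is a minimal sketch of an evasion attack using the fast gradient sign method (FGSM), one standard way to craft adversarial examples. This illustrative PyTorch snippet is not drawn from the talk itself; the model, inputs, and epsilon budget are placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_evasion(model, x, labels, epsilon=0.03):
        # Evasion attack: perturb inputs at inference time (placeholder model and data).
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), labels)   # loss w.r.t. the true labels
        loss.backward()
        # Step in the direction that increases the loss, bounded by an L-infinity budget.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()      # keep pixels in a valid range

A poisoning attack, by contrast, tampers with the training data or labels before training rather than perturbing inputs at inference time.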

Bio: Dr. Anoop Singhal is currently a Senior Computer Scientist and Program Manager in the Computer Security Division at the National Institute of Standards and Technology (NIST) in Gaithersburg, MD. He has several years of research experience at NIST, George Mason University, and AT&T Bell Labs. He received his Ph.D. in Computer Science from Ohio State University, Columbus, Ohio. His research interests are in system security, active cyber defense, data analytics for cyber security, cloud computing security, and machine learning systems. He is a member of the ACM, a senior member of the IEEE, and has co-authored over 50 technical papers in leading conferences and journals. He has taught several graduate-level courses in Computer Science as an adjunct faculty member and has given talks at RSA, IEEE, and ACM conferences. He holds two patents in the area of attack graphs and has co-edited a book on Secure Cloud Computing and Network Security Metrics.

Program (Tentative) - May 15, 2025

The following times are in the Pacific Time (PT) zone.
07:30–09:00 Morning Coffee
08:45–09:00 Opening and Welcome
09:00–09:45 Keynote I: David Wagner (UC Berkeley)
Research Opportunities in AI Security
09:45–10:15 Morning Coffee/Break + Poster Session
(Extended Abstract) GRACE: Secure Graph Convolutional Network on Vertically Split Data from Sparse Matrix Decomposition
Yu Zheng (University of California, Irvine); Qizhi Zhang (ByteDance); Lichun Li (Ant Group); Kai Zhou (The Hong Kong Polytechnic University); Shan Yin (Ant Group)
(Extended Abstract) Adversarial ML Problems Are Getting Harder to Solve and to Evaluate
Javier Rando (ETH Zurich), Jie Zhang (ETH Zurich); Nicholas Carlini (Google DeepMind); Florian Tramèr (ETH Zurich)
(Extended Abstract) OPA: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning
Harish Karthikeyan (JPMorgan AI Research), Antigoni Polychroniadou (JPMorgan AI Research)
Best Extended Abstract Award
(Extended Abstract) HashMark: Watermarking Tabular/Synthetic Data for Machine Learning via Cryptographic Hash Functions
Harish Karthikeyan (JPMorgan AI Research), Leo DeCastro (JPMorgan AI Research), Antigoni Polychroniadou (JPMorgan AI Research), Manuela Veloso (JPMorgan AI Research); Tucker Balch (Emory University)
(Extended Abstract) Priv-PFL: A Privacy-Preserving and Efficient Personalized Federated Learning Approach
Alireza Aghabagherloo, Roozbeh Sarenche (COSIC, Department of Electrical Engineering, KU Leuven, Belgium); Maryam Zarezadeh (Barkhausen Institut, Dresden, Germany); Bart Preneel (COSIC, Department of Electrical Engineering, KU Leuven, Belgium); Stefan Köpsell (Barkhausen Institut, Dresden, Germany)
(Extended Abstract) ExpProof: Operationalizing Explanations for Confidential Models with ZKPs
Chhavi Yadav (University of California, San Diego); Evan Monroe Laufer, Dan Boneh (Stanford University); Kamalika Chaudhuri (University of California, San Diego)
(Full-length Papers) All full-length papers
10:15–11:00 Keynote II: Yuan Tian (UCLA)
Rethinking AI Security from the System Point of View
11:00–11:45 Keynote III: Anoop Singhal (NIST)
Modeling and Security Analysis of Attacks on Machine Learning Systems
11:45–13:15 Lunch
13:30–14:00 Session I: GenAI Security (Session Chair: Binghui Zhang)
13:30: Membership Inference Attacks on Sequence Models
Lorenzo Rossi (CISPA Helmholtz Center for Information Security); Michael Aerni (ETH Zurich), Jie Zhang (ETH Zurich), Florian Tramèr (ETH Zurich)
Best Paper Award
13:40: Enhancing Jailbreak Resistance in Large Language Models Using Model Merge
Saki Hiromi (NTT), Hiroki Kinoshita (NTT), Masanori Yamada (NTT), Takayuki Miura (NTT)
13:50: Blind Baselines Beat Membership Inference Attacks for Foundation Models
Debeshee Das (ETH Zurich), Jie Zhang (ETH Zurich), Florian Tramèr (ETH Zurich)
14:00–15:00 Afternoon Coffee/Break + Poster
Extended abstracts and full-length papers
15:00–15:40 Session II: Machine Learning for Security (Session Chair: Yanxue Jia)
15:00: LAMD: Context-driven Android Malware Detection and Classification with LLMs
Xingzhi Qian (University College London), Xinran Zheng (University College London), Yiling He (University College London); Shuo Yang (University of Hong Kong); Lorenzo Cavallaro (University College London)
Best Paper Award
15:10: Using LLMs to Identify Personal Data Processing in Source Code
Immanuel Kunz (Fraunhofer AISEC), Ching-Yu Kao (Fraunhofer AISEC), Daniel Kowatsch (Fraunhofer AISEC); Jens Hiller (Google Germany), Julian Schütte (Google Germany); Dmitry Prokhorenkov (Technical University Munich); Konstantin Böttinger (Fraunhofer AISEC)
15:20: Which Instructions Matter the Most: A Saliency Analysis of Binary Function Embedding Models
Moritz Dannehl (LMU Munich), Samuel Valenzuela (LMU Munich), Johannes Kinder (LMU Munich)
15:30: Inference Attacks for X-Vector Speaker Anonymization
Luke A. Bauer (University of Florida), Wenxuan Bao (University of Florida), Malvika Jadhav (University of Florida), Vincent Bindschaedler (University of Florida)
15:40–15:50 Short Break
15:50–16:20 Session III: Security for Machine Learning (Session Chair: Guangyu Shen)
15:50: FedTilt: Towards Multi-Level Fairness-Preserving and Robust Federated Learning
Binghui Zhang (Illinois Institute of Technology), Luis Mares De La Cruz (Illinois Institute of Technology), Binghui Wang (Illinois Institute of Technology)
16:00: Composite Explanation-Aware Attacks
Maximilian Noppel (Karlsruhe Institute of Technology), Christian Wressnegger (Karlsruhe Institute of Technology)
16:10: Impact of Data Duplication on Deep Neural Network-Based Image Classifiers: Robust vs. Standard Models
Alireza Aghabagherloo (COSIC, Department of Electrical Engineering, KU Leuven); Aydin Abadi (Newcastle University); Sumanta Sarkar (University of Essex); Vishnu Asutosh Dasu (Pennsylvania State University); Bart Preneel (COSIC, Department of Electrical Engineering, KU Leuven)
16:20–16:25 Closing Remarks
16:25–17:00 Social Hour

Call for Papers

Important Dates

  • Paper submission deadline (extended): Feb 14, 2025, 11:59 PM (AoE, UTC-12)
  • Acceptance notification: Mar 14, 2025
  • Camera-ready due: April 18, 2025
  • Workshop: May 15, 2025

Overview

Deep learning and security have made remarkable progress in recent years. On the one hand, neural networks have been recognized as a powerful tool for security in academia and industry. On the other hand, the security and privacy of deep learning have gained growing attention, since deep learning has become a new attack surface. The security, privacy, fairness, and interpretability of neural networks have been called into question.

This workshop strives to bring these two complementary views together by (a) exploring deep learning as a tool for security and (b) investigating the security and privacy of deep learning.

Topics of Interest

DLSP seeks contributions on all aspects of deep learning and security. Topics of interest include (but are not limited to):

Deep Learning

  • Security and privacy aspects of diverse machine learning systems, including but not limited to federated learning, recommender systems, reinforcement learning, generative AI (agents, RAG, and so on), graph neural networks, and foundation models
  • Jailbreak, prompt injection attacks, and their defenses
  • Poisoning and backdoor attacks, adversarial examples, privacy attacks, and their defenses
  • Other topics such as toxicity, ethics, fairness, deepfake, copyright, machine unlearning, explainability, watermarking, AIGC detection, and so on

Computer Security

  • Computer forensics
  • Spam detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Malware identification, analysis, and similarity
  • Data anonymization/de-anonymization
  • Security in social networks
  • Vulnerability discovery

Submission Guidelines

We accept two types of submissions:

  • Track 1: Archival, full-length papers. Submissions in this track can be up to six pages, plus additional references and appendices (submissions shorter than four pages are encouraged to be submitted to Track 2; submissions of four, five, or six pages will be treated equally regardless of length). Accepted papers in this track will be included in the IEEE workshop proceedings.
  • Track 2: Non-archival, extended abstracts. For this track, we encourage submissions that are forward-looking and explore visionary ideas. We allow concurrent submissions for this track, but authors are responsible for ensuring compliance with the policies of other venues. Submissions in this track can be up to three pages, plus additional references and appendices. Accepted papers in this track will NOT be included in the IEEE workshop proceedings, but they will be publicly available on this workshop website.

Submissions in both tracks should be unpublished work. Papers must be formatted for US letter (not A4) size paper. The text must be formatted in a two-column layout, with columns no more than 9.5 in. tall and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are strongly encouraged to use the latest IEEE S&P conference proceedings templates. Failure to adhere to the page limit and formatting requirements is grounds for rejection without review. Submissions must be in English and properly anonymized.

For any questions, contact the workshop organizers at dlsp2025@ieee-security.org.

Best Paper/Extended Abstract Awards

One archival, full-length paper in Track 1 will be selected for the Best Paper Award, and one extended abstract in Track 2 will be selected for the Best Extended Abstract Award.

Presentation Form

All accepted submissions will be presented at the workshop. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review score and novelty. Nonetheless, all accepted papers should be considered of equal importance.

One author of each accepted paper is required to attend the workshop and present the paper.

Submission Site

https://hotcrp.dlsp2025.ieee-security.org/

Committee

Program Chairs

Steering Committee

Program Committee

  • Álvaro A. Cárdenas, University of California, Santa Cruz
  • Jihye Choi, University of Wisconsin-Madison
  • Shang-Tse Chen, National Taiwan University
  • Minghong Fang, University of Louisville
  • Peng Gao, Virginia Tech
  • Wenbo Guo, UCSB
  • Jingxuan He, UC Berkeley
  • Xinlei He, Hong Kong University of Science and Technology (Guangzhou)
  • Yuan Hong, University of Connecticut
  • Sanghyun Hong, Oregon State University
  • Yufei Han, INRIA
  • Hyungjoon Koo, Sungkyunkwan University
  • Pavel Laskov, University of Liechtenstein
  • Chengzhi Mao, Google
  • Luyao Niu, University of Washington
  • Erwin Quiring, Ruhr University Bochum
  • Kevin Roundy, Andromeda Security
  • Dongdong She, HKUST
  • Guangyu Shen, Purdue University
  • Fnu Suya, University of Tennessee, Knoxville
  • Guanhong Tao, University of Utah
  • Sai Deep Tetali, Meta Platforms Inc
  • Binghui Wang, IIT
  • Tianhao Wang, University of Virginia
  • Fan Wu, University of Illinois Urbana-Champaign
  • Emily Wenger, Duke University
  • Zhuo Zhang, Purdue University