ARTMAN 2024: 2nd Workshop on Recent Advances in Resilient and Trustworthy MAchine learning-driveN systems
Waikiki, HI, United States, December 9, 2024
Conference website | https://artman-workshop.gitlab.io/ |
Submission link | https://easychair.org/conferences/?conf=artman2024 |
Submission deadline | September 15, 2024 |
This workshop aims to bring together academic researchers and industrial practitioners from different domains with diverse expertise (mainly security & privacy and machine learning, but also from application domains) to collectively explore and discuss topics around resilient and trustworthy machine learning-powered applications and systems, share their views, experiences, and lessons learned, and provide their insights and perspectives, so as to converge on a systematic approach to securing them. One of the ultimate objectives, which deserves a series of workshops to achieve, is to foster close collaboration between researchers and practitioners to improve the security, privacy, and trust of ML applications in heterogeneous and complex systems, such as cyber-physical systems and intelligent manufacturing systems. On the one hand, it is important for academic researchers to practically specify threat models in terms of attacker intent, objectives, skills (knowledge, capabilities), and strategies (taking cost factors into account). For example, an attacker may employ a simple yet effective data poisoning method, rather than gradient computations, to evade ML-based anomaly detection systems. On the other hand, practitioners should be strongly encouraged to share their observations and insights from the development and deployment of production-grade AI systems (generally called intelligent systems), most of which are invisible or closed. This can help academics understand how real-life AI systems normally work and set up more realistic assumptions to develop ML security research and address real-world concerns.
The results and impacts of this workshop are expected to go beyond the research community, hopefully providing valuable findings and recommendations to telecommunications stakeholders, standards-developing organizations, and government sectors.

Without enforcing strong limitations on the use cases in which AI/ML systems may be deployed, we encourage contributions and discussions both foundational to ML systems and applied, with specific interest in self-driven networks, digital twins, large language models, and healthcare AI. This workshop also solicits contributions on applying AI/ML algorithms, especially knowledge-informed ones, to improve resilience and trust in such scenarios.
Submission Guidelines
- Submissions should be 6-10 pages excluding references and appendices, using the double-column IEEE template with \documentclass[conference,compsoc]{IEEEtran}. Up to 5 additional pages can be used for references and well-referenced appendices; note that reviewers are not expected to read these appendices.
- All submissions must be anonymous, i.e., author names and affiliations should not be included. Authors may cite their own work but must do so in the third person.
- Accepted workshop papers will be published by IEEE Computer Society Conference Publishing Services (CPS), see below.
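For reference, a minimal skeleton matching the stated formatting requirement could look like the following (the title, abstract, and section content are placeholders; the anonymized review version should omit the author block):

```latex
\documentclass[conference,compsoc]{IEEEtran}

\begin{document}

% Placeholder title; omit \author{...} for the anonymous submission
\title{Anonymized Submission Title}
\maketitle

\begin{abstract}
Abstract text goes here.
\end{abstract}

\section{Introduction}
% Main text: 6--10 pages excluding references and appendices,
% plus up to 5 pages of references and well-referenced appendices.

\end{document}
```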
Submission Link
https://easychair.org/conferences/?conf=artman2024
List of Topics
- Threat modeling and risk assessment of ML systems and applications in intelligent systems, including but not limited to anomaly detection, failure prediction, root cause analysis, and incident diagnosis
- Data-centric attacks and defenses of ML systems and applications in intelligent systems, such as model evasion via targeted perturbations of test samples and data poisoning of training examples
- Adversarial machine learning, including adversarial examples of input data and adversarial learning algorithms developed for intelligent systems
- ML robustness: testing, simulation, verification, validation, and certification of the robustness of ML pipelines (not only ML algorithms and models) in intelligent systems, including but not limited to data-centric analytics, model-driven methods, and hybrid methods
- AI system safety: dependability topics related to AI system development and deployment environments, including hardware, ML platforms and frameworks, and software
Committees
Program Committee
- Muhamad Erza Aminanto, Monash University, Indonesia
- Laurent Bobelin, INSA Centre Val de Loire, France
- Sajjad Dadkhah, University of New Brunswick, Canada
- Doudou Fall, Ecole Supérieure Polytechnique, Cheikh Anta Diop University, Senegal
- Joaquin Garcia-Alfaro, Telecom SudParis, Institut Polytechnique de Paris, France
- Pierre-François Gimenez, CentraleSupélec, France
- Yufei Han, Inria, France
- Frédéric Majorczyk, DGA, France
- Ikuya Morikawa, Fujitsu, Japan
- Antonio Muñoz, University of Malaga, Spain
- Mehran Alidoost Nia, Shahid Beheshti University, Iran
- Misbah Razzaq, INRAE, France
- Balachandra Shanabhag, Cohesity, USA
- Toshiki Shibahara, NTT, Japan
- Pierre-Martin Tardif, Université de Sherbrooke, Canada
- Fredrik Warg, RISE Research Institutes of Sweden
- Akira Yamada, Kobe University, Japan
Organizing committee
- Gregory Blanc, Telecom SudParis, Institut Polytechnique de Paris, France
- Takeshi Takahashi, National Institute of Information and Communications Technology, Japan
- Zonghua Zhang, CRSC R&D Institute Group Co. Ltd., China
Publication
Accepted papers will be published by IEEE Computer Society Conference Publishing Services (CPS) and will appear in the Computer Society Digital Library and IEEE Xplore® in an ACSAC Workshops 2024 volume alongside the main ACSAC 2024 proceedings. ACSAC is currently transitioning to technical sponsorship by the IEEE Computer Society's Technical Community on Security and Privacy (TCSP); approval is expected before the proceedings are compiled.
Contact
Further details about the workshop can be found on the workshop website (https://artman-workshop.gitlab.io/) or by contacting the organizers at their professional addresses.
Sponsors
This workshop is co-located with the ACSAC 2024 conference and is partially supported by the GRIFIN project (ANR-20-CE39-0011).