AISafety 2024: The IJCAI Workshop on Artificial Intelligence Safety 2024
Jungmun Resort Complex, Jeju, South Korea, August 3-5, 2024

Conference website: https://www.aisafetyw.org/
Submission link: https://easychair.org/conferences/?conf=aisafety2024
Submission deadline: May 10, 2024 (extended)
Scope
In recent years, concerns about the risks of Artificial Intelligence (AI) have continued to grow. Safety is becoming increasingly relevant as humans are progressively removed from the decision and control loops of intelligent systems. Indeed, the underlying AI techniques and algorithms, such as Generative AI (GenAI), Large Language Models (LLMs) and Machine Learning (ML), may compromise human values with harmful or untruthful responses. The technical foundations and assumptions on which traditional safety engineering principles are based are inadequate for systems in which such AI algorithms interact with the physical world and with humans at increasingly higher levels of autonomy. We must also consider the connection between the safety challenges posed by present-day AI systems and more forward-looking research focused on more capable AI systems, up to and including Artificial General Intelligence (AGI).
This workshop seeks to explore new ideas on AI safety with particular focus on addressing the following questions:
- How can we engineer trustworthy AI software architectures?
- Do we need to specify and use bounded morality in system engineering to make AI-based systems more ethically aligned?
- What is the status of existing approaches in ensuring AI and ML safety and what are the gaps?
- What safety engineering considerations are required to develop safe human-machine interaction in automated decision-making systems?
- What AI safety considerations and experiences are relevant from industry?
- How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
- How can we develop solid technical visions and paradigm-shift articles about AI safety?
- How do metrics of capability and generality affect the level of risk of a system, and how can trade-offs with performance be found?
- How do features of an AI system, for example ethics, explainability, transparency, and accountability, relate to, or contribute to, its safety?
- How can we evaluate AI safety?
- How can we safeguard GenAI, LLMs, and ML systems?
The main interest of the workshop is a new perspective on system engineering in which multiple disciplines, such as AI and safety engineering, are considered holistically, together with ethical and legal issues, to build trustworthy intelligent autonomous machines.
List of Topics
We invite theoretical, experimental and position papers covering any aspect of AI Safety including, but not limited to:
- Safety in AI-based system architectures
- Continuous V&V and predictability of AI safety properties
- Runtime monitoring and (self-)adaptation of AI safety
- Accountability, responsibility and liability of AI-based systems
- Explainable AI and interpretable AI
- Detection and mitigation of AI safety risks
- Avoiding negative side effects in AI-based systems
- Role and effectiveness of oversight: corrigibility and interruptibility
- Loss of values and the catastrophic forgetting problem
- Confidence, self-esteem and the distributional shift problem
- Safety of AGI systems and the role of generality
- Reward hacking and training corruption
- Self-explanation, self-criticism and the transparency problem
- Human-machine interaction safety
- Regulating AI-based systems: safety standards and certification
- Human-in/on/out of-the-loop and the scalable oversight problem
- Mixed-initiative control frameworks
- Evaluation platforms for AI safety
- AI safety education and awareness
- Experiences in AI-based safety-critical systems, in sectors such as industrial, health, automotive, aerospace, and robotics, among others
- Approaches to complying with recent AI regulations and standards, focusing on the validation of safety-related properties such as robustness, stability, reliability, and controllability
Attendance Policy
AISafety is strictly planned and MUST be conducted as an IN-PERSON-ONLY event.
Since online streaming cannot be guaranteed and the overhead required to support a hybrid workshop mode is not acceptable, presenters should not consider virtual presentation a possible means of participation and should therefore plan for on-site, face-to-face attendance.
In exceptional cases, the organizing committee shall decide whether additional means of participation are acceptable (e.g., presentation by a third party attending the conference).
All authors submitting a paper (Full/Position) or technical talk proposal implicitly agree to this in-person-only attendance policy.
Submission Guidelines
You are invited to submit:
- Full technical papers (7-9 pages, including references, annexes, appendices, or any other relevant material),
- Proposals for technical talks (up to a one-page abstract, including a short bio of the main speaker), without an associated paper,
- Position papers (5-7 pages, including references, annexes, appendices, or any other relevant material).
Manuscripts must be submitted as PDF files via the EasyChair online submission system: https://easychair.org/conferences/?conf=aisafety2024
Please format your paper according to the CEUR formatting instructions (two-column format). The CEUR author kit can be downloaded from: http://ceur-ws.org/Vol-XXX/CEURART.zip
Papers will be peer-reviewed by the Program Committee (2-3 reviewers per paper). The workshop follows a single-blind reviewing process; however, anonymized submissions will also be accepted.
The workshop proceedings will be published on CEUR-WS (http://ceur-ws.org/). CEUR-WS is “archival” in the sense that a paper cannot be removed once it is published. Authors retain the copyright of their papers under CC BY 4.0; in this respect, CEUR-WS is similar to arXiv. In any case, authors of accepted papers can opt out and decide not to include their paper in the proceedings. We will inform the authors about the procedure in due course.
We are happy to receive papers that were not accepted for IJCAI, and we welcome the review comments if the authors wish to send them as additional material (note: reviews/comments will not be included in the submitted/final manuscript).
For further and up-to-date information on submission guidelines, please see the website: https://www.aisafetyw.org/
Important Dates
- Paper submission: May 10, 2024 – midnight, AOE time (extended)
- Acceptance notification: June 4, 2024 – midnight, AOE time (extended)
- Camera-ready submission: June 18, 2024 – midnight, AOE time (extended)
Committees
Organizing committee
- Gabriel Pedroza, Ansys, France
- Xiaowei Huang, University of Liverpool, UK
- Xin Cynthia Chen, ETH Zurich, Switzerland
- Fabio Arnez, CEA LIST, France
Steering committee
- Huascar Espinoza, Chips JU, Belgium
- José Hernández-Orallo, Universitat Politècnica de València, Spain
- Mauricio Castillo-Effen, Lockheed Martin, USA
- Richard Mallah, Future of Life Institute, USA
- John McDermid, University of York, UK
- Andreas Theodorou, Universitat Politècnica de Catalunya, Spain
Program Committee
- Please see the website: https://www.aisafetyw.org/
Contact
All questions about submissions should be emailed to: aisafety2024 at easychair dot org