The 1st International Workshop on Assessment with New methodologies, Unified Benchmarks, and environments, of Intrusion detection and response Systems (ANUBIS) will take place in Toulouse, France, in September 2025. ANUBIS is co-located with the 30th European Symposium on Research in Computer Security (ESORICS 2025).

In the face of the enormous volume of publications in the field of intrusion detection and response, and the lack of rigorous evaluation methodologies for these (increasingly AI-based) methods, reproducibility is close to impossible. To address this issue, ANUBIS offers researchers from different domains and communities the opportunity to present and discuss their evaluation methodology practices.

Evaluation is a fundamentally cross-cutting topic, and multidisciplinary expertise covering cybersecurity goals, technical domain constraints, and machine learning components is necessary to achieve fair, explainable, and trustworthy evaluation. We are therefore looking for submissions that deal with methods, tools, and techniques for evaluating security measures that aim to protect (computer) systems against intrusions. We welcome original papers from researchers and practitioners with various backgrounds, such as security and privacy (incl. code auditing or penetration testing), formal methods, experimental platforms (incl. digital twins), machine learning, and data mining.

Topics of Interest

ANUBIS aims to bring together scientists working on new and better ways to evaluate intrusion detection and response systems used in various environments (IT, OT, IoT, and 5G/6G) and relying on various data sources (e.g., radio, system, and network). We invite researchers and practitioners to submit original papers focusing on:

  • Threat data collection software and methods
  • Evaluation of current and new security datasets
  • Privacy-preserving dataset collection
  • AI for synthetic data generation (legitimate, malicious, and mixed workloads)
  • Data representation for security
  • Methodologies, benchmarks, metrics, formal methods, and tools for the evaluation of datasets or security tools
  • Evaluation in dynamic environments and concept drift analysis
  • Platforms, learning environments, digital twins, and software for reproducible experiments
  • Evaluation of AI approaches for intrusion detection and response, such as reinforcement learning and federated learning

Submission Guidelines

The workshop accepts original research work and work-in-progress, not substantially overlapping with previous publications or concurrent submissions, as either:

  • long papers: at most 16 pages (using 10-point font), excluding the bibliography and well-marked appendices, or
  • short papers: at most 8 pages (using 10-point font), excluding the bibliography and well-marked appendices.

Papers must follow the LNCS template from the time they are submitted. ANUBIS follows a double-blind review process, and all papers that are not desk-rejected will be reviewed by two to three experts.
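
For authors unfamiliar with the format, the sketch below is a minimal, illustrative skeleton using the standard Springer llncs document class; the anonymized author fields reflect the double-blind requirement, and the placeholder title, section, and bibliography file names are assumptions for illustration, not workshop-mandated values.

    \documentclass{llncs}          % Springer LNCS class used for ESORICS proceedings
    \begin{document}
    \title{Paper Title (placeholder)}
    % Double-blind review: do not reveal author identities in the submission
    \author{Anonymous Author(s)}
    \institute{Anonymous Institution}
    \maketitle
    \begin{abstract}
    Abstract text goes here.
    \end{abstract}
    \section{Introduction}
    Body text goes here.
    \bibliographystyle{splncs04}   % standard LNCS bibliography style
    \bibliography{references}      % assumes a references.bib file
    \end{document}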

Important Dates

  • Submission deadline: 7 June 2025 AoE
  • Notification to authors: 17 July 2025 AoE
  • Camera-ready version: 28 August 2025 AoE