AITrust workshop at WASP-HS 2024

A cross-disciplinary forum for advancing the design, development, and deployment of reliable and trustworthy AI applications.

Dates and venue

Important dates

  • August 23, 2024: First call for workshop one-page abstracts
  • September 6, 2024: One-page abstract submission deadline
  • September 12, 2024: Accepted abstract notification
  • September 15, 2024: Participant submissions
  • November 19, 2024: Event day (9:00 - 12:00)

Registration

Please register for the main WASP-HS 2024 event and select our workshop “Privacy and AI: Towards a Trustworthy Ecosystem” in the registration form.


Call for one-page abstracts

The widening adoption of machine learning (ML) and artificial intelligence (AI) has enabled successful applications across society, in areas such as healthcare, finance, robotics, transportation, and industrial operations, by embedding intelligence in real-time decisions. It is therefore essential to design and develop reliable AI applications that offer trustworthy services to users, especially in high-stakes decision making. For instance, AI-assisted robotic surgery, automated financial trading, autonomous driving, and many other modern applications are vulnerable to re-identification attacks, concept drift, dataset shift, misspecification and misconfiguration of AI algorithms, perturbations, and adversarial attacks beyond human or even machine comprehension, posing serious threats to stakeholders at many levels. Moreover, building trustworthy AI systems requires intense multi-party effort on the mechanisms and approaches that can enhance user and public trust. Topics of interest in trustworthy AI include, among others: (i) privacy preservation, (ii) bias and fairness, (iii) robust mitigation of adversarial attacks, (iv) improved privacy and security in model building, (v) ethical AI, (vi) model attribution, and (vii) scalability of models under adversarial settings.
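
As a small illustration of the first of these topics, privacy preservation, the sketch below applies the Laplace mechanism of differential privacy to a count query. The function name, the epsilon value, and the example query are our own illustrative assumptions, not methods prescribed or required by the workshop.

```python
# Minimal differential-privacy sketch: the Laplace mechanism releases a
# count with epsilon-DP by adding noise scaled to sensitivity/epsilon.
import numpy as np

def laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5):
    """Return a noisy count satisfying epsilon-differential privacy."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical query: "How many records match condition X?"
print(laplace_mechanism(true_count=42))
```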

This workshop covers new results and ongoing work in AI that address the challenges of ensuring reliability in trustworthy systems. These challenges include, but are not limited to, (i) data collection and use, (ii) data sharing and aggregation, (iii) re-identification, and (iv) secure and private learning. Nonetheless, all aspects of AI systems concerned with reliability, robustness, and security are within the scope of this workshop. The workshop focuses on robustness and performance guarantees, as well as the consistency, transparency, and safety of AI, all of which are vital to ensuring reliability. It brings together experts from academia and industry to inspire discussion on building trustworthy AI systems, including developing and assessing theoretical and empirical methods, exploring practical applications, initiating new ideas, and identifying directions for future study. Original contributions, ongoing work, and comparative studies of different methods are all welcome.

Participants are encouraged to submit a one-page abstract describing the research they will present at the workshop. Depending on the number of submissions and the topics of interest, we will arrange short talks for selected works. Authors of all accepted abstracts should prepare a poster to present at the workshop to foster discussion among participants, regardless of whether their work is selected for a short talk.

Topics of the workshop include, but are not limited to:

  • Robustness of machine learning/deep learning/reinforcement learning algorithms and trustworthy systems in general.
  • Confidence, consistency, and uncertainty in model predictions for reliability beyond robustness.
  • Transparent AI concepts in data collection, model development, deployment and explainability.
  • Adversarial attacks: evasion, poisoning, extraction, inference, and hybrid (a minimal illustrative sketch follows this list).
  • New solutions that make systems robust and secure against novel or potentially adversarial inputs, and that handle model misspecification, corrupted training data, concept drift, dataset shift, and missing or manipulated data instances.
  • Theoretical and empirical analysis of reliable/robust/secure ML methods.
  • Comparative studies with competing methods without reliable/robust certified properties.
  • Applications of reliable/robust machine learning algorithms in domains such as healthcare, biomedical, finance, computer vision, natural language processing (LLMs/LVMs), big data, and all other relevant areas.
  • Unique societal and legal challenges facing reliability for trustworthy AI systems.
  • Secure learning from data with high rates of missing values, incompleteness, and noise.
  • Private learning from sensitive and protected data.
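
As a concrete illustration of the evasion attacks mentioned above, the following is a minimal sketch of the fast gradient sign method (FGSM; Goodfellow et al., 2015), assuming a differentiable PyTorch classifier `model` and inputs normalized to [0, 1]. The function and parameter names are our own and purely illustrative, not a required baseline for submissions.

```python
# Minimal FGSM evasion-attack sketch (assumes PyTorch is installed and
# that `model` is a differentiable classifier; illustrative only).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x within an L-infinity ball of
    radius epsilon that pushes the model away from the true labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one signed-gradient step that maximally increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```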

Submission Information

Program committee

  • Elena Volodina, University of Gothenburg, Sweden
  • Johanna Björklund, Umeå University, Sweden
  • Simon Dobnik, University of Gothenburg, Sweden
  • Lili Jiang, Umeå University, Sweden
  • Xuan-Son Vu, Umeå University and DeepTensor AB, Sweden
  • Therese Lindström Tiedemann, University of Helsinki, Finland
  • Ricardo Muñoz Sánchez, University of Gothenburg, Sweden
  • Maria Irena Szawerna, University of Gothenburg, Sweden
  • Lisa Södergård, University of Helsinki, Finland


Organizers

 
General chair

  • Xuan-Son Vu, Umeå University and DeepTensor AB, Sweden

General co-chairs

  • Lili Jiang, Umeå University, Sweden
  • Elena Volodina, University of Gothenburg, Sweden
  • Simon Dobnik, University of Gothenburg, Sweden
  • Therese Lindström Tiedemann, University of Helsinki, Finland
  • Johanna Björklund, Umeå University, Sweden

Organizing co-chairs

  • Ricardo Muñoz Sánchez, University of Gothenburg, Sweden
  • Maria Irena Szawerna, University of Gothenburg, Sweden
  • Lisa Södergård, University of Helsinki, Finland
Contact

mormor.karl@svenska.gu.se or aitrust-research@googlegroups.com

Anti-harassment policy

The AITrust workshop adheres to the ACL anti-harassment policy: https://www.aclweb.org/adminwiki/index.php?title=Anti-Harassment_Policy

Acknowledgments

The workshop is organized within the research environment project “Grandma Karl is 27 years old”, WASP Media & Language, and STINT projects.