AITrust workshop at WASP-HS 2024

A cross-disciplinary forum for advancing the design, development, and deployment of reliable and trustworthy AI applications.

Venue

  • Offline participants: University of Gothenburg, Gothenburg, Sweden; Hus Patricia, Forskningsgången 6, Torg Blå (Floor 4).
  • Online Participants: The Zoom Webinar link to join the event can be found here.
Quick links
Program
Invited speakers
- Krister Lindén, Finland
- Virginia Dignum, Sweden
Call for papers
Submission information
Important dates
Program committee
Organizers

Program

Full program of the event on Nov 19, 2024 (8:15 - 12:00)

08:15 - 09:00  Workshop Registration

09:00 - 09:05  Opening Remarks (Xuan-Son (Sonny) Vu)

09:05 - 09:35  Invited Speaker: Processing Personal or Sensitive Data in the Language Bank of Finland
               Dr. Krister Lindén, Research Director, Department of Digital Humanities
               Chair: Elena Volodina

09:35 - 10:05  Session 1: Applications (Session Chair: Simon Dobnik)
  • Responsible design of financial robo-advisors - a multidisciplinary approach (Esteban Guerrero, Gökhan Buturak, Panu Kalmi)
  • An Extensible Framework for Real-Time Conversational Avatars (Somayeh Jafaritazehjani, Khanh-Tung Tran, Hoang Quan Nguyen, Xuan-Son Vu, Johanna Björklund)
  • Transformer-assisted Hate Crime Classification and Estimation in Sweden (Hollis Sargeant, Hannes Waldetoft, Måns Magnusson)
  • Trustworthy AI in the public sector: Insights and recommendations grounded in a case study (Alexander Berman, Karl de Fine Licht, Vanja Carlsson)

10:05 - 10:25  Coffee Break

10:25 - 10:50  Session 2: Theoretical / CoreNLP (Session Chair: Lili Jiang)
  • AI for open research data with Grandma Karl (Maria Irena Szawerna, Simon Dobnik, Therese Lindström Tiedemann, Lisa Södergård, Ricardo Muñoz Sánchez, Xuan-Son Vu, Elena Volodina)
  • Intersectional Hallucinations in Structured Synthetic Data (Ericka Johnson, Saghi Hajisharif)
  • Mitigate Unfairness in Adversarial Robustness of Deep Neural Networks (Seyedhamidreza Mousavi, Seyedali Mousavi, Masoud Daneshtalab)

10:50 - 11:15  Session 3: Theory + Linguistics (Session Chair: Therese Lindström Tiedemann)
  • Trustworthy AI and Misplaced Trust: Ethical Implications from the Philosophy and Moral Psychology of Trust (Dorna Behdadi)
  • AI’s truth is not my truth and what language has to do with it (Martina Wiltschko & Preeti Kumari)
  • Explainability of Artificial Intelligence Systems: Mapping the Academic Discourse (Signe Skov)

11:15 - 11:20  Short Break

11:20 - 11:50  Invited Speaker: The AI paradoxes
               Prof. Virginia Dignum, Umeå University and member of the United Nations Advisory Body on AI
               Chair: Lili Jiang

11:50 - 12:00  Closing Remarks (Xuan-Son (Sonny) Vu)

Invited speakers

Krister Lindén, Research Director, Department of Digital Humanities, University of Helsinki
Processing Personal or Sensitive Data in the Language Bank of Finland
Many corpora are publicly accessible, but the FAIR principles do not require every corpus to be openly available as long as there is an openly known path to access it. This allows for ways to protect data sets other than anonymisation or pseudonymisation, e.g. password-protected log-in and encryption. The Language Bank of Finland is a service for researchers using language resources across the digital humanities and social sciences. The Language Bank provides online means to apply for the rights to use restricted resources. In this presentation, we will look at case studies where we collected speech data so that it can also be shared with industry, and at how we can share research data collected from the dark web with academic researchers. The Language Bank has a wide variety of text and speech corpora and tools for studying them. The corpora can be analysed and processed with the Language Bank’s tools or downloaded. The Language Bank is coordinated by the national FIN-CLARIN consortium formed by Finnish universities and other research organizations. FIN-CLARIN is part of the international CLARIN ERIC research infrastructure. Researchers and research groups can agree with FIN-CLARIN on depositing and distributing their own material. FIN-CLARIN also enables access to the whole CLARIN community’s language resources. Using the Language Bank is free for researchers and students.
BIO
Dr. Krister Lindén is Research Director of Language Technology at the Department of Digital Humanities of the University of Helsinki. He is National Coordinator of FIN-CLARIN, Director of FIN-CLARIAH, and serves as Chair of the National Coordinators of the European Research Infrastructure Consortium CLARIN. Lindén is familiar with current methods and branches of language and speech technology. He has directed a number of research projects funded by the Research Council of Finland and is Vice-Team Leader of the Centre of Excellence in Ancient Near Eastern Empires. In addition to having developed software for processing resources for the national languages of Finland, he has published more than 190 peer-reviewed scientific publications. He is an advisor for the National Library of Finland, the Centre for the Languages of Finland, as well as several commercial companies.
Prof. Virginia Dignum, Umeå University, Sweden
The AI paradoxes
This talk discusses the often contradictory nature of AI, exploring how its advancements highlight the irreplaceable qualities of human intelligence and the importance of governance. We’ll use key paradoxes, such as the Agreement Paradox, which questions why the more we discuss AI, the less we seem to agree on what it is. We’ll also examine the Intelligence Paradox, revealing how AI’s capabilities underscore what makes human intelligence unique. Furthermore, we’ll tackle the Justice Paradox, addressing the challenge of achieving true fairness with AI, and the Regulation Paradox, which focuses on balancing innovation and oversight in the AI era. All in all, an exploration of how paradoxes can help us uncover how AI shapes our world and how we can ensure it serves humanity ethically and equitably.
BIO
Virginia Dignum is Professor of Responsible Artificial Intelligence at Umeå University, Sweden, where she leads the AI Policy Lab. She is also senior advisor on AI policy to the Wallenberg Foundations. She holds a PhD in Artificial Intelligence from Utrecht University (2004), is a member of the Royal Swedish Academy of Engineering Sciences (IVA), and is a Fellow of the European Artificial Intelligence Association (EURAI). She is a member of the United Nations Advisory Body on AI, the Global Partnership on AI (GPAI), UNESCO’s expert group on the implementation of AI recommendations, and OECD’s expert group on AI, founder of ALLAI, the Dutch AI Alliance, and co-chair of the WEF’s Global Future Council on AI. She was a member of the EU’s High Level Expert Group on Artificial Intelligence and leader of UNICEF’s guidance for AI and children. Her new book “The AI Paradox” is planned for publication in late 2024.

Call for one-page abstracts

The widening adoption of machine learning (ML) and artificial intelligence (AI) has enabled successful applications across society, such as healthcare, finance, robotics, transportation, and industrial operations, by embedding real-time intelligence. It is therefore desirable to design and develop AI applications that offer reliable and trustworthy services to users, especially in high-stakes decision making. For instance, AI-assisted robotic surgery, automated financial trading, autonomous driving, and many other modern applications are vulnerable to re-identification attacks, concept drift, dataset shift, misspecification and misconfiguration of AI algorithms, perturbations, and adversarial attacks beyond human or even machine comprehension, thereby posing serious threats to stakeholders at many levels. Moreover, building trustworthy AI systems requires intense multi-party efforts to address the different mechanisms and approaches that could enhance user and public trust. Among others, the following topics are of interest in trustworthy AI: (i) privacy preservation, (ii) bias and fairness, (iii) robust mitigation of adversarial attacks, (iv) improved privacy and security in model building, (v) ethical AI, (vi) model attribution, and (vii) scalability of models under adversarial settings.

This workshop covers new results and ongoing work in AI addressing the challenges of ensuring reliability in trustworthy systems. These challenges include, but are not limited to, (i) data collection and use, (ii) data sharing and aggregation, (iii) re-identification, and (iv) secure and private learning. Nonetheless, all aspects of AI systems that deal with reliability, robustness, and security are within the scope of this workshop. The workshop focuses on robustness and performance guarantees, as well as on the consistency, transparency, and safety of AI, all of which are vital to ensuring reliability. It brings together experts from academia and industry to inspire discussion on building trustworthy AI systems, including developing and assessing theoretical and empirical methods, presenting practical applications, initiating new ideas, and identifying directions for future study. Original contributions, ongoing work, and comparative studies of different methods are all welcome.

Participants are encouraged to submit a one-page abstract about the research they will present at the workshop. Depending on the number of submissions and the topics of interest, we will arrange short talks for selected works. Regardless of whether their papers are selected for short talks, all authors of accepted abstracts should prepare a poster to present at the workshop, to foster discussion among participants.

Topics of the workshop include, but are not limited to:

  • Robustness of machine learning/deep learning/reinforcement learning algorithms and trustworthy systems in general.
  • Confidence, consistency, and uncertainty in model predictions for reliability beyond robustness.
  • Transparent AI concepts in data collection, model development, deployment, and explainability.
  • Adversarial attacks: evasion, poisoning, extraction, inference, and hybrid attacks.
  • New solutions for making systems robust and secure against novel or potentially adversarial inputs, and for handling model misspecification, corrupted training data, concept drift, dataset shift, and missing or manipulated data instances.
  • Theoretical and empirical analysis of reliable/robust/secure ML methods.
  • Comparative studies with competing methods without certified reliability/robustness properties.
  • Applications of reliable/robust machine learning algorithms in domains such as healthcare, biomedicine, finance, computer vision, natural language processing (LLMs/LVMs), big data, and all other relevant areas.
  • Unique societal and legal challenges facing reliability for trustworthy AI systems.
  • Secure learning from data with high rates of missing values, incompleteness, and noise.
  • Private learning from sensitive and protected data.

Submission Information

Program committee

  • Elena Volodina, University of Gothenburg, Sweden
  • Johanna Björklund, Umeå University, Sweden
  • Simon Dobnik, University of Gothenburg, Sweden
  • Lili Jiang, Umeå University, Sweden
  • Xuan-Son Vu, Umeå University and DeepTensor AB, Sweden
  • Therese Lindström Tiedemann, University of Helsinki, Finland
  • Ricardo Muñoz Sánchez, University of Gothenburg, Sweden
  • Maria Irena Szawerna, University of Gothenburg, Sweden
  • Lisa Södergård, University of Helsinki, Finland

Back to top

Dates

Important dates

  • Aug 23, 2024: First call for workshop one-page abstracts
  • September 6, 2024: One-page abstract submission deadline
  • September 12, 2024: Accepted abstract notification
  • September 15, 2024: Participant submissions
  • November 19, 2024: Event day (9:00 - 12:00)

Registration

(Updated on Sep 14, 2024) Please register for our workshop, ‘Privacy and AI: Towards a Trustworthy Ecosystem,’ using form [1], and for the main event using form [2].

Organizers

General chair
* Xuan-Son Vu, Umeå University and DeepTensor AB, Sweden
General co-chairs
* Lili Jiang, Umeå University, Sweden
* Elena Volodina, University of Gothenburg, Sweden
* Simon Dobnik, University of Gothenburg, Sweden
* Therese Lindström Tiedemann, University of Helsinki, Finland
* Johanna Björklund, Umeå University, Sweden
Organizing co-chairs
* Ricardo Muñoz Sánchez, University of Gothenburg, Sweden
* Maria Irena Szawerna, University of Gothenburg, Sweden
* Lisa Södergård, University of Helsinki, Finland
Contact: mormor.karl@svenska.gu.se or aitrust-research@googlegroups.com
Anti-harassment policy
The AITrust workshop adheres to the ACL anti-harassment policy: https://www.aclweb.org/adminwiki/index.php?title=Anti-Harassment_Policy.
Acknowledgments
The workshop is organized within the research environment project “Grandma Karl is 27 years old”, WASP Media & Language, and STINT projects.