Amir H. Karimi

Assistant Professor

O'Donovan Chair in Trustworthy AI

Electrical & Computer Engineering, University of Waterloo

Cheriton School of Computer Science (cross-appointment)

Faculty Affiliate, Vector Institute

amirh.karimi [at] uwaterloo.ca

Bio

Dr. Amir-Hossein Karimi is an award-winning researcher and educator, an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Waterloo (cross-appointed to the Cheriton School of Computer Science), and a Vector Institute Faculty Affiliate. He leads the Collaborative Human-AI Reasoning Machines (✨CHARM✨) Lab, dedicated to advancing safe and trustworthy human-AI collaboration. Prior to joining Waterloo, Dr. Karimi accumulated significant industry experience at leading technology companies such as BlackBerry (2013), Meta (Facebook) (2014-2016), Google Brain (2021), and DeepMind (2022), and provided consulting services to various startups and incubators, including NEXT AI. His contributions have earned him multiple accolades, such as the University of Toronto Spirit of Engineering Science Award (2015), the University of Waterloo Alumni Gold Medal (2018), the NSERC Canada Graduate Scholarship - Doctoral (2018), the Google PhD Fellowship (2021), the ETH Zurich Medal (2024), NSERC Discovery Grants and supplements (2024), and the Igor Ivkovic Teaching Excellence Award (2024).

✨CHARM Lab✨

As AI becomes more embedded in everyday life, in domains such as healthcare, education, finance, and transportation, our dependence on these systems grows, and so does the risk of consequential errors. The mandate of the Collaborative Human-AI Reasoning Machines (✨CHARM✨) Lab is to enhance the integration of AI systems into human decision-making by building systems that allow users to understand 🔍, challenge ⚖️, and improve 🚀 AI decisions. The CHARM Lab brings expertise in explainable AI, causal inference, and neuro-symbolic approaches, and collaborates with leading experts in fields such as social psychology, cognitive science, human-computer interaction, multi-agent reinforcement learning, game theory, and behavioral economics.

Prospective Students

The lab is always on the lookout for exceptional and highly motivated students and visitors at all levels (bachelor's, master's, doctoral, postdoctoral). If you are passionate about building the future of trustworthy human-AI symbiosis and have a strong background in machine learning, computer science, or a related field, please fill out this form.

Current Members
Amir-Hossein Karimi
Principal Investigator (PI)

Mohammad Hadi Sepanj
Postdoctoral Fellow

Zahra Khotanlou
Master's Student

Farzan Mirshekari
Undergraduate Student
(w/ Prof. Tahvildari)

Eugene Yu
Postdoctoral Fellow
(w/ Prof. Grossmann)

Maryam Ghorbansabagh
Master's Student
(w/ Prof. Grossmann)

Ahmed Abdelaal
PhD Student

Dongzhuyuan Lu
Master's Student

Tom Wielemaker
Undergraduate Student

Hosna Oyarhoseini
Master's Student
(w/ Prof. Lin)

Mina Kebriaee
PhD Student
(w/ Prof. Tahvildari)

Lab Alumni
Hamdi Altaheri
Postdoctoral Fellow
(w/ Prof. Karray)
(next: King Saud University)

Zachary Wu
Master's Student
(w/ Prof. Tahvildari)

Hamza Mostafa
Undergraduate Student
(next: OpenAI Inc.)

Abubakar Bello
Undergraduate Student
(next: Microsoft Inc.)

Mohammadreza Alavi
Undergraduate Student

Past Mentees
Ahmad Ehyaei
PhD Student
(w/ Prof. Farnadi)
(next: Intl. Max Planck Research Schools)

Miriam Rateike
PhD Student
(w/ Prof. Valera)
(next: Google PhD Fellow 2023)

Ricardo Dominguez-Olmedo
Master's Student
(w/ Prof. Schölkopf)
(next: Google PhD Fellow 2026)

Kiarash Mohammadi
Master's Student
(w/ Prof. Valera)
(next: MILA AI Institute)

Alexandra Walter
Master's Student
(w/ Prof. Valera)
(next: Helmholtz Data Science School of Health)

Publications

Dr. Karimi's scholarly contributions have appeared almost exclusively at top-tier AI and ML venues, including NeurIPS, ICML, AAAI, AISTATS, ACM FAccT, and ACM AIES. He has authored influential publications, including a comprehensive survey in the prestigious ACM Computing Surveys; he also holds a patent and has contributed a book chapter. Dr. Karimi's work on algorithmic recourse has notably elevated the topic's prominence in responsible AI research, with its presence on Google Scholar growing from almost nothing to hundreds of results in just five years; algorithmic recourse is now a mandatory criterion in key sectors, including under Canada's Treasury Board Directive on Automated Decision-Making. Several of Dr. Karimi's papers have received over 100 citations each. Dedicated to knowledge mobilization and reproducibility, he maintains open-source code that has garnered over 100 GitHub stars.

Most recent publications are available on Google Scholar.

📝 Explainable AI is Causality in Disguise
✍️ Karimi
🏛️ EurIPS Theory of XAI Workshop
TL;DR: Solving XAI requires us to make explicit our causal assumptions of the world
📝 From Individual to Multi-Agent Algorithmic Recourse: Minimizing the Welfare Gap via Capacitated Bipartite Matching
✍️ Khotanlou, Larson, Karimi
TL;DR: Providing recourse to any one individual affects what recourse can be provided to other individuals
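
To make the capacitated-matching idea above concrete, here is a minimal sketch, assuming hypothetical individuals, recourse actions, capacities, and per-pair costs (none of these names or numbers come from the paper): each individual needs one recourse action, each action can serve only a limited number of people, and a min-cost max-flow computes the assignment, which makes the externality between individuals explicit.

# Hedged sketch: capacitated bipartite matching as min-cost max-flow.
# All node names, costs, and capacities below are illustrative stand-ins.
import networkx as nx

individuals = ["i1", "i2", "i3"]
actions = {"a1": 1, "a2": 2}  # action -> capacity (how many people it can serve)
cost = {("i1", "a1"): 2, ("i1", "a2"): 5,
        ("i2", "a1"): 1, ("i2", "a2"): 4,
        ("i3", "a1"): 3, ("i3", "a2"): 2}  # effort for each individual-action pair

G = nx.DiGraph()
for i in individuals:
    G.add_edge("s", i, capacity=1, weight=0)    # each individual needs one action
for (i, a), c in cost.items():
    G.add_edge(i, a, capacity=1, weight=c)      # candidate assignment with its cost
for a, cap in actions.items():
    G.add_edge(a, "t", capacity=cap, weight=0)  # capacities couple individuals together

flow = nx.max_flow_min_cost(G, "s", "t")        # welfare-aware joint assignment
assignment = {i: a for i in individuals for a, f in flow[i].items() if f > 0}
print(assignment)                               # which action each individual receives

Because a1 can serve only one person in this toy instance, granting it to one individual necessarily pushes the others toward costlier actions, which is exactly the coupling the welfare-gap formulation addresses.
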
📝 Imagining and building wise machines: The centrality of AI metacognition
✍️ Johnson, Karimi, Bengio, Chater, Gerstenberg, Larson, Levine, Mitchell, Rahwan, Schölkopf, Grossmann
🏛️ Trends in Cognitive Sciences
TL;DR: AI systems should integrate metacognitive strategies to enhance robustness, explainability, cooperation, and safety in complex real-world scenarios
📝 Temporal Convolutional Transformer for EEG-Based Motor Imagery Decoding
✍️ Altaheri, Karray, Karimi
🏛️ Scientific Reports (Nature Portfolio)
TL;DR: TCFormer leverages CNNs, transformers, and temporal convolutions to decode EEG motor imagery, significantly outperforming existing BCI approaches
📝 On the Relationship Between Explanation and Prediction: A Causal View
✍️ Karimi, Muandet, Kornblith, Schölkopf, Kim
🏛️ ICML 2023 (acceptance rate: 27.9%)
TL;DR: do explanations actually explain the predictions? yes, among other things....
📝 A Survey of Algorithmic Recourse: Contrastive Explanations & Consequential Recommendations
✍️ Karimi, Barthe, Schölkopf, Valera
🏛️ ACM CSUR 2022 (impact factor: 23.8)
TL;DR: definitions, formulations, solutions, and prospects for recourse research
📝 Key-Value Memory Networks for Directly Reading Documents
✍️ Miller, Fisch, Dodge, Karimi, Bordes, Weston
🏛️ EMNLP 2016 (acceptance rate: 24.3%)
TL;DR: use key-value pairs to efficiently retrieve and store information for improved document comprehension
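
As a toy illustration of the mechanism in the TL;DR above, the read step of a key-value memory can be sketched in a few lines, assuming random stand-in embeddings (the published model learns key and value representations from documents): the query addresses memory by key similarity, and the output is the attention-weighted sum of the values.

# Hedged toy sketch of one key-value memory read; embeddings are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
d, n_slots = 8, 5
K = rng.normal(size=(n_slots, d))  # key embeddings (used for addressing)
V = rng.normal(size=(n_slots, d))  # value embeddings (used for reading)
q = rng.normal(size=d)             # query embedding

scores = K @ q                     # address memory by query-key similarity
p = np.exp(scores - scores.max())
p = p / p.sum()                    # softmax attention over memory slots
o = p @ V                          # read: attention-weighted sum of values
print(p.round(3), o.shape)         # attention weights and the (d,)-shaped read-out
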
📝 Prospector Heads: Generalized Feature Attribution for Large Models & Data
✍️ Machiraju, Derry, Desai, Guha, Karimi, Zou, Altman, Ré, Mallick
🏛️ ICML 2024 (acceptance rate: 27.5%)
📝 Scaling Guarantees for Nearest Counterfactual Explanations
✍️ Mohammadi, Karimi, Barthe, Valera
🏛️ ACM AIES 2021 (acceptance rate: 38.0%)
TL;DR: generate optimal counterfactual explanations using mixed-integer linear programs
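
To make the optimization concrete, here is a minimal sketch for the special case of a linear classifier with an L1 cost, where the nearest-counterfactual search reduces to a plain linear program (the weights, bias, and input below are illustrative stand-ins; the paper's mixed-integer formulation is what handles richer model classes with guarantees).

# Hedged sketch: nearest counterfactual for a *linear* model as a linear program.
import numpy as np
from scipy.optimize import linprog

w = np.array([1.0, -2.0, 0.5])   # illustrative linear model: f(x) = sign(w @ x + b)
b = -1.0
x0 = np.array([0.5, 1.0, 0.0])   # factual input, currently classified negative
eps = 1e-3                       # margin to land strictly on the positive side
n = len(x0)
I = np.eye(n)

# Variables z = [x, t]; minimize sum(t) subject to t >= |x - x0| (L1 distance).
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.vstack([
    np.hstack([I, -I]),                          #  x - t <= x0
    np.hstack([-I, -I]),                         # -x - t <= -x0
    np.hstack([-w[None, :], np.zeros((1, n))]),  # w @ x >= eps - b (flip the label)
])
b_ub = np.concatenate([x0, -x0, [b - eps]])
bounds = [(None, None)] * n + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_cf = res.x[:n]
print("counterfactual:", x_cf.round(3), "now positive:", w @ x_cf + b >= 0)
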
📝 Model-Agnostic Counterfactual Explanations for Consequential Decisions
✍️ Karimi, Barthe, Valera
🏛️ AISTATS 2019 (acceptance rate: 32.4%)
TL;DR: generate optimal counterfactual explanations using satisfiability solvers
📝 Robustness Implies Fairness in Causal Algorithmic Recourse
✍️ Ehyaei, Karimi, Schölkopf, Maghsudi
🏛️ ACM FAccT 2023 (acceptance rate: 24.6%)
TL;DR: as the title says...
📝 On the Robustness of Causal Algorithmic Recourse
✍️ Dominguez-Olmedo, Karimi, Schölkopf
🏛️ ICML 2022 (acceptance rate: 21.9%)
TL;DR: robustness of recourse is separate from robustness of prediction
📝 On the Fairness of Causal Algorithmic Recourse
✍️ von Kügelgen, Karimi, Bhatt, Valera, Weller, Schölkopf
🏛️ AAAI 2022 (acceptance rate: 15.0%)
TL;DR: fairness of recourse is separate from fairness of prediction
📝 Algorithmic Recourse under Imperfect Causal Knowledge: a Probabilistic Approach
✍️ Karimi, von Kügelgen, Schölkopf, Valera
🏛️ NeurIPS 2020 (acceptance rate: 20.1%)
TL;DR: causal assumptions are rarely available; instead generate recourse probabilistically
📝 Algorithmic Recourse: from Counterfactual Explanations to Interventions
✍️ Karimi, Schölkopf, Valera
🏛️ ACM FAccT 2021 (acceptance rate: 25.0%)
TL;DR: real-world causal assumptions are needed for counterfactual explanations & recourse
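
For intuition on why real-world causal assumptions matter here, consider a toy abduction-action-prediction computation in a two-variable linear SCM (all coefficients and values are made up for illustration): an intervention on X1 propagates to its descendant X2, a downstream effect that a purely feature-wise counterfactual would ignore.

# Hedged toy sketch: recourse as an intervention in a linear SCM,
# computed via abduction, action, prediction. All numbers are illustrative.
# SCM: X1 := U1;  X2 := 0.5 * X1 + U2.

x1, x2 = 2.0, 3.0         # observed factual values

# 1) Abduction: recover the exogenous noise consistent with the observation.
u1 = x1
u2 = x2 - 0.5 * x1        # u2 = 2.0

# 2) Action: intervene do(X1 := x1 + 1.0), an actionable change to X1.
x1_cf = x1 + 1.0

# 3) Prediction: push the recovered noise through the modified equations.
x2_cf = 0.5 * x1_cf + u2  # X2 rises to 3.5; a feature-wise tweak would leave it at 3.0

print(x1_cf, x2_cf)       # 3.0 3.5
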
📝 Causal Adversarial Perturbations for Individual Fairness and Robustness in Heterogeneous Data Spaces
✍️ Ehyaei, Mohammadi, Karimi, Samadi, Farnadi
🏛️ AAAI 2023 (acceptance rate: 23.7%)
📝 On Data Manifolds Entailed by Structural Causal Models
✍️ Dominguez-Olmedo, Karimi, Arvanitidis, Schölkopf
🏛️ ICML 2023 (acceptance rate: 27.9%)
📝 SRP: Efficient class-aware embedding learning for large-scale data via supervised random projections
✍️ Karimi, Wong, Ghodsi
🏛️ arXiv
📝 Ensembles of Random Projections for Nonlinear Dimensionality Reduction
✍️ Karimi, Shafiee, Ghodsi, Wong
🏛️ CVIS 2017 (acceptance rate: 40.0%)
📝 Synthesizing Deep Neural Network Architectures using Biological Synaptic Strength Distributions
✍️ Karimi, Shafiee, Ghodsi, Wong
🏛️ CCN 2017 (acceptance rate: 40.0%)
📝 FEELS: a Full-Spectrum Enhanced Emotion Learning System for Assisting People with Autism Spectrum Disorder
✍️ Karimi, Boroomand, Pfisterer, Wong
🏛️ CVIS 2018 (acceptance rate: 40.0%)
📝 Discovery Radiomics via a Mixture of Sequencers for Multi-Parametric MRI
✍️ Karimi, Chung, Shafiee, Khalvati, Haider, Ghodsi, Wong
🏛️ ICIAR 2017 (acceptance rate: 40.0%)
📝 Automated detection and cell density assessment of keratocytes in the human corneal stroma from ultrahigh resolution optical coherence tomograms
✍️ Karimi, Wong, Bizheva
🏛️ Biomedical Optics Express (impact factor: 2.9)
📝 Distance Correlation Autoencoder
✍️ Wang, Karimi, Ghodsi
🏛️ IJCNN 2018 (acceptance rate: 22.7%)
📝 Spatio-temporal Saliency Detection using Abstracted Fully-Connected Graphical Models
✍️ Karimi, Shafiee, Scharfenberger, BenDaya, Haider, Talukdar, Clausi, Wong
🏛️ ICIP 2016 (acceptance rate: 40.0%)

Teaching

Dr. Karimi enjoys teaching diverse audiences, from broad AI topics for general audiences, to ML courses for undergraduate and graduate students, to specialized courses for practitioners. In his first teaching term at the University of Waterloo, he was honored with the 2024 Igor Ivkovic Teaching Excellence Award. Dr. Karimi has been invited to present talks, lectures, and tutorials at institutions such as MIT, Harvard, ETH Zurich, MILA, University College London, Cyber Valley Health, the Institute of Mathematical Statistics, NEC Europe Labs, DeepMind, and Google Brain. He has given notable tutorials at KDD 2023 (on Causal Explainable AI) and at the Toronto ML Summit 2024 (on Algorithmic Recourse). To book Dr. Karimi for teaching engagements at your organization or event, reach out here.
For everyone...
From AI to Life Lessons (Medium Blog)
Success Studio (BBC Podcast)
AI for Autism

For students...
Machine Learning Essentials
CausEthical ML
Dimensionality Reduction

For professionals...
Machine Learning Essentials
Toronto ML Summit Tutorial: Algorithmic Recourse
KDD Tutorial: Causal Explainable AI

Vitæ

Full CV in PDF. Having relocated five times across three continents for studies and work, from Iran to Canada and onward to the USA, Germany, Switzerland, and the UK, Dr. Amir-Hossein Karimi has accumulated over 15 years of technical experience across research, industry, and startup roles. Now an Assistant Professor at the University of Waterloo and a Vector Institute Faculty Affiliate, he has held prominent research positions at Google DeepMind and Google Brain, alongside software engineering roles at BlackBerry and Meta (Facebook). His academic journey includes a Ph.D. at the Max Planck Institute & ETH Zürich, with a focus on causal inference and explainable AI.

Press, Funding, & Consultations

Dr. Karimi is grateful for the generous funding support from the University of Waterloo, NSERC, CIFAR, Google, Waterloo.AI, and the O'Donovan Family, enabling his team to push the boundaries of human-AI research.

Beyond his academic contributions, Dr. Karimi has consulted internationally for several startups, supporting discovery, product-market fit, and the scaling of operations, as well as fundraising and grant writing. He is available to discuss how his research and experience can deliver value to your organization and stakeholders. Book a time via this link.

For press inquiries, feel free to reach out.