Dr. Karimi's scholarly contributions have appeared almost exclusively at top-tier AI and ML venues, including NeurIPS, ICML, AAAI, AISTATS, ACM FAccT, and ACM AIES. He has authored influential publications, including a comprehensive survey in the prestigious ACM Computing Surveys, holds a patent, and is a contributing author of a book chapter.
Dr. Karimi’s work on algorithmic recourse has notably elevated its prominence in responsible AI research, with the topic growing from a handful of Google Scholar results to hundreds in just five years; algorithmic recourse is now a required consideration in key policy frameworks, including Canada’s Treasury Board Directive on Automated Decision-Making.
Several of Dr. Karimi's papers have received over 100 citations each. Dedicated to knowledge mobilization and reproducibility, he releases his code openly, and his repositories have garnered over 100 GitHub stars.
Dr. Karimi has been invited to present talks, lectures, and tutorials at institutions such as MIT, Harvard, ETH Zurich, MILA, University College London, Cyber Valley Health, Institute of Mathematical Statistics, NEC Europe Labs, DeepMind, and Google Brain. He gave notable tutorials on Causal Explainable AI at KDD 2023 and Algorithmic Recourse at the Toronto ML Summit 2024.
Committed to fostering inclusivity and breaking down educational barriers, Dr. Karimi co-founded "Prince of AI," an initiative providing free AI education to over 30,000 learners. The platform offers content comparable to an introductory AI/ML course through free posts, video reels, and webinars.
Prior to joining UWaterloo, Dr. Karimi accumulated significant industry experience at leading tech companies, including BlackBerry (2013), Meta (Facebook) (2014-16), Google Brain (2021), and DeepMind (2022), and provided consulting services for various startups and incubators, including NEXT AI.
His contributions have earned him multiple accolades, including the University of Toronto Spirit of Engineering Science Award (2015), the UWaterloo Alumni Gold Medal Award (2018), the NSERC Canada Graduate Scholarship - Doctoral (2018), the Google PhD Fellowship (2021), the ETH Zurich Medal (2024), an NSERC Discovery Grant (2024), and the Igor Ivkovic Teaching Excellence Award (2024).
✨CHARM Lab✨
The mandate of the Collaborative Human-AI Reasoning Machines (✨CHARM✨) Lab is to enhance the integration of AI systems into human decision-making, ensuring they are not only powerful but also safe, reliable, and aligned with human values.
As AI becomes more embedded in everyday life, e.g., 🏥 healthcare, 🎓 education, 💼 finance, and 🚗 transportation, our dependence on these systems grows, and so does the risk of consequential errors.
Our mission is to develop AI systems that can detect 🕵️ potential issues, correct 🛠️ mistakes, and ultimately perfect 🤝 the human-AI partnership, where humans and machines work together to achieve better outcomes. 🌍✨
The CHARM Lab focuses on causal inference, explainable AI, and neuro-symbolic approaches in order to build systems that allow users to understand, challenge, and improve AI decisions.
We also borrow insights from, and collaborate with leading experts in, fields such as the social sciences, cognitive science, human-computer interaction, multi-agent reinforcement learning, game theory, and behavioral economics.
Prospective Students
The lab is always on the lookout for exceptional and highly motivated students/visitors across all levels (bachelor's, master's, doctoral, postdoctoral).
If you are passionate about building the future of trustworthy human-AI symbiosis and have a strong background in machine learning, computer science, or a related field, please fill out
this form.
📝 Imagining and building wise machines: The centrality of AI metacognition
✍️ Johnson, Karimi, Bengio, Chater, Gerstenberg, Larson, Levine, Mitchell, Rahwan, Schölkopf, Grossmann
✨ TL;DR: AI systems should integrate metacognitive strategies to enhance robustness, explainability, cooperation, and safety in complex real-world scenarios
📝 On the Relationship Between Explanation and Prediction: A Causal View
✍️ Karimi, Muandet, Kornblith, Schölkopf, Kim
🏛️ ICML 2023 (acceptance rate: 27.9%) ✨ TL;DR: do explanations actually explain the predictions? yes, among other things
📝 Key-Value Memory Networks for Directly Reading Documents
✍️ Miller, Fisch, Dodge, Karimi, Bordes, Weston
🏛️ EMNLP 2016 (acceptance rate: 24.3%) ✨ TL;DR: use key-value pairs to efficiently retrieve and store information for improved document comprehension
📝 On the Robustness of Causal Algorithmic Recourse
✍️ Dominguez-Olmedo, Karimi, Schölkopf
🏛️ ICML 2022 (acceptance rate: 21.9%) ✨ TL;DR: robustness of recourse is separate from robustness of prediction
📝 On the Fairness of Causal Algorithmic Recourse
✍️ von Kügelgen, Karimi, Bhatt, Valera, Weller, Schölkopf
🏛️ AAAI 2022 (acceptance rate: 15.0%) ✨ TL;DR: fairness of recourse is separate from fairness of prediction
📝 FEELS: a Full-Spectrum Enhanced Emotion Learning System for Assisting People with Autism Spectrum Disorder
✍️ Karimi, Boroomand, Pfisterer, Wong
🏛️ CVIS 2018 (acceptance rate: 40.0%)
📝 Automated detection and cell density assessment of keratocytes in the human corneal stroma from ultrahigh resolution optical coherence tomograms
✍️ Karimi, Wong, Bizheva
🏛️ Biomedical Optics Express (impact factor: 2.9)
Dr. Karimi enjoys teaching diverse audiences, from broad AI topics for general audiences to ML courses for undergraduate/graduate students and specialized courses for practitioners. In his first teaching term at the University of Waterloo, he was honored with the 2024 Igor Ivkovic Teaching Excellence Award. Alongside the freely available content linked below, please reach out to book Dr. Karimi for teaching engagements at your organization or event.
Full CV in PDF. Having immigrated five times across three continents for studies and work, from Iran to Canada and onward to the USA, Germany, Switzerland, and the UK, Dr. Amir-Hossein Karimi has accumulated over 15 years of technical experience in research and industry roles. Now an Assistant Professor at the University of Waterloo and a Vector Institute Faculty Affiliate, he has held research positions at Google DeepMind and Google Brain, alongside software engineering roles at BlackBerry and Meta (Facebook). His academic journey includes a Ph.D. at the Max Planck Institute & ETH Zürich, with a focus on causal inference and explainable AI.
UWaterloo & Vector Institute (Sep 2023 - Present)
Assistant Professor, Electrical and Computer Engineering Department
Google DeepMind (May 2022 - Oct 2022)
Research Scientist Intern. Mentors: Dr. Lars Buesing, Dr. Jessica Hamrick, David Amos
Google Brain (Dec 2021 - Apr 2022)
Research Scientist Intern. Mentors: Dr. Been Kim, Dr. Simon Kornblith
Max Planck Institute & ETH Zürich (Oct 2018 - Aug 2023)
Ph.D. Student in Computer Science. Advisors: Prof. Bernhard Schölkopf, Prof. Isabel Valera
University of Waterloo (Sep 2016 - Aug 2018)
M.Math in Computer Science. Advisors: Prof. Alexander Wong, Prof. Ali Ghodsi
Meta (Facebook) Inc. (Aug 2015 - Sep 2016)
Software Engineer. Mentors: Dr. Antoine Bordes, Dr. Jason Weston, Kaustubh Karkare
Meta (Facebook) Inc. (Feb 2014 - Apr 2014)
Software Engineer Intern. Mentor: Chris Triolo
BlackBerry Inc. (May 2013 - Dec 2013)
Software Engineer Intern. Mentor: Chris Joe
Stanford University (May 2012 - Aug 2012)
Undergraduate Research Assistant. Advisor: Prof. Amin Arbabian
University of Toronto (Sep 2010 - May 2015)
B.A.Sc. in Engineering Science, Electrical and Computer Stream. Advisors: Prof. Chris Eliasmith, Prof. Richard Zemel
Press, Funding, & Consultations
Dr. Karimi is grateful for the generous funding support from the University of Waterloo, NSERC, Google, and Waterloo.AI, enabling his team to push the boundaries of human-AI research.
In addition to his academic contributions, Dr. Karimi has consulted internationally for several startups, leveraging his expertise for discovery, product-market fit, and scaling of operations, in addition to fundraising and grant writing. He is available to discuss how his research and experience can deliver value to your organization and stakeholders.