Amir-Hossein Karimi

Bio:

I am an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Waterloo, and a Vector Institute Faculty Affiliate.

My lab's mission is to advance the state of the art in artificial intelligence and chart the path toward trustworthy human-AI symbiosis. In particular, I am interested in developing systems that help people recover from or amend poor outcomes caused by AI decisions; in assessing the safety, factuality, and ethics of AI systems to foster trust in AI; and in effectively combining human and machine abilities. To this end, my research explores the intersection of causal inference, explainable AI, and program synthesis.

I have had the privilege of presenting my work at leading AI and machine learning venues such as NeurIPS, ICML, AAAI, AISTATS, ACM FAccT, and ACM AIES. My contributions to the field of algorithmic recourse have been recognized at these venues through spotlight and oral presentations, as well as through a book chapter and a highly regarded survey paper in ACM Computing Surveys.

Before joining the University of Waterloo, I gained extensive industry experience at Meta, Google Brain, and DeepMind, and provided AI consulting services worth over $250,000 to numerous startups and incubators. My academic and non-academic endeavors have been recognized with awards such as the Spirit of Engineering Science Award (University of Toronto, 2015), the Alumni Gold Medal Award (University of Waterloo, 2018), the NSERC Canada Graduate Scholarship (2018), and the Google PhD Fellowship (2021).

I am a firm believer in the strength of diverse perspectives and am dedicated to building inclusive spaces in academia and beyond. As a professor, I strive to inspire the next generation of AI researchers and engineers, helping them understand and responsibly apply the power of AI. Further emphasizing my commitment to inclusive learning, I co-founded "PrinceOfAI," an initiative that provides free education on basic and advanced AI topics to a community of over 25,000 individuals. This initiative aims to reduce barriers to education and offer opportunities to those who traditionally lack access. With a wealth of content equivalent to an introductory Machine Learning course, our platform facilitates active learning through technical posts, engaging video reels, and interactive webinars. Future plans include synchronous workshops, one-to-one mentoring, and a regional Machine Learning summer school, with the overarching goal of identifying and supporting high-potential students in need.

Joining our team:

I am always on the lookout for exceptional, highly motivated students and visitors at all levels (Bachelor's, Master's, PhD, postdoctoral). If you are passionate about building the future of human-AI symbiosis, please fill out this form.

News:

Research Focus:

In the current era of AI, there is an urgent need to build AI systems that are responsible and ethical. Earning trust in AI, particularly by providing algorithmic recourse and building reliable human-machine collaborations, is among the central open research challenges: technological progress must be reconciled with societal acceptance. Our lab contributes to this effort by pursuing several goals:

1. Develop methods for algorithmic recourse that allow individuals to recover from or overturn unfavourable AI-driven decisions. This involves handling uncertainty in the data, addressing the needs of diverse stakeholders, and ensuring that recourse recommendations are actionable in the real world.

2. Employ causality-based methods to build safe, truthful, and ethical AI systems, boosting transparency and trust in AI decisions and in the explanations given for them. This strategy helps expose biases, encourages ethical practice, and earns public trust in AI.

3. Develop hybrid human-AI systems for domains such as healthcare, science, and education, fostering trustworthy, responsible, and human-centric AI solutions.

Our research takes an integrated approach that blends causal inference, explainable AI, and program synthesis: causal inference to manage uncertainty, explainable AI to provide transparency, and program synthesis to broaden how users interact with automated systems. Together, these tools allow us to tackle complex AI challenges while remaining robust to uncertainty and responsive to diverse stakeholder needs, keeping the work human-centric, which is crucial for adoption across sectors.
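To make the idea of algorithmic recourse concrete, here is a minimal, illustrative sketch in Python. It is not our lab's method; the linear credit model, its weights, the feature names, and the margin are all hypothetical, chosen purely for illustration. The sketch computes the smallest feature change that flips a denied decision.

import numpy as np

# Hypothetical linear credit model: approve when w . x + b >= 0.
# Weights, features, and values are made up for illustration only.
w = np.array([0.6, 0.3, -0.4])   # weights for [income, savings, debt]
b = -1.0
x = np.array([1.0, 0.5, 1.2])    # an applicant who is currently denied

def decision(features):
    return w @ features + b >= 0

assert not decision(x)  # unfavourable outcome: loan denied

# Minimum-norm change that just crosses the decision boundary
# (closed form for a linear model), with a small positive margin.
margin = 1e-3
delta = (margin - (w @ x + b)) / (w @ w) * w
x_cf = x + delta

print("suggested feature changes:", np.round(delta, 3))
print("decision after recourse:", decision(x_cf))  # True

Real-world recourse must go further: it has to respect causal relationships between features, uncertainty in the model and data, and which actions are actually feasible for the individual, which is exactly where the research questions above begin.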

Photo captions: Nasir-ol-molk Mosque, Shiraz, 2019; Alumni Gold Medal Award, UWaterloo, 2018; Paris, 2019.