Arushi Jain
I am a final-year PhD student in Computer Science (AI), advised by Prof. Doina Precup at McGill University and Mila, Montreal.
Recently, I interned at Microsoft Research (MSR) in Amsterdam with Elise Van Der Pol, where I worked on molecular drug discovery. Prior to that,
I interned at Meta AI Lab (FAIR) in Paris with Alessandro Lazaric, where I worked on a self-supervised framework for integrating exploration and representation learning in a reward-free, zero-shot setting.
Research Summary: My research focuses on reinforcement learning (RL), with an emphasis on improving the reliability, sample efficiency, and safety of RL systems for real-world applications. I am also interested in the personalization potential of LLMs, aiming to develop more robust AI frameworks.
Email / CV / Google Scholar / Twitter / Github
Timeline
Fall 2024
Microsoft Research, Amsterdam
AI4Science Research Intern
[Host: Elise Van Der Pol]
Fall 2022
Meta AI Lab, Paris
Research Scientist Intern
[Host: Alessandro Lazaric]
Summer 2022
Amazon
AI Research Intern
2019 - Now
McGill University
Ph.D. in Computer Science
Fall 2019
SportLogiQ, Montreal
RL Research Intern
[Host: Norm Ferns]
Summer 2018
Borealis AI, Edmonton
Research Scientist Intern
[Host: Nidhi Hegde]
2017 - 2019
McGill University
Master's in Computer Science
2016 - 2017
Microsoft Research India
ML Research Fellow
2012 - 2016
IIIT-Delhi
Bachelor’s Degree in Computer Science
GVFExplorer: Adaptive Exploration for Data-Efficient General Value Function Evaluations
Arushi Jain, Josiah P. Hanna, Doina Precup
NeurIPS, 2024
paper
Towards Painless Policy Optimization for Constrained MDPs
Arushi Jain, Sharan Vaswani, Reza Babanezhad, Csaba Szepesvari, Doina Precup
UAI and RLDM, 2022
paper / short RLDM paper / code / RLDM poster
Variance Penalized On-Policy and Off-Policy Actor-Critic
Arushi Jain, Gandharv Patil, Ayush Jain, Khimya Khetarpal, Doina Precup
AAAI, 2021
paper / code / talk / slides / poster
Safe Option-Critic: Learning Safety in the Option-Critic Architecture
Arushi Jain*, Khimya Khetarpal*, Doina Precup
Knowledge Engineering Review (KER), Cambridge University Press, 2021.
Adaptive Learning Agents (ALA) Workshop, ICML, 2018.
paper / code / slides / poster
Safety using Constraint Variance in Policy-Gradient Methods
Arushi Jain
Master's thesis, McGill University, March 2020.
paper
Safe Actor-Critic
Arushi Jain*, Ayush Jain, Doina Precup
Safety, Risk and Uncertainty in RL Workshop, UAI, 2018.
Women in ML (WiML) Workshop, NeurIPS, 2018.
paper / slides / poster
Safe Hierarchical Policy Optimization using Constrained Return Variance in Options
Arushi Jain*, Doina Precup
RLDM, 2019.
paper
Learning Options using Constrained Return Variance
Arushi Jain*, Doina Precup
Safety and Robustness in Decision Making Workshop, NeurIPS, 2019.
paper / poster
Safe Policy Learning with Constrained Return Variance
Arushi Jain*
Graduate Student AI Symposium, Canadian AI Conference, 2019.
Proceedings published in the LNAI series by Springer.
paper / talk