About me

Hi! I am Lucas Monteiro Paes, an AI researcher and mathematician, not Lucas Paes or Lucas Monteiro. Both of them are good football players; I am just an okay one. :)

I am an Applied Mathematics Ph.D. candidate at Harvard University working with Prof. Flavio Calmon. Previously, I was a Student Researcher at Google DeepMind on the Gemini Safety Team and an AI Research Scientist Intern at IBM Research, based at the IBM T.J. Watson Research Center.

I use theoretical insights to develop safe and trustworthy AI and ML systems. My research is driven by the belief that AI and ML systems should not only be accurate and efficient but also transparent, fair, and aligned with human values and societal norms. My research is supported by the 2024 Apple Scholars in AI/ML Fellowship.

Before joining Harvard, I earned an M.S. in Computational Mathematics and Modeling from Instituto de Matemática Pura e Aplicada (IMPA), a beautiful mathematics institute in the Tijuca National Park in Rio de Janeiro, Brazil. You can find my CV here.

Recent papers

September 2024 - Our paper Selective Explanations was accepted at NeurIPS!
We introduce Selective Explanations, a method to generate fast and accurate explanations for the predictions of large models. Selective Explanations was developed with an eye toward explanations for generative language models, like the ones we proposed in MExGen.

September 2024 - Our paper Multi-Group Proportional Representation in Retrieval was accepted at NeurIPS!
We introduce Multi-Group Proportional Representation (MPR), a metric to measure intersectional representation biases. We also develop an efficient method to perform image retrieval while optimizing for MPR.

August 2024 - Our policy brief, AI Technologies: Algorithmic Monoculture, Arbitrariness, and Global Divides, will be presented at the G20 Meeting!
We discuss the impact of arbitrary predictions when a handful of models are used by the vast majority of the population, e.g., content moderation models in social networks and LLMs.

May 2024 - Our paper Multi-Group Fairness Evaluation via Conditional Value-at-Risk Testing was published in the IEEE Journal on Selected Areas in Information Theory.
We introduce CVaR fairness, a metric that allows ML practitioners to detect performance disparities across a large number of demographic groups (e.g., all combinations of race, sex, and nationality) with theoretical guarantees.

March 2024 - Our paper Algorithmic Arbitrariness in Content Moderation was accepted at FAccT!
In this multidisciplinary paper, we show the prevalence of arbitrary decisions in LLMs trained for content moderation and that these arbitrary decisions disproportionately affect underrepresented communities. Then, we discuss the implications of this finding for (i) freedom of speech, (ii) procedural fairness, and (iii) discrimination.

April 2023 - Our paper On the Inevitability of the Rashomon Effect was accepted at ISIT 2023.
The Rashomon effect is the phenomenon where different models achieve similar performance but provide different predictions for certain input points. We show that the Rashomon effect is inevitable and provide a method for practitioners to select the Rashomon parameter as a function of the dataset size.

January 2023 - Our paper AmnioML: Amniotic Fluid Segmentation and Volume Prediction With Uncertainty Quantification received the Innovative Applications of AI award from AAAI.
In this paper, we developed an ML solution that combines deep learning and conformal prediction to output fast and accurate volume estimates and segmentation masks from fetal MRI. The solution proposed in the paper was deployed by the largest clinical diagnostics company in Latin America.

August 2022 - Our paper On the Epistemic Limits of Personalized Prediction was accepted at NeurIPS 2022.
This paper aims to understand the conditions under which one can detect fair use violations in predictive models and, more interestingly, the conditions where estimating fair use is impossible.

Recent announcements

March 2024 - I am happy to announce that I am joining Google DeepMind as a student researcher!

March 2024 - I am thrilled to announce that I was selected as an Apple Scholar!

August 2023 - I received the ISIT Student Travel Grant.

May 2023 - I am happy to announce that I am joining IBM Research for the summer.

August 2022 - I received the NeurIPS scholar award.

July 2022 - I received the Fundação Estudar Leadership Fellowship.
The Fellowship aims to support, bring together, and develop Brazil’s most promising young leaders who can generate positive transformations in their fields. I was one of the 30 fellows selected out of 33k applicants (0.08% selected).