About me
I am an Applied Mathematics Ph.D. candidate at Harvard University SEAS, working with Prof. Flavio Calmon, and a student researcher at Google DeepMind on the Gemini Safety team. I use theoretical insights to develop safe and trustworthy AI and ML systems. My research is driven by the belief that AI and ML systems should be not only accurate and efficient but also transparent, fair, and aligned with human values and societal norms.
I firmly believe that theoretically guided methods can significantly outperform heuristics when designing safer AI systems. For this reason, my research focuses on answering questions such as: (i) “What is the optimal performance of a given method for designing safer AI?”, (ii) “How can we achieve this optimal performance?”, and (iii) “Can we relax the problem to achieve better performance beyond what is believed to be optimal?” My research is supported by the 2024 Apple Scholars in AI/ML Fellowship.
Previously, I interned at IBM Research at the T.J. Watson Research Center. Before joining Harvard, I earned an M.S. in Computational Mathematics and Modelling from the Instituto de Matemática Pura e Aplicada (IMPA), a beautiful mathematics institute in Tijuca National Park in Rio de Janeiro, Brazil. You can find my CV here.
Recent papers
April 2023 - Our paper was accepted at FAccT!
In this multidisciplinary paper, we show the prevalence of arbitrary decisions in LLMs trained for content moderation and that these arbitrary decisions disproportionately affect underrepresented communities. We then discuss the implications of this finding for (i) freedom of speech, (ii) procedural fairness, and (iii) discrimination.
April 2023 - Our paper was published at the IEEE Journal on Selected Areas in Information Theory.
We introduce CVaR fairness, a metric that allows ML practitioners to detect performance disparities across a large number of demographic groups (e.g., all combinations of race, sex, and nationality) with theoretical guarantees.
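To make the idea concrete, here is a minimal sketch of a CVaR-style group-fairness computation, assuming CVaR here denotes the conditional value-at-risk of per-group losses (the mean loss over the worst-off fraction of groups); the function name and toy error rates are illustrative, not taken from the paper.

```python
import numpy as np

def cvar_of_group_losses(group_losses, alpha=0.8):
    """Mean loss over the worst (1 - alpha) fraction of demographic groups.

    A high value signals that some groups suffer much larger losses
    than the population average, even if the overall mean looks fine.
    """
    losses = np.sort(np.asarray(group_losses, dtype=float))[::-1]  # worst first
    k = max(1, int(np.ceil((1 - alpha) * len(losses))))            # tail size
    return float(losses[:k].mean())

# Toy example: error rates for six intersectional groups.
group_errors = [0.05, 0.07, 0.06, 0.30, 0.08, 0.25]
print(cvar_of_group_losses(group_errors, alpha=0.8))  # mean of worst ~20% of groups
```

With these toy numbers the average error is about 0.135, but the CVaR at level 0.8 is 0.275, exposing the two badly served groups that the mean hides.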
April 2023 - Our paper was accepted at ISIT 2023.
The Rashomon effect is the phenomenon in which different models achieve similar performance but produce different predictions for certain input points. We show that the Rashomon effect is inevitable and provide a method for practitioners to select the Rashomon parameter as a function of the dataset size.
January 2023 - Our paper received the Innovative Applications of AI award from AAAI.
In this paper, we developed an ML solution that combines deep learning and conformal prediction to produce fast and accurate volume estimates and segmentation masks from fetal MRI. The proposed solution was deployed by the largest clinical diagnostics company in Latin America.
August 2022 - Our paper was accepted at NeurIPS 2022.
This paper aims to understand the conditions under which one can detect fair use violations in predictive models and, more interestingly, the conditions under which estimating fair use is impossible.
Recent announcements
March 2024 - I am happy to announce that I am joining Google DeepMind as a student researcher!
March 2024 - I am thrilled to announce that I was selected as an Apple Scholar!
May 2023 - I am happy to announce that I am joining IBM Research for the summer.
August 2023 - I received the ISIT Student Travel Grant.
August 2022 - I received the NeurIPS scholar award.
July 2022 - I received the Fundação Estudar Leadership Fellowship.
The Fellowship aims to support, bring together, and develop Brazil's most promising young leaders who can generate positive transformations in their sectors. I was one of 30 fellows selected out of more than 33,000 applicants (roughly 0.09%).