Hello! I am a CS Ph.D. student at the University of California, Irvine (UCI), supervised by Padhraic Smyth and Sameer Singh. My research focuses on the responsible use of generative language models (LMs) for natural language tasks.
Prior to joining UCI, I worked as a research data scientist in the Responsible AI (FATE) group at Feedzai, under the direction of Pedro Saleiro. There, my research focused mostly on the fairness, explainability, and reproducibility aspects of Machine Learning (ML) pipelines in the context of credit fraud.
I am currently interested in Responsible AI challenges for both tabular and textual datasets, with a strong emphasis on the practical implications for the different personas in the loop. I am also broadly interested in multi-objective optimization and model analysis/evaluation. My work is supported by a CS Department Excellence Fellowship and a Fulbright fellowship.
FairGBM: Gradient Boosting with Fairness Constraints
Tabular data is prevalent in many high-stakes domains, such as financial services or public policy. Gradient Boosted Decision Trees (GBDT) are popular in these settings due to their scalability, performance, and low training cost. While fairness in these domains is a foremost concern, existing in-processing Fair ML methods are either incompatible with GBDT, or incur significant performance losses while taking considerably longer to train. We present FairGBM, a dual ascent learning framework for training GBDT under fairness constraints, with little to no impact on predictive performance when compared to unconstrained GBDT. Since observational fairness metrics are non-differentiable, we propose smooth convex error rate proxies for common fairness criteria, enabling gradient-based optimization using a “proxy-Lagrangian” formulation. Our implementation shows an order of magnitude speedup in training time relative to related work, a pivotal aspect for fostering the widespread adoption of FairGBM by real-world practitioners.
@article{2023fairgbm,selected={true},bibtex_show={true},author={Cruz, Andr\'{e} and Bel{\'{e}}m, Catarina and Bravo, Joao and Saleiro, Pedro and Bizarro, Pedro},title={FairGBM: Gradient Boosting with Fairness Constraints},journal={ICLR 2023},year={2023}}
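For readers who want a concrete feel for the constrained-optimization idea above, here is a minimal, self-contained Python sketch of dual ascent on a plain logistic model rather than GBDT. The toy data, the simple smooth surrogate (mean predicted score on negatives, standing in for the paper's cross-entropy proxies), and all step sizes are illustrative assumptions and do not reproduce the FairGBM implementation.

# Illustrative dual-ascent sketch for a fairness-constrained classifier (not the
# paper's GBDT implementation). Primal: gradient step on the cross-entropy loss
# plus a multiplier-weighted smooth surrogate of the FPR gap between two groups.
# Dual: gradient ascent on the multiplier using the observed (thresholded) gap.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: features X, binary labels y, binary group membership g.
n, d = 2000, 5
X = rng.normal(size=(n, d))
g = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * g + 0.3 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_grad(p, y):
    return p - y  # gradient of binary cross-entropy w.r.t. the logit

w = np.zeros(d)
lam = 0.0           # Lagrange multiplier for the FPR-gap constraint
eps = 0.02          # allowed slack on the FPR gap
lr_w, lr_lam = 0.1, 0.5

for step in range(200):
    p = sigmoid(X @ w)
    neg = y == 0

    # Smooth surrogate for each group's FPR: mean predicted score on negatives.
    proxy_fpr = [p[neg & (g == k)].mean() for k in (0, 1)]
    sign = 1.0 if proxy_fpr[0] >= proxy_fpr[1] else -1.0

    # Primal step: cross-entropy gradient + multiplier-weighted surrogate gradient.
    grad = X.T @ bce_grad(p, y) / n
    for k, s in ((0, sign), (1, -sign)):
        mask = neg & (g == k)
        grad += lam * s * X[mask].T @ (p[mask] * (1 - p[mask])) / mask.sum()
    w -= lr_w * grad

    # Dual step: ascend the multiplier on the observed FPR gap violation.
    fpr = [((p > 0.5) & neg & (g == k)).sum() / (neg & (g == k)).sum() for k in (0, 1)]
    lam = max(0.0, lam + lr_lam * (abs(fpr[0] - fpr[1]) - eps))

print(f"final multiplier={lam:.3f}, FPR gap={abs(fpr[0] - fpr[1]):.3f}")

The alternation between the primal descent step and the dual ascent step is the essence of the scheme: the multiplier only grows while the constraint is violated, so an already-fair model is left largely untouched.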
Weakly Supervised Multi-task Learning for Concept-based Explainability
In ML-aided decision-making tasks, such as fraud detection or medical diagnosis, the human-in-the-loop, usually a domain expert without technical ML knowledge, prefers high-level concept-based explanations over low-level explanations based on model features. To obtain faithful concept-based explanations, we leverage multi-task learning to train a neural network that jointly learns to predict a decision task based on the predictions of a preceding explainability task (i.e., multi-label concepts). There are two main challenges to overcome: concept label scarcity and the joint learning of both tasks. To address both, we propose to: i) use expert rules to generate a large dataset of noisy concept labels, and ii) apply two distinct multi-task learning strategies combining noisy and golden labels. We compare these strategies with a fully supervised approach in a real-world fraud detection application with few golden labels available for the explainability task. With improvements of 9.26% and 417.8% on the explainability and decision tasks, respectively, our results show it is possible to improve performance on both tasks by combining labels of heterogeneous quality.
@article{2021weasul,selected={true},bibtex_show={true},author={Bel{\'{e}}m, Catarina and Balayan, Vladimir and Saleiro, Pedro and Bizarro, Pedro},title={Weakly Supervised Multi-task Learning for Concept-based Explainability},journal={ICLR 2021 - WeaSuL Workshop},year={2021}}
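As a rough illustration of the joint learning setup, the PyTorch sketch below wires a multi-label concept head into a decision head and down-weights noisy (rule-generated) concept labels relative to golden ones. The architecture, loss weights, and tensors are hypothetical stand-ins and are not the model or strategies evaluated in the paper.

# Minimal sketch (assumed architecture, not the paper's exact model): a shared
# encoder feeds a multi-label concept head; the decision head consumes only the
# predicted concepts, so the decision is explained through concepts. Noisy
# concept labels get a smaller loss weight than golden labels.
import torch
import torch.nn as nn

class ConceptBottleneckNet(nn.Module):
    def __init__(self, n_features: int, n_concepts: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.concept_head = nn.Linear(hidden, n_concepts)   # multi-label concepts
        self.decision_head = nn.Linear(n_concepts, 1)       # decision from concepts

    def forward(self, x):
        concept_logits = self.concept_head(self.encoder(x))
        decision_logit = self.decision_head(torch.sigmoid(concept_logits))
        return concept_logits, decision_logit

def joint_loss(concept_logits, decision_logit, concepts, decision, is_golden,
               noisy_weight: float = 0.3):
    bce = nn.functional.binary_cross_entropy_with_logits
    # Per-example weight: full weight for golden concept labels, reduced for noisy ones.
    w = is_golden.float() + (~is_golden).float() * noisy_weight
    concept_loss = (bce(concept_logits, concepts, reduction="none").mean(dim=1) * w).mean()
    decision_loss = bce(decision_logit.squeeze(1), decision)
    return decision_loss + concept_loss

# Toy usage with random tensors standing in for transaction features and labels.
net = ConceptBottleneckNet(n_features=10, n_concepts=6)
x = torch.randn(32, 10)
concepts = torch.randint(0, 2, (32, 6)).float()
decision = torch.randint(0, 2, (32,)).float()
is_golden = torch.rand(32) < 0.1            # only a small fraction is golden
loss = joint_loss(*net(x), concepts, decision, is_golden)
loss.backward()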
Promoting Fairness through Hyperparameter Optimization
Considerable research effort has been directed towards algorithmic fairness, but real-world adoption of bias reduction techniques is still scarce. Existing methods are either metric- or model-specific, require access to sensitive attributes at inference time, or carry high development and deployment costs. This work explores, in the context of a real-world fraud detection application, the unfairness that emerges from traditional ML model development, and how to mitigate it with a simple and easily deployed intervention: fairness-aware hyperparameter optimization (HO). We propose and evaluate fairness-aware variants of three popular HO algorithms: Fair Random Search, Fair TPE, and Fairband. Our method enables practitioners to adapt pre-existing business operations to accommodate fairness objectives in a frictionless way and with controllable fairness-accuracy trade-offs. Additionally, it can be coupled with existing bias reduction techniques to tune their hyperparameters. We validate our approach on a real-world bank account opening fraud use case, as well as on three datasets from the fairness literature. Results show that, without extra training cost, it is feasible to find models with a 111% average fairness increase and just a 6% decrease in predictive accuracy, when compared to standard fairness-blind HO.
@article{2021fairband,selected={true},bibtex_show={true},author={Cruz, Andr{\'{e}} F. and Saleiro, Pedro and Bel{\'{e}}m, Catarina and Soares, Carlos and Bizarro, Pedro},title={Promoting Fairness through Hyperparameter Optimization},journal={CoRR},volume={abs/2103.12715},year={2021},url={https://arxiv.org/abs/2103.12715},eprinttype={arXiv},eprint={2103.12715},timestamp={Thu, 14 Oct 2021 09:16:02 +0200},biburl={https://dblp.org/rec/journals/corr/abs-2103-12715.bib},bibsource={dblp computer science bibliography, https://dblp.org}}
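To make the idea of fairness-aware HO concrete, the sketch below runs a toy random search in which each hyperparameter configuration is scored by a convex combination of accuracy and a group-wise fairness ratio. The scalarization, the fairness metric, and the synthetic data are assumptions for illustration only; the paper's Fair Random Search, Fair TPE, and Fairband variants are not reproduced here.

# Illustrative fairness-aware random search (assumed scalarization, not the
# paper's selection criteria). Each trial samples hyperparameters, trains a
# model, and is scored on both accuracy and a group-wise fairness ratio.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy tabular data with a binary sensitive attribute s.
X = rng.normal(size=(3000, 8))
s = rng.integers(0, 2, size=3000)
y = ((X[:, 0] + 0.8 * s + rng.normal(scale=0.5, size=3000)) > 0).astype(int)
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(X, y, s, test_size=0.3, random_state=0)

def fairness_ratio(y_pred, s):
    """Ratio of group-wise positive prediction rates (1.0 means parity)."""
    rates = [y_pred[s == k].mean() for k in (0, 1)]
    return min(rates) / max(rates) if max(rates) > 0 else 1.0

alpha, best = 0.5, None                    # weight on predictive accuracy
for trial in range(20):
    params = {
        "n_estimators": int(rng.integers(50, 300)),
        "max_depth": int(rng.integers(2, 6)),
        "learning_rate": float(rng.uniform(0.01, 0.3)),
    }
    model = GradientBoostingClassifier(random_state=0, **params).fit(X_tr, y_tr)
    y_pred = model.predict(X_te)
    score = alpha * accuracy_score(y_te, y_pred) + (1 - alpha) * fairness_ratio(y_pred, s_te)
    if best is None or score > best[0]:
        best = (score, params)

print("best combined score:", round(best[0], 3), "with", best[1])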
How Can I Choose an Explainer? An Application-Grounded Evaluation of Post-Hoc Explanations
Several research works have proposed new Explainable AI (XAI) methods designed to generate model explanations with specific properties, or desiderata, such as fidelity, robustness, or human-interpretability. However, explanations are seldom evaluated based on their true practical impact on decision-making tasks. Without that assessment, explanations might be chosen that, in fact, hurt the overall performance of the combined system of ML model + end-users. This study aims to bridge this gap by proposing XAI Test, an application-grounded evaluation methodology tailored to isolate the impact of providing the end-user with different levels of information. We conducted an experiment following XAI Test to evaluate three popular XAI methods - LIME, SHAP, and TreeInterpreter - on a real-world fraud detection task, with real data, a deployed ML model, and fraud analysts. During the experiment, we gradually increased the information provided to the fraud analysts in three stages: Data Only, i.e., just transaction data without access to the model score or explanations; Data + ML Model Score; and Data + ML Model Score + Explanations. Using strong statistical analysis, we show that, in general, these popular explainers have a worse impact than desired. Some of the highlights of our conclusions are: i) Data Only results in the highest decision accuracy and the slowest decision time among all variants tested; ii) all the explainers improve accuracy over the Data + ML Model Score variant but still result in lower accuracy than Data Only; and iii) LIME was the least preferred by users, probably due to its substantially lower variability of explanations from case to case.
@inproceedings{10.1145/3442188.3445941,selected={true},bibtex_show={true},abbr={FAccT},author={Jesus, S\'{e}rgio and Bel\'{e}m, Catarina and Balayan, Vladimir and Bento, Jo\~{a}o and Saleiro, Pedro and Bizarro, Pedro and Gama, Jo\~{a}o},title={How Can I Choose an Explainer? An Application-Grounded Evaluation of Post-Hoc Explanations},year={2021},isbn={9781450383097},publisher={Association for Computing Machinery},address={New York, NY, USA},url={https://doi.org/10.1145/3442188.3445941},doi={10.1145/3442188.3445941},booktitle={Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency},pages={805–815},numpages={11},keywords={XAI, Evaluation, Explainability, LIME, SHAP, User Study},location={Virtual Event, Canada},series={FAccT '21}}
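As a toy illustration of the kind of stage-wise comparison the methodology calls for, the snippet below runs a chi-square test of independence on hypothetical analyst decision counts; the numbers are made up and are unrelated to the study's actual results.

# Toy stage-wise comparison (counts are invented, not the study's data): a
# chi-square test of independence checks whether analyst decision accuracy
# depends on the information stage.
from scipy.stats import chi2_contingency

# Rows: information stages. Columns: correct vs. incorrect analyst decisions.
contingency = [
    [420, 80],   # Data Only
    [370, 130],  # Data + ML Model Score
    [395, 105],  # Data + ML Model Score + Explanations
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, dof={dof}, p-value={p_value:.4f}")
for name, (ok, bad) in zip(["Data Only", "Data + Score", "Data + Score + Explanations"],
                           contingency):
    print(f"{name}: accuracy={ok / (ok + bad):.1%}")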