
4 resources

  • Juan D. Pinto, Luc Paquette | May 22nd, 2024 | preprint

    The challenge of creating interpretable models has been taken up by two main research communities: ML researchers primarily focused on lower-level explainability methods that suit the needs of engineers, and HCI researchers who have more heavily emphasized user-centered approaches often based on participatory design methods. This paper reviews how these communities have evaluated interpretability, identifying overlaps and semantic misalignments. We propose moving towards a unified framework...

  • Yiqiu Zhou, Maciej Pankiewicz, Luc Paquette | journalArticle

    This study examines how Large Language Model (LLM) feedback generated for compiler errors impacts learners’ persistence in programming tasks within a system for automated assessment of programming assignments. Persistence, the ability to maintain effort in the face of challenges, is crucial for academic success but can sometimes lead to unproductive "wheel spinning" when students struggle without progress. We investigated how additional LLM feedback based on the GPT-4 model, provided for...

Last update from database: 15/12/2025, 22:15 (UTC)
Powered by Zotero and Kerko.