Theses

Deconfound Models using Fewer Explanations

TL;DR: explanatory interactive learning leverages expensive human feedback; can we make do with less, or with cheaper, feedback?

Argue with Neuro-Symbolic Models

TL;DR: bugs are multilayered, so we need multilayered interaction – namely, interactive argumentation – to fix them all.

Reverse Skeptical Learning

TL;DR: help users remember a model's past mistakes; this might save them from trusting it too much.

Learning Human-interpretable Representations

  • Evaluating and extending learning by self-explaining. iml xai

  • Extending explanatory interactive learning with reinforcement learning from human feedback. iml xai cbms

  • Explanatory interactive learning with arguments. iml xai nesy

  • Reasoning shortcuts in large language models. nesy

Other Student Projects

Coming soon!