Urja Khurana

I am a Postdoc at TU Delft, working with Pradeep Murukannaiah (TU Delft) and Antske Fokkens (Vrije Universiteit Amsterdam). Before that, I was a PhD student at the Computational Linguistics and Text Mining Lab (CLTL) at the Vrije Universiteit Amsterdam, advised by Antske Fokkens (Vrije Universiteit Amsterdam) and Eric Nalisnick (Johns Hopkins University).
I am passionate about understanding what knowledge a language model has captured and whether it can reliably apply that knowledge across diverse (unseen) contexts and perspectives: what will its impact be in the real world? I am particularly interested in analyzing and developing tools that support responsible deployment, with a strong focus on robustness in safety-critical applications such as hate speech detection.
My work includes evaluating how model capabilities generalize to unseen data, for instance through the lens of model averaging and stability/consistency. I have also proposed a method to calibrate language models according to human subjectivity. Collaborating in an interdisciplinary setting with a social scientist and a legal expert, I characterized the subjective aspects of hate speech and their impact on real-world deployment. Building on these insights, I developed a framework to evaluate whether a hate speech detection model’s behavior aligns with the type of hate speech it is intended to address.
I earned my BSc and MSc degrees in Artificial Intelligence from the University of Amsterdam.
For a brief overview of my research interests, see the research page.
Note: if you have recently reached out to me at my VU e-mail address and did not get a reply, it is most likely because my account has been closed; e-mails sent to it do not bounce back. Please reach out to me again at my TU e-mail address.
selected publications.
- Crowd-Calibrator: Can Annotator Disagreement Inform Calibration in Subjective Tasks? In First Conference on Language Modeling, 2024