Urja Khurana
PhD Student at Vrije Universiteit Amsterdam.
I am a PhD Student at the
Computational Linguistics and Text Mining Lab (CLTL)
at the Vrije Universiteit Amsterdam.
My project is a part of the Hybrid Intelligence Centre
under the Explainability pillar and is on the generalizability of NLP experiments.
I work on understanding what kind of information a language model has captured
to gain insights on how such a model will behave on unseen or out-of-distribution data.
My research interests lie at the intersection of NLP and machine learning:
error analysis, generalization, interpretability, and explainability.
As a use case for my project, I also work on hate speech detection, and I am committed to doing research ethically.
I am supervised by Antske Fokkens (VU),
Eric Nalisnick (UvA), and Ivar Vermeulen (VU).
Before my PhD, I completed my BSc and MSc in Artificial Intelligence at the University of Amsterdam, where I discovered my passion for Machine Learning, NLP, and Computer Vision, and for applying these techniques to positively impact society. I channeled this by mentoring and TA-ing second-year bachelor AI students and by working on several projects where I could apply the knowledge I had gained. My bachelor thesis was on the detection of fake news headlines and statements, and my master thesis was on explainable and generalized deepfake detection. More details can be found in my CV.
Mar 12, 2024 | Our paper on Crowd-Calibrator has been accepted to the first edition of COLM! This work is a result of my visit to the DILL Lab. |
Mar 12, 2024 | I attended the 1st ELLIS Winter School on Foundation Models! |
Jan 11, 2024 | I co-organized the second edition of the HumanCLAIM workshop, on the human perspective on cross-lingual AI models. |
Sep 20, 2023 | I spent my summer visiting the DILL Lab at the University of Southern California in sunny Los Angeles, working with Dr. Swabha Swayamdipta on incorporating human subjectivity for better-calibrated hate speech detection models. The project is still ongoing. |
Sep 11, 2023 | We participated in the 2023 Dialogue System Technology Challenge, co-located with INLG 2023. We used few-shot data augmentation and waterfall prompting for response generation in task-oriented conversational modelling with subjective knowledge. Our paper can be found here. |
Oct 17, 2022 | We won the 2022 Argument Mining shared task, co-located with COLING 2022, on argument quality prediction using LLMs (GPT-3, to be exact)! We leveraged different training paradigms with prompting; our paper can be found here. |
Jul 24, 2022 | I attended the 2022 LisbonxML Summer School! |
May 15, 2022 | I gave two guest lectures on Error Analysis, Bias, and Interpretability for the Natural Language Processing Technology course for first-year MSc AI students at the Vrije Universiteit Amsterdam. Slides will be uploaded soon! |
May 11, 2022 | Delighted to have been part of the paper Confidently Wrong: Exploring the Calibration and Expression of (Un)Certainty of Large Language Models in a Multilingual Setting, accepted at the Workshop on Multimodal, Multilingual Natural Language Generation, co-located with INLG 2023. |
May 11, 2022 | Our paper called Hate Speech Criteria: A Modular Approach to Task-Specific Hate Speech Definitions was accepted at WOAH 2022, co-located with NAACL 2022! |
Nov 11, 2021 | I presented my paper “How Emotionally Stable is ALBERT? Testing Robustness with Stochastic Weight Averaging on a Sentiment Analysis Task” at the Eval4NLP Workshop, co-located with EMNLP 2021! |
Dec 1, 2020 | I started my PhD at the VU! |