Welcome to my webpage

Hello! I am Adithya Bhaskar, a second-year Ph.D. student at Princeton University, advised by Prof. Danqi Chen. Prior to joining Princeton, I completed my B.Tech. in Computer Science at IIT Bombay, where I wrote my Bachelor’s Thesis under the supervision of Prof. Sunita Sarawagi. I am fortunate to have previously interned with Prof. Greg Durrett at UT Austin, where I was first exposed to Large Language Models. Before that, I interned at Uppsala University under Prof. Parosh Abdulla.

Research Interests

Two broad directions interest me most, stemming from two distinct points of view.

First, I am curious about the inner workings of (Large) Language Models. We can throw together a nice-looking loss function, a reasonable training loop, some compute, and lots of data, and voilà: a model starts generating near-fluent text. But what does it learn? Does it reverse-engineer the rules of grammar? In this context, I am interested in two counterposing approaches:

  • How can we best port human knowledge of Natural Language (e.g., linguistic structure, disambiguation of context, and so on) to a Language Model by modifying the model, the training process, and/or the data? More practically, can this lead us to better parameter and data efficiency?
  • Humans find it hard to learn languages without any visual cues or explanations, but it is easy (for a generous definition of easy) for LMs to do so. Do they know something we don’t? Can we reverse-engineer more efficient ways to think about Language from them? This more abstract question nonetheless excites me as much as the previous one.

Second, as LMs become more commonplace, their potential for both benefit and harm is bound to grow. We want them to be helpful, factual, and relevant, among other desiderata. I am interested in exploring how we can best steer models toward the behavior we want and away from undesirable and harmful behaviors (e.g., hallucinations).

More generally, NLP research is fascinating in its own right. Many of the current challenges (think ChatGPT hallucinations, lack of logical reasoning, and so on) are daunting, but by the same token quite thrilling. I believe that, going forward, principled approaches that generalize well are the ones most likely to power through them.

I am excited to see what comes next.

Updates

  • [06/24] I will present my Heuristic Core paper at ACL (Oral, Main). See you there!
  • [04/24] Gave an invited talk at Amazon AWS (Responsible AI team).
  • [04/24] Named a Hisashi and Masae Kobayashi *67 Fellow.
  • [08/23] Joined Princeton University!
  • [08/23] Graduated from IIT Bombay with Honors.
  • [08/23] Happy to have been awarded the Thomas A. Dooie Class of 1974 Research Award for my Bachelor’s Thesis!
  • [05/22] Excited to intern with Prof. Greg Durrett at UT Austin over the summer!