Experiences

  • 2024.5-8: Research Intern @
  • 2023.5-8: Research Intern @
  • 2022.5-8: Research Intern @
  • 2021-2025: Founding Engineer @

Dong-Ho Lee (이동호)

I completed my PhD in computer science at USC and USC/ISI, where I was fortunate to be advised by Jay Pujara and to collaborate closely with Xiang Ren. My thesis committee included Xiang Ren, Robin Jia, Fred Morstatter, and Meisam Razaviyayn.

During my PhD, I interned at Google DeepMind and Google Play (2024, with Adam Kraft, Long Jin, and Xinyang Yi), and at industry research labs in 2023 (with Francesco Barbieri) and 2022 (with Sujay Jauhar). I was also a founding AI engineer, working closely with Sungjoon Park and Jihyung Moon. I have served as an area chair and reviewer for conferences in NLP (ARR, ACL, EMNLP, NAACL, EACL, COLM, LREC), ML (ICML, ICLR, NeurIPS), and IR (KDD, WWW, SDM).

Research

My PhD thesis, Improving Language Models Through Context (slides), explores how language models can strategically leverage various forms of context to enhance their reasoning and adaptability. This includes: (1) explanations [ACL 2020, ACL 2020 Demo, EACL 2023, ACL 2023 Demo], (2) in-context examples [ACL 2022, EMNLP 2023, EMNLP 2023], and (3) dialogue context [EMNLP 2023]. I demonstrated that systematically integrating these forms of context leads to significant improvements in inference, training efficiency, and self-refinement.

As LLMs increasingly serve as interactive agents that must better understand users, context has become fundamental to how they interpret intent, adapt to goals, and respond appropriately in real-world scenarios.

Building on this work, I’m particularly interested in developing personalized and socially intelligent LLMs that leverage contextual signals (e.g., long-term user behavior, interaction history, and user intent) to deliver adaptive, goal-aligned, and socially appropriate responses in real-world applications.

My research focuses on contextual personalization, outcome-based reinforcement learning, and dynamic model behaviors that optimize user interaction and social outcomes.

News

  • [2025-05-15] I will serve as an Area Chair at ARR May 2025 (EMNLP 2025).
  • [2025-04-28] Successfully defended my PhD dissertation!
  • [2025-02-18] Released REALTALK, an AI agent memory benchmark based on long-term, real-world conversations.
  • [2025-02-15] I will serve as an Area Chair at ARR February 2025 (ACL 2025).
  • [2024-10-21] Released STAR, a training-free LLM-based recommender system achieving +20% improvement over the supervised TIGER baseline.
  • [2024-05-27] Started internship at Google DeepMind and Google Play.
  • [2024-02-15] I passed my qualifying exam and officially became a PhD Candidate.
  • [2023-12-07] I will serve as an Area Chair at ARR December 2023 (NAACL 2024).
  • [2023-10-30] I will serve as an Area Chair at ARR October 2023 (EACL 2024).
  • [2023-10-07] Three first-authored papers (Paper 1, Paper 2, Paper 3) have been accepted to EMNLP 2023!
  • [2023-05-08] XMD, an explanation-based model debugging framework, has been accepted to ACL 2023 Demo!
  • [2023-01-21] One first-authored paper (AutoTriggER) has been accepted to EACL 2023!
  • [2022-08-24] Invited talk on Explanation-based Learning at POSTECH.
  • [2022-02-24] Two first-authored papers (Paper 1, Paper 2) have been accepted to ACL 2022!

Talks

  • Explanation-based Learning [link], 2022, Invited Talk @ POSTECH
  • Explanation-based Learning [link], 2022, Invited Talk @ Microsoft