Doyoung Kim (김도영)

MS Student in AI
Kim Jaechul Graduate School of AI
KAIST
Seoul Campus Building 9209

doyoung@lklab.io
doyoung9803@gmail.com (permanent)
[CV] [Google Scholar] [GitHub] [X] [LinkedIn]

Doyoung Kim, 2020

Hi, I am an MS student studying LLMs. I am a member of the Language & Knowledge Lab at KAIST AI, advised by Minjoon Seo. Before studying LLMs, I completed my BS in Mathematics & Computer Science (double major) at KAIST.
My primary research objective is to develop an effective algorithm that enables AI to comprehend a world model. I hypothesize that such an algorithm might mirror the cognitive processes through which humans perceive, learn, and navigate their environment. My aim is to identify a processing and learning algorithm that allows AI to implicitly construct a robust world model and leverage it internally to explore its surroundings effectively and uncover novel insights not explicitly provided during training. As a starting point toward this goal, I am currently interested in the two areas below:
(1) Exploring a better alternative to current next-token prediction (NTP) supervision for language modeling
(2) Making LLMs general planners
I am currently looking for an open PhD position starting in Fall 2025. I am also looking for a visiting research or internship position starting in Fall 2024 or Spring 2025.
Feel free to contact me via email if you have any questions!


Latest News


Publications

Please see my Semantic Scholar or Google Scholar profiles for the full list.

* denotes equal contribution.

Preprint

  • Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards
    Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, Minjoon Seo

    [paper]
  • Beyond Next Token Prediction: Semiparametric Token Sequence Cosupervision
    Hyunji Lee*, Doyoung Kim*, Jihoon Jun, Sejune Joo, Joel Jang, Kyoung-Woon On, Minjoon Seo

    [paper]

2024

  • How Well Do Large Language Models Truly Ground?
    Hyunji Lee, Se June Joo, Chaeeun Kim, Joel Jang, Doyoung Kim, Kyoung-Woon On, Minjoon Seo
    NAACL 2024

    [paper]
  • FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets
    Seonghyeon Ye*, Doyoung Kim*, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo
    ICLR 2024 (spotlight)

    [paper]

2023

  • The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
    Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo
    EMNLP 2023
    [paper]

  • Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
    Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo
    EMNLP 2023 Findings
    [paper]

  • Exploring the Benefits of Training Expert Language Models over Instruction Tuning
    Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo
    ICML 2023
    [paper]

  • Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
    Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo
    ICLR 2023
    [paper]

Projects

* denotes equal contribution.

2023

  • SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation
    Seonghyeon Ye*, Yongrae Jo*, Doyoung Kim*, Sungdong Kim, Hyeonbin Hwang, Minjoon Seo
    [blog]