About

“Experience feeds us, opportunity grows us, relationships inspire us, and the world shapes us.”

I am a research scientist in Artificial Intelligence, focused on pushing the limits of efficiency—from efficiency-aware pre-training schemes and algorithmic sparsity to state-of-the-art model compression—so that AI can deliver measurable, positive impact for society. Guided by rigorous empirical methodology and an appetite for interdisciplinary collaboration, I aim to translate cutting-edge research into deployable systems that make the best use of every FLOP, byte, and human interaction.

In a life lived only once, I am committed to building a career I will look back on without regret—one defined by curiosity, integrity, and lasting contribution.


Work Experience

NAVER Cloud : Dec. 2021 - Present / Foundation Research, Technical Lead (Research Scientist)

  • HyperCLOVA X modeling & pre-training (Technical Report: 2024, 2025)
  • Pre-training efficiency & Transformer architectures
  • Multilinguality and multimodality of large-scale Transformers
  • Model compression & quantization
  • Alongside my full-time role at NAVER, I am pursuing a Ph.D. in AI at Korea Advanced Institute of Science and Technology (KAIST) under Prof. Jinwoo Shin, supported by a NAVER Cloud grant.

LG Energy Solution : Nov. 2020 - Nov. 2021 / Business Strategy, Data Scientist

Nepes : Jan. 2018 - Oct. 2020 / Future Intelligence, Research Scientist

  • Alternative military service program.

Recent Publications

*equal contribution

Lead author

  1. Exploiting Vocabulary Frequency Imbalance in Language Model Pre-training.
    Woojin Chung*, Jeonghoon Kim*.
    NeurIPS 2025.
  2. Peri-LN: Revisiting Normalization Layer in the Transformer Architecture.
    Jeonghoon Kim, Byeongchan Lee, Cheonbok Park, Yeontaek Oh, Beomjun Kim, Taehwan Yoo, Seongjin Shin, Dongyoon Han, Jinwoo Shin, Kang Min Yoo.
    ICML 2025.
  3. LRQ: Optimizing Post-Training Quantization for Large Language Models by Learning Low-Rank Weight-Scaling Matrices.
    Jung Hyun Lee*, Jeonghoon Kim*, June Yong Yang, Se Jung Kwon, Eunho Yang, Dongsoo Lee.
    NAACL 2025.
  4. Cross-lingual Collapse: How Language-Centric Foundation Models Shape Reasoning in Large Language Models.
    Cheonbok Park*, Jeonghoon Kim*, Joosung Lee, Sanghwan Bae, Jaegul Choo, Kang Min Yoo.
    arXiv (under review, 2025).
  5. Rethinking Channel Dimensions to Isolate Outliers for Low-bit Weight Quantization of Large Language Models.
    Jung Hwan Heo*, Jeonghoon Kim*, Beomseok Kwon, Byeongwook Kim, Se Jung Kwon, Dongsoo Lee.
    ICLR 2024.
  6. Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization.
    Jeonghoon Kim*, Jung Hyun Lee*, Sungdong Kim, Joonsuk Park, Kang Min Yoo, Se Jung Kwon, Dongsoo Lee.
    NeurIPS 2023.
  7. FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization.
    Jung Hyun Lee*, Jeonghoon Kim*, Se Jung Kwon, Dongsoo Lee.
    ICML 2023.

Co-author

  1. HyperCLOVA X THINK Technical Report.
    HyperCLOVA X Team.
    Technical report (arXiv, 2025).
  2. ReGUIDE: Data Efficient GUI Grounding via Spatial Reasoning and Search.
    Hyunseok Lee, Jeonghoon Kim, Beomjun Kim, Jihoon Tack, Chansong Jo, Jaehong Lee, Cheonbok Park, Sookyo In, Jinwoo Shin, Kang Min Yoo.
    arXiv (under review, 2025).
  3. LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models.
    Gunho Park, Baeseong Park, Minsub Kim, Sungjae Lee, Jeonghoon Kim, Beomseok Kwon, Se Jung Kwon, Byeongwook Kim, Youngjoo Lee, Dongsoo Lee.
    ICLR 2024.
  4. HyperCLOVA X Technical Report.
    HyperCLOVA X Team.
    Technical report (arXiv, 2024).
  5. Winning Both the Accuracy of Floating Point Activation and the Simplicity of Integer Arithmetic.
    Yulhwa Kim, Jaeyong Jang, Jehun Lee, Jihoon Park, Jeonghoon Kim, Byeongwook Kim, Baeseong Park, Se Jung Kwon, Dongsoo Lee, Jae-joon Kim.
    ICLR 2023.
  6. AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models.
    Se Jung Kwon, Jeonghoon Kim, Jeongin Bae, Kang Min Yoo, Jin-Hwa Kim, Baeseong Park, Byeongwook Kim, Jung-Woo Ha, Nako Sung, Dongsoo Lee.
    Findings of EMNLP 2022.

Reviewing

  • International Conference on Machine Learning
  • Neural Information Processing Systems
  • International Conference on Learning Representations
  • Association for Computational Linguistics