Work Experience

Publications

*equal contribution

Lead author

  1. Jeonghoon Kim, Byeongchan Lee, Cheonbok Park, Yeontaek Oh, Beomjun Kim, Taehwan Yoo, Seongjin Shin, Dongyoon Han, Jinwoo Shin, Kang Min Yoo. “Peri-LN: Revisiting Layer Normalization in the Transformer Architecture.” ICML 2025 (Poster).
  2. Jung Hyun Lee, Jeonghoon Kim, June Yong Yang, Se Jung Kwon, Eunho Yang, Dongsoo Lee. “LRQ: Optimizing Post-Training Quantization for Large Language Models by Learning Low-Rank Weight-Scaling Matrices.” NAACL 2025 (Poster).
  3. Jung Hwan Heo, Jeonghoon Kim, Beomseok Kwon, Byeongwook Kim, Se Jung Kwon, Dongsoo Lee. “Rethinking Channel Dimensions to Isolate Outliers for Low-bit Weight Quantization of Large Language Models.” ICLR 2024 (Poster).
  4. Jeonghoon Kim, Jung Hyun Lee, Sungdong Kim, Joonsuk Park, Kang Min Yoo, Se Jung Kwon, Dongsoo Lee. “Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization.” NeurIPS 2023 (Poster).
  5. Jung Hyun Lee, Jeonghoon Kim, Se Jung Kwon, Dongsoo Lee. “FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization.” ICML 2023 (Poster).
  6. Cheonbok Park, Jeonghoon Kim, Joosung Lee, Sanghwan Bae, Jaegul Choo, Kang Min Yoo. “Cross-lingual Collapse: How Language-Centric Foundation Models Shape Reasoning in Large Language Models.” arXiv (under review).

Co-author

  1. Gunho Park, Baeseong Park, Minsub Kim, Sungjae Lee, Jeonghoon Kim, Beomseok Kwon, Se Jung Kwon, Byeongwook Kim, Youngjoo Lee, Dongsoo Lee. “LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models.” ICLR 2024 (Poster).
  2. Yulhwa Kim, Jaeyong Jang, Jehun Lee, Jihoon Park, Jeonghoon Kim, Byeongwook Kim, Baeseong Park, Se Jung Kwon, Dongsoo Lee, Jae-joon Kim. “Winning Both the Accuracy of Floating Point Activation and the Simplicity of Integer Arithmetic.” ICLR 2023 (Poster).
  3. Se Jung Kwon, Jeonghoon Kim, Jeongin Bae, Kang Min Yoo, Jin-Hwa Kim, Baeseong Park, Byeongwook Kim, Jung-Woo Ha, Nako Sung, Dongsoo Lee. “AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models.” Findings of EMNLP 2022.
  4. HyperCLOVA X Team. “HyperCLOVA X Technical Report.” arXiv (technical report).
  5. Hyunseok Lee, Jeonghoon Kim, Beomjun Kim, Jihoon Tack, Chansong Jo, Jaehong Lee, Cheonbok Park, Sookyo In, Jinwoo Shin, Kang Min Yoo. “ReGUIDE: Data Efficient GUI Grounding via Spatial Reasoning and Search.” arXiv (under review).
  6. HyperCLOVA X Team. “HyperCLOVA X THINK Technical Report.” arXiv (technical report).

Reviewing

Education