Work Experience
- NAVER Cloud - Technical Lead, AI Researcher
- 2021.12 - Present @ Foundation Research Team
- Pre-training efficiency & Transformer architecture
- Multilinguality and multimodality of large-scale Transformers
- LG Energy Solution - Research Scientist
- 2020.11 - 2021.11 @ Business Strategy Division
- Nepes - Assistant Manager
- 2018.01 - 2020.10 @ Future Intelligence Division
- Alternative military service program as a researcher
Publications
*equal contribution
Lead author
- Jeonghoon Kim, Byeongchan Lee, Cheonbok Park, Yeontaek Oh, Beomjun Kim, Taehwan Yoo, Seongjin Shin, Dongyoon Han, Jinwoo Shin, Kang Min Yoo. “Peri-LN: Revisiting Normalization Layer in the Transformer Architecture.” ICML 2025 (Poster).
- Jung Hyun Lee, Jeonghoon Kim, June Yong Yang, Se Jung Kwon, Eunho Yang, Dongsoo Lee. “LRQ: Optimizing Post-Training Quantization for Large Language Models by Learning Low-Rank Weight-Scaling Matrices.” NAACL 2025 (Poster).
- Jung Hwan Heo, Jeonghoon Kim, Beomseok Kwon, Byeongwook Kim, Se Jung Kwon, Dongsoo Lee. “Rethinking Channel Dimensions to Isolate Outliers for Low-bit Weight Quantization of Large Language Models.” ICLR 2024 (Poster).
- Jeonghoon Kim, Jung Hyun Lee, Sungdong Kim, Joonsuk Park, Kang Min Yoo, Se Jung Kwon, Dongsoo Lee. “Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization.” NeurIPS 2023 (Poster).
- Jung Hyun Lee, Jeonghoon Kim, Se Jung Kwon, Dongsoo Lee. “FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization.” ICML 2023 (Poster).
- Cheonbok Park, Jeonghoon Kim, Joosung Lee, Sanghwan Bae, Jaegul Choo, Kang Min Yoo. “Cross-lingual Collapse: How Language-Centric Foundation Models Shape Reasoning in Large Language Models.” arXiv (under review).
Co-author
- Gunho Park, Baeseong Park, Minsub Kim, Sungjae Lee, Jeonghoon Kim, Beomseok Kwon, Se Jung Kwon, Byeongwook Kim, Youngjoo Lee, Dongsoo Lee. “LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models.” ICLR 2024 (Poster).
- Yulhwa Kim, Jaeyong Jang, Jehun Lee, Jihoon Park, Jeonghoon Kim, Byeongwook Kim, Baeseong Park, Se Jung Kwon, Dongsoo Lee, Jae-joon Kim. “Winning Both the Accuracy of Floating Point Activation and the Simplicity of Integer Arithmetic.” ICLR 2023 (Poster).
- Se Jung Kwon, Jeonghoon Kim, Jeongin Bae, Kang Min Yoo, Jin-Hwa Kim, Baeseong Park, Byeongwook Kim, Jung-Woo Ha, Nako Sung, Dongsoo Lee. “AlphaTuning: Quantization-Aware Parameter-Efficient Adaptation of Large-Scale Pre-Trained Language Models.” Findings of EMNLP 2022.
- HyperCLOVA X Team. “HyperCLOVA X Technical Report.” arXiv (Technical report).
- Hyunseok Lee, Jeonghoon Kim, Beomjun Kim, Jihoon Tack, Chansong Jo, Jaehong Lee, Cheonbok Park, Sookyo In, Jinwoo Shin, Kang Min Yoo. “ReGUIDE: Data Efficient GUI Grounding via Spatial Reasoning and Search.” arXiv (under review).
- HyperCLOVA X Team. “HyperCLOVA X THINK Technical Report.” arXiv (Technical report).
Reviewing
- International Conference on Learning Representations
- Neural Information Processing Systems
- Association for Computational Linguistics
Education
- Ph.D. student in Artificial Intelligence, Korea Advanced Institute of Science and Technology (KAIST), in progress
- M.S. in Control & Robotics Systems, Korea University, 2018