Past Graduate Students #
Seungwoo Son #
M.S. @ POSTECH EE (22.03–24.08): “The role of masking for efficient supervised knowledge distillation of vision transformers”
Expertise. Model compression of large transformers via quantization and distillation
Next step. Samsung Research
linkedin, mail
Hagyeong Lee #
M.S. @ POSTECH EE (22.09–24.08): “Neural image compression with text-guided encoding for both pixel-level and perceptual fidelity”
Expertise. Multimodal data compression
Next step. (TBD)
webpage, mail, twitter
Jiwoon Lee #
M.S. @ POSTECH EE (22.03–24.02): “Semi-ensemble: A simple approach to over-parameterized model interpolation”
Expertise. Model merging, federated learning, and knowledge distillation
Next step. FASOO
mail
Junwon Seo #
M.S. @ POSTECH EE (22.03–24.02): “In search of a data transformation that accelerates neural field training”
Expertise. Fast training of neural fields
Next step. LivsMed (technical research personnel)
mail
Past Interns #
Seoyun Jeong (Summer ‘24)
Collaborative Decoding with Compressed Models
Yewon Hwang (Summer ‘24)
Quantizing Diffusion Models
Sangbeom Ha (Summer ‘23–Spring ‘24)
Large-Scale Model Quantization
Inkwan Hwang (Fall ‘23–Summer ‘24; now at 🫡)
Large-Scale Model Pruning
webpage
Taesun Yeom (Winter ‘23–Spring ‘24; now at EffL)
Training and Inference Efficiency for Neural Fields
Minhee Lee (Winter ‘23)
Speculative Decoding
Jegwang Ryu (Summer ‘23, Spring ‘24; now at EffL)
Accelerated Training by Masking
Seunghyun Kim (Spring ‘24; now at EffL)
Efficient RAG LLM
Wonjun Cho (Spring ‘24)
Model Compression
Subeom Heo (Spring–Summer ‘24)
Accelerating Video Diffusion Models
Jeonghyun Choi (Winter ‘23)
Properties of Data Augmentation
Minjae Park (Winter ‘23; now at EffL)
Faster State-Space Models
Minyoung Kang (Fall–Winter ‘23)
Neural Cellular Automata
Yousung Roh (Fall–Winter ‘23)
Byte-Processing Neural Networks
Jiyun Bae (Summer–Fall ‘23; now at EffL)
Visual Prompt Tuning
Sangyoon Lee (Summer–Fall ‘23; now at EffL)
Fast Neural Field Generation
Dohyun Kim (Summer ‘23; now at 🫡)
Zeroth Order Optimization
Juyun Wee (Spring ‘23 → EffL)
Time-Series Modeling with Transformers
Soochang Song (Winter ‘22–Spring ‘23; now exchange student at 🇫🇷)
Model Interpolation with SIRENs
Jeonghun Cho (Winter ‘22)
Pruning Models under Challenging Scenarios
Seyeon Park (Winter ‘21 → Yonsei)
Efficient Attentions for Language Models
Hagyeong Lee (Winter ‘21 → EffL)
Data Compression with Implicit Neural Representations