Papers

We mainly target top-tier ML conferences and other domain-specific venues that involve ML.


2024

Decoding with Limited Teacher Supervision Requires Understanding When to Trust the Teacher
Hyunjong Ok, Jegwang Ryu, and Jaeho Lee
EMNLP 2024

Prefixing Attention Sinks can Mitigate Activation Outliers for Large Language Model Quantization
Seungwoo Son, Wonpyo Park, Woohyun Han, Kyuyeun Kim, and Jaeho Lee
EMNLP 2024

Rethinking Pruning Large Language Models: Benefits and Pitfalls of Reconstruction Error Minimization
Sungbin Shin, Wonpyo Park, Jaeho Lee, and Namhoon Lee
EMNLP 2024 (Short)

The Role of Masking for Efficient Supervised Knowledge Distillation of Vision Transformers
Seungwoo Son, Jegwang Ryu, Namhoon Lee, and Jaeho Lee
ECCV 2024 (ICLR 2023 Workshop: Sparsity in Neural Networks, IPIU 2023 Oral 🥉)
project page, code

Neural Image Compression with Text-guided Encoding for both Pixel-level and Perceptual Fidelity
Hagyeong Lee*, Minkyu Kim*, Jun-Hyuk Kim, Seungeon Kim, Dokwan Oh, and Jaeho Lee (*: co-first authors)
ICML 2024
project page, code

Hybrid Neural Representations for Spherical Data
Hyomin Kim, Yunhui Jang, Jaeho Lee, and Sungsoo Ahn
ICML 2024

In Search of a Data Transformation that Accelerates Neural Field Training
Junwon Seo*, Sangyoon Lee*, Kwang In Kim, and Jaeho Lee (*: co-first authors)
CVPR 2024 Oral (top 0.78%) (NeurIPS 2023 Workshop: Attributing Model Behavior at Scale)
code, demo

Discovering and Mitigating Visual Biases through Keyword Explanation
Younghyun Kim, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jaeho Lee, and Jinwoo Shin
CVPR 2024 Highlight (top 2.8%) (ICML 2023 Workshop: Spurious Correlations, Invariance, and Stability)
code

SCANNER: Knowledge-Enhanced Approach for Robust Multi-modal Named Entity Recognition of Unseen Entities
Hyunjong Ok, Taeho Kil, Sukmin Seo, and Jaeho Lee
NAACL 2024

Few-shot Unlearning
Youngsik Yoon, Jinhwan Nam, Hyojeong Yun, Jaeho Lee, Dongwoo Kim, and Jungseul Ok
IEEE S&P 2024

Attention-aware Semantic Communications for Collaborative Inference
Jiwoong Im, Nayoung Kwon, Taewoo Park, Jiheon Woo, Jaeho Lee, and Yongjune Kim
IEEE Internet of Things Journal (also at IEEE Communication Theory Workshop 2024)

Towards Federated Low-Rank Adaptation with Rank-Heterogeneous Communication
Yuji Byun and Jaeho Lee
NeurIPS 2024 Workshop: Adaptive Foundation Models

AudioBERT: Audio Knowledge Augmented Language Model
Hyunjong Ok, Suho Yoo, and Jaeho Lee
arXiv 2409.08199
code, dataset

Constructing a Singing Style Captioning Dataset
Hyunjong Ok and Jaeho Lee
arXiv 2409.09866
dataset

Fast Training of Sinusoidal Neural Fields via Scaling Initialization
Taesun Yeom, Sangyoon Lee, and Jaeho Lee
arXiv 2410.04779

2023

Learning Large-scale Neural Fields via Context Pruned Meta-learning
Jihoon Tack, Subin Kim, Sihyun Yu, Jaeho Lee, Jinwoo Shin, and Jonathan R. Schwarz
NeurIPS 2023 (ICLR 2023 Workshop: Neural Fields across Fields)
code

Modality-Agnostic Variational Compression of Implicit Neural Representations
Jonathan R. Schwarz, Jihoon Tack, Yee Whye Teh, Jaeho Lee, and Jinwoo Shin
ICML 2023 (ICLR 2023 Workshop: Neural Fields across Fields)

Breaking the Spurious Causality of Conditional Generation via Fairness Intervention with Corrective Sampling
Junhyun Nam, Sangwoo Mo, Jaeho Lee, and Jinwoo Shin
TMLR 2023 (ICML 2023 Workshop: Spurious Correlations, Invariance, and Stability)

Semi-Ensemble: A Simple Approach to Over-Parameterize Model Interpolation
Jiwoon Lee and Jaeho Lee
NeurIPS 2023 Workshop: Unifying Representations in Neural Models

On the Effectiveness of Sharpness-aware Minimization with Large Mini-batches
Jinseok Chung, Seonghwan Park, Jaeho Lee, and Namhoon Lee
ICML 2023 Workshop: High-Dimensional Learning Dynamics

Communication-Efficient Split Learning via Adaptive Feature-wise Compression
Yongjeong Oh, Jaeho Lee, Christopher G. Brinton, and Yo-Seb Jeon
arXiv 2307.10805

Debiased Distillation by Transplanting the Last Layer
Jiwoon Lee and Jaeho Lee
arXiv 2302.11187 (IPIU 2023)

2022

Scalable Neural Video Representations with Learnable Positional Features
Subin Kim, Sihyun Yu, Jaeho Lee, and Jinwoo Shin
NeurIPS 2022
project page

Meta-learning with Self-improving Momentum Targets
Jihoon Tack, Jongjin Park, Hankook Lee, Jaeho Lee, and Jinwoo Shin
NeurIPS 2022

Spread Spurious Attribute: Improving Worst-Group Accuracy with Spurious Attribute Estimation
Junhyun Nam, Jaehyung Kim, Jaeho Lee, and Jinwoo Shin
ICLR 2022

Zero-shot Blind Image Denoising via Implicit Neural Representations
Chaewon Kim, Jaeho Lee, and Jinwoo Shin
arXiv 2204.02405

2021

Meta-learning Sparse Implicit Neural Representations
Jaeho Lee, Jihoon Tack, Namhoon Lee, and Jinwoo Shin
NeurIPS 2021

Co2L: Contrastive Continual Learning
Hyuntak Cha, Jaeho Lee, and Jinwoo Shin
ICCV 2021

Provable Memorization via Deep Neural Networks using Sub-linear Parameters
Sejun Park, Jaeho Lee, Chulhee Yun, and Jinwoo Shin
COLT 2021 (DeepMath 2020 Oral)

Minimum Width for Universal Approximation
Sejun Park, Chulhee Yun, Jaeho Lee, and Jinwoo Shin
ICLR 2021 Spotlight (DeepMath 2020 Oral)

Layer-adaptive Sparsity for the Magnitude-based Pruning
Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, and Jinwoo Shin
ICLR 2021

MASKER: Masked Keyword Regularization for Reliable Text Classification
Seung Jun Moon, Sangwoo Mo, Kimin Lee, Jaeho Lee, and Jinwoo Shin
AAAI 2021

Greedyprune: Layer-wise Optimization Algorithms for Magnitude-based Pruning
Vinoth Nandakumar and Jaeho Lee
Sparsity in Neural Networks Workshop 2021

2020

Learning Bounds for Risk-sensitive Learning
Jaeho Lee, Sejun Park, and Jinwoo Shin
NeurIPS 2020

Learning from Failure: Training Debiased Classifier from Biased Classifier
Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin
NeurIPS 2020

Lookahead: A Far-sighted Alternative of Magnitude-based Pruning
Sejun Park, Jaeho Lee, Sangwoo Mo, and Jinwoo Shin
ICLR 2020

Pre-2020

Learning Finite-dimensional Coding Schemes with Nonlinear Reconstruction Maps
Jaeho Lee and Maxim Raginsky
SIMODS 2019

Minimax Statistical Learning with Wasserstein Distances
Jaeho Lee and Maxim Raginsky
NeurIPS 2018 Spotlight

On MMSE Estimation from Quantized Observations in the Nonasymptotic Regime
Jaeho Lee, Maxim Raginsky, and Pierre Moulin
ISIT 2015

Domestic Posters 🐯

An Empirical Study on the Bias of Generative Image Compression

Hagyeong Lee and Jaeho Lee
IPIU 2023

Is Sparse Identification Model Sufficiently Biased?

Junwon Seo and Jaeho Lee
IPIU 2023