Papers

We mainly target top-tier ML conferences, e.g., NeurIPS, ICML, and ICLR.
We also occasionally submit to domain-specific venues that involve ML, e.g., in vision, language, and speech.

2023

In Search of a Data Transformation that Accelerates Neural Field Training
Junwon Seo, Sangyoon Lee, and Jaeho Lee
NeurIPS 2023 Workshop: Attributing Model Behavior at Scale

Semi-Ensemble: A Simple Approach to Over-Parameterize Model Interpolation
Jiwoon Lee and Jaeho Lee
NeurIPS 2023 Workshop: Unifying Representations in Neural Models

Learning Large-scale Neural Fields via Context Pruned Meta-learning
Jihoon Tack, Subin Kim, Sihyun Yu, Jaeho Lee, Jinwoo Shin, and Jonathan R. Schwarz
NeurIPS 2023 (ICLR 2023 Workshop: Neural Fields across Fields)

Breaking the Spurious Causality of Conditional Generation via Fairness Intervention with Corrective Sampling
Junhyun Nam, Sangwoo Mo, Jaeho Lee, and Jinwoo Shin
TMLR 2023 (ICML 2023 Workshop: Spurious Correlations, Invariance, and Stability)

Modality-Agnostic Variational Compression of Implicit Neural Representations
Jonathan R. Schwarz, Jihoon Tack, Yee Whye Teh, Jaeho Lee, and Jinwoo Shin
ICML 2023 (ICLR 2023 Workshop: Neural Fields across Fields)

Bias-to-Text: Debiasing Unknown Visual Biases through Language Interpretation
Younghyun Kim, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jaeho Lee, and Jinwoo Shin
ICML 2023 Workshop: Spurious Correlations, Invariance, and Stability

On the Effectiveness of Sharpness-aware Minimization with Large Mini-batches
Jinseok Chung, Seonghwan Park, Jaeho Lee, and Namhoon Lee
ICML 2023 Workshop: High-Dimensional Learning Dynamics

MaskedKD: Efficient Distillation of Vision Transformers with Masked Images
Seungwoo Son, Namhoon Lee, and Jaeho Lee
ICLR 2023 Workshop: Sparsity in Neural Networks (IPIU 2023 Oral 🥉)

Prefix Tuning for Automated Audio Captioning
Minkyu Kim, Kim Sung-Bin, and Tae-Hyun Oh
ICASSP 2023 Oral

Communication-Efficient Split Learning via Adaptive Feature-wise Compression
Yongjeong Oh, Jaeho Lee, Christopher G. Brinton, and Yo-Seb Jeon
Under Review

Debiased Distillation by Transplanting the Last Layer
Jiwoon Lee and Jaeho Lee
arXiv preprint 2302.11187 (IPIU 2023)

2022

Scalable Neural Video Representations with Learnable Positional Features
Subin Kim, Sihyun Yu, Jaeho Lee, and Jinwoo Shin
NeurIPS 2022

Meta-learning with Self-improving Momentum Targets
Jihoon Tack, Jongjin Park, Hankook Lee, Jaeho Lee, and Jinwoo Shin
NeurIPS 2022

Spread Spurious Attribute: Improving Worst-Group Accuracy with Spurious Attribute Estimation
Junhyun Nam, Jaehyung Kim, Jaeho Lee, and Jinwoo Shin
ICLR 2022

Zero-shot Blind Image Denoising via Implicit Neural Representations
Chaewon Kim, Jaeho Lee, and Jinwoo Shin
arXiv preprint 2204.02405

2021

Meta-learning Sparse Implicit Neural Representations
Jaeho Lee, Jihoon Tack, Namhoon Lee, and Jinwoo Shin
NeurIPS 2021

Co2L: Contrastive Continual Learning
Hyuntak Cha, Jaeho Lee, and Jinwoo Shin
ICCV 2021

Provable Memorization via Deep Neural Networks using Sub-linear Parameters
Sejun Park, Jaeho Lee, Chulhee Yun, and Jinwoo Shin
COLT 2021 (DeepMath 2020 Oral)

Minimum Width for Universal Approximation
Sejun Park, Chulhee Yun, Jaeho Lee, and Jinwoo Shin
ICLR 2021 Spotlight (DeepMath 2020 Oral)

Layer-adaptive Sparsity for the Magnitude-based Pruning
Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, and Jinwoo Shin
ICLR 2021

MASKER: Masked Keyword Regularization for Reliable Text Classification
Seung Jun Moon, Sangwoo Mo, Kimin Lee, Jaeho Lee, and Jinwoo Shin
AAAI 2021

Greedyprune: Layer-wise Optimization Algorithms for Magnitude-based Pruning
Vinoth Nandakumar and Jaeho Lee
Sparse Neural Network Workshop 2021

2020

Learning Bounds for Risk-sensitive Learning
Jaeho Lee, Sejun Park, and Jinwoo Shin
NeurIPS 2020

Learning from Failure: Training Debiased Classifier from Biased Classifier
Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, and Jinwoo Shin
NeurIPS 2020

Lookahead: A Far-sighted Alternative of Magnitude-based Pruning
Sejun Park, Jaeho Lee, Sangwoo Mo, and Jinwoo Shin
ICLR 2020

Pre-2020

Learning Finite-dimensional Coding Schemes with Nonlinear Reconstruction Maps
Jaeho Lee and Maxim Raginsky
SIMODS 2019

Minimax Statistical Learning with Wasserstein Distances
Jaeho Lee and Maxim Raginsky
NeurIPS 2018 Spotlight

On MMSE Estimation from Quantized Observations in the Nonasymptotic Regime
Jaeho Lee, Maxim Raginsky, and Pierre Moulin
ISIT 2015

Domestic Posters

An Empirical Study on the Bias of Generative Image Compression
Hagyeong Lee and Jaeho Lee
IPIU 2023

Is Sparse Identification Model Sufficiently Biased?
Junwon Seo and Jaeho Lee
IPIU 2023