Sangmin Bae
Title
Cited by
Year
Preservation of the Global Knowledge by Not-True Distillation in Federated Learning
G Lee*, M Jeong*, Y Shin, S Bae, SY Yun
NeurIPS 2022 (arXiv preprint arXiv:2106.03097), 2021
Cited by 108 · 2021
Mixco: Mix-up contrastive learning for visual representation
S Kim*, G Lee*, S Bae*, SY Yun
NeurIPS Workshop 2020 (arXiv preprint arXiv:2010.06300), 2020
Cited by 68 · 2020
Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding
S Bae*, J Ko*, H Song, SY Yun
EMNLP 2023 Long (arXiv preprint arXiv:2310.05424), 2023
Cited by 21 · 2023
Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification
S Bae*, JW Kim*, WY Cho, H Baek, S Son, B Lee, C Ha, K Tae, S Kim, ...
INTERSPEECH 2023 (arXiv preprint arXiv:2305.14032), 2023
Cited by 10* · 2023
Accurate and fast federated learning via combinatorial multi-armed bandits
T Kim*, S Bae*, J Lee, S Yun
arXiv preprint arXiv:2012.03270, 2020
Cited by 9 · 2020
Re-thinking Federated Active Learning based on Inter-class Diversity
SM Kim*, S Bae*, H Song, SY Yun
CVPR 2023 (arXiv preprint arXiv:2303.12317), 2023
Cited by 8* · 2023
Self-Contrastive Learning: Single-viewed Supervised Contrastive Framework using Sub-network
S Bae*, S Kim*, J Ko, G Lee, S Noh, SY Yun
AAAI 2023 (arXiv preprint arXiv:2106.15499), 2021
Cited by 7 · 2021
Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning
S Kim*, S Bae*, SY Yun
CVPR 2023 (arXiv preprint arXiv:2303.11101), 2023
Cited by 5 · 2023
Adversarial Fine-tuning using Generated Respiratory Sound to Address Class Imbalance
JW Kim, C Yoon, M Toikkanen, S Bae, HY Jung
NeurIPS Workshop 2023 (arXiv preprint arXiv:2311.06480), 2023
Cited by 3 · 2023
Stethoscope-guided Supervised Contrastive Learning for Cross-domain Adaptation on Respiratory Sound Classification
JW Kim, S Bae, WY Cho, B Lee, HY Jung
ICASSP 2024 (arXiv preprint arXiv:2312.09603), 2023
Cited by 2 · 2023
Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models
Y Kim, J Yoon, S Ye, S Bae, N Ho, SJ Hwang, S Yun
NAACL 2024 (arXiv preprint arXiv:2311.08106), 2023
Cited by 2 · 2023
RepAugment: Input-Agnostic Representation-Level Augmentation for Respiratory Sound Classification
JW Kim, M Toikkanen, S Bae, M Kim, HY Jung
EMBC 2024 (arXiv preprint arXiv:2405.02996), 2024
Cited by 1 · 2024
Fine-Tuning the Retrieval Mechanism for Tabular Deep Learning
F Breejen, S Bae, S Cha, TY Kim, SH Koh, SY Yun
NeurIPS Workshop 2023 (arXiv preprint arXiv:2311.07343), 2023
Cited by 1* · 2023
SIPA: A simple framework for efficient networks
G Lee*, S Bae*, J Oh, SY Yun
ICDM Workshop 2020, 2020
Cited by 1 · 2020
Block Transformer: Global-to-Local Language Modeling for Fast Inference
N Ho*, S Bae*, T Kim, H Jo, Y Kim, T Schuster, A Fisch, J Thorne, SY Yun
arXiv preprint arXiv:2406.02657, 2024
2024
Why In-Context Learning Transformers are Tabular Data Classifiers
F Breejen, S Bae, S Cha, SY Yun
arXiv preprint arXiv:2405.13396, 2024
2024
Federated learning system for improved representation, federated learning method, and recording medium storing instructions to perform federated learning method
YUN Seyoung, KIM Seongyoon, W Chung, BAE SangMin
US Patent App. 18/472,393, 2024
2024
System, method, and computer-readable storage medium for federated learning of local model based on learning direction of global model
GH Lee, MC Jeong, SY Yun, SM Bae, JY Ahn, SY Kim, WJ Chung
US Patent App. 17/974,545, 2023
2023
Federated learning system for performing individual data customized federated learning, method for federated learning, and client apparatus for performing same
JH Oh, SM Kim, SY Yun, SM Bae, JW Shin, SY Kim, WJ Chung
US Patent App. 17/975,664, 2023
2023