StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. Y Choi, M Choi, M Kim, JW Ha, S Kim, J Choo. CVPR 2018. Cited by 4752.
StarGAN v2: Diverse Image Synthesis for Multiple Domains. Y Choi, Y Uh, J Yoo, JW Ha. CVPR 2020. Cited by 1996.
Hadamard Product for Low-rank Bilinear Pooling. JH Kim, KW On, J Kim, JW Ha, BT Zhang. ICLR 2017. Cited by 886.
Dual Attention Networks for Multimodal Reasoning and Matching. H Nam, JW Ha, J Kim. CVPR 2017. Cited by 847.
Overcoming Catastrophic Forgetting by Incremental Moment Matching. SW Lee, JW Kim, JH Jeon, JW Ha, BT Zhang. NIPS 2017. Cited by 743.
Phase-Aware Speech Enhancement with Deep Complex U-Net. HS Choi, J Kim, J Huh, A Kim, JW Ha, K Lee. ICLR 2019. Cited by 425.
Photorealistic Style Transfer via Wavelet Transforms. J Yoo, Y Uh, S Chun, B Kang, JW Ha. arXiv:1903.09760 (ICCV 2019). Cited by 405.
Multimodal Residual Learning for Visual QA. JH Kim, SW Lee, D Kwak, MO Heo, J Kim, JW Ha, BT Zhang. Advances in Neural Information Processing Systems (NIPS 2016), 361-369. Cited by 387.
Rainbow Memory: Continual Learning with a Memory of Diverse Samples. J Bang, H Kim, YJ Yoo, JW Ha, J Choi. arXiv:2103.17230 (CVPR 2021). Cited by 367.
KLUE: Korean Language Understanding Evaluation. S Park, J Moon, S Kim, WI Cho, J Han, J Park, C Song, J Kim, Y Song, ... arXiv:2105.09680 (NeurIPS 2021 Datasets and Benchmarks Track). Cited by 309.
Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks. S Yu, J Tack, S Mo, H Kim, J Kim, JW Ha, J Shin. ICLR 2022. Cited by 206.
AdamP: Slowing Down the Weight Norm Increase in Momentum-based Optimizers. B Heo, S Chun, SJ Oh, D Han, S Yun, Y Uh, JW Ha. arXiv:2006.08217 (ICLR 2021). Cited by 204*.
DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder. X Gu, K Cho, JW Ha, S Kim. arXiv:1805.12352 (ICLR 2019). Cited by 166.
Dataset Condensation via Efficient Synthetic-Data Parameterization. JH Kim, J Kim, SJ Oh, S Yun, H Song, J Jeong, JW Ha, HO Song. arXiv:2205.14959 (ICML 2022). Cited by 159.
What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers. B Kim, HS Kim, SW Lee, G Lee, D Kwak, DH Jeon, S Park, S Kim, S Kim, ... arXiv:2109.04650 (EMNLP 2021). Cited by 118.
NSML: Meet the MLaaS Platform with a Real-World Case Study. H Kim, M Kim, D Seo, J Kim, H Park, S Park, H Jo, KH Kim, Y Yang, Y Kim, ... arXiv:1810.09957, 2018. Cited by 111.
Reinforcement Learning Based Recommender System Using Biclustering Technique. S Choi, H Ha, U Hwang, C Kim, JW Ha, S Yoon. arXiv:1801.05532, 2018. Cited by 93.
Dense Text-to-Image Generation with Attention Modulation. Y Kim, J Lee, JH Kim, JW Ha, JY Zhu. arXiv:2308.12964 (ICCV 2023). Cited by 88.
Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs. D Hwang, J Park, S Kwon, KM Kim, JW Ha, HJ Kim. arXiv:2007.08294 (NeurIPS 2020). Cited by 88*.
NSML: A Machine Learning Platform That Enables You to Focus on Your Models. N Sung, M Kim, H Jo, Y Yang, J Kim, L Lausen, Y Kim, G Lee, D Kwak, ... arXiv:1712.05902, 2017. Cited by 85.