StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation Y Choi, M Choi, M Kim, JW Ha, S Kim, J Choo CVPR 2018, 2018 | 4374 | 2018 |
StarGAN v2: Diverse Image Synthesis for Multiple Domains Y Choi, Y Uh, J Yoo, JW Ha Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020 | 1741 | 2020 |
Hadamard product for low-rank bilinear pooling JH Kim, KW On, J Kim, JW Ha, BT Zhang ICLR 2017, 2017 | 839 | 2017 |
Dual attention networks for multimodal reasoning and matching H Nam, JW Ha, J Kim CVPR 2017, 2017 | 812 | 2017 |
Overcoming Catastrophic Forgetting by Incremental Moment Matching SW Lee, JW Kim, JH Jeon, JW Ha, BT Zhang NIPS 2017, 2017 | 674 | 2017 |
Phase-Aware Speech Enhancement with Deep Complex U-Net HS Choi, J Kim, J Huh, A Kim, JW Ha, K Lee ICLR 2019, 2019 | 381 | 2019 |
Multimodal Residual Learning for Visual QA JH Kim, SW Lee, D Kwak, MO Heo, J Kim, JW Ha, BT Zhang Advances in Neural Information Processing Systems, 361-369, 2016 | 374 | 2016 |
Photorealistic Style Transfer via Wavelet Transforms J Yoo, Y Uh, S Chun, B Kang, JW Ha arXiv preprint arXiv:1903.09760 (ICCV 2019), 2019 | 369 | 2019 |
Rainbow Memory: Continual Learning with a Memory of Diverse Samples J Bang, H Kim, YJ Yoo, JW Ha, J Choi arXiv preprint arXiv:2103.17230 (CVPR 2021), 2021 | 292 | 2021 |
KLUE: Korean Language Understanding Evaluation S Park, J Moon, S Kim, WI Cho, J Han, J Park, C Song, J Kim, Y Song, ... arXiv preprint arXiv:2105.09680 (NeurIPS 2021 Datasets and Benchmarks Track), 2021 | 244 | 2021 |
AdamP: Slowing down the weight norm increase in momentum-based optimizers B Heo, S Chun, SJ Oh, D Han, S Yun, Y Uh, JW Ha arXiv preprint arXiv:2006.08217 (ICLR 2021), 2021 | 183* | 2021 |
Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks S Yu, J Tack, S Mo, H Kim, J Kim, JW Ha, J Shin International Conference on Learning Representations (ICLR 2022), 2022 | 178 | 2022 |
DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder X Gu, K Cho, JW Ha, S Kim arXiv:1805.12352 (ICLR 2019), 2019 | 163 | 2019 |
NSML: Meet the MLaaS platform with a real-world case study H Kim, M Kim, D Seo, J Kim, H Park, S Park, H Jo, KH Kim, Y Yang, Y Kim, ... arXiv preprint arXiv:1810.09957, 2018 | 105 | 2018 |
Dataset Condensation via Efficient Synthetic-Data Parameterization JH Kim, J Kim, SJ Oh, S Yun, H Song, J Jeong, JW Ha, HO Song arXiv preprint arXiv:2205.14959 (ICML 2022), 2022 | 103 | 2022 |
What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers B Kim, HS Kim, SW Lee, G Lee, D Kwak, DH Jeon, S Park, S Kim, S Kim, ... arXiv preprint arXiv:2109.04650 (EMNLP 2021), 2021 | 100 | 2021 |
Reinforcement learning based recommender system using biclustering technique S Choi, H Ha, U Hwang, C Kim, JW Ha, S Yoon arXiv preprint arXiv:1801.05532, 2018 | 89 | 2018 |
Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs D Hwang, J Park, S Kwon, KM Kim, JW Ha, HJ Kim arXiv preprint arXiv:2007.08294 (NeurIPS 2020), 2020 | 79 | 2020 |
NSML: A Machine Learning Platform That Enables You to Focus on Your Models N Sung, M Kim, H Jo, Y Yang, J Kim, L Lausen, Y Kim, G Lee, D Kwak, ... arXiv preprint arXiv:1712.05902, 2017 | 79 | 2017 |
DialogBERT: Discourse-Aware Response Generation via Learning to Recover and Rank Utterances X Gu, KM Yoo, JW Ha arXiv preprint arXiv:2012.01775 (AAAI 2021), 2021 | 78 | 2021 |