Jung-Woo Ha
Research Fellow@NAVER AI Lab, Head of Future AI Center@NAVER, Adj. Prof. @HKUST
Verified email at navercorp.com - Homepage
Title · Cited by · Year
StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
Y Choi, M Choi, M Kim, JW Ha, S Kim, J Choo
CVPR 2018, 2018
Cited by 4374 · 2018
StarGAN v2: Diverse Image Synthesis for Multiple Domains
Y Choi, Y Uh, J Yoo, JW Ha
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Cited by 1741 · 2020
Hadamard product for low-rank bilinear pooling
JH Kim, KW On, J Kim, JW Ha, BT Zhang
ICLR 2017, 2017
Cited by 839 · 2017
Dual attention networks for multimodal reasoning and matching
H Nam, JW Ha, J Kim
CVPR 2017, 2017
Cited by 812 · 2017
Overcoming Catastrophic Forgetting by Incremental Moment Matching
SW Lee, JW Kim, JH Jeon, JW Ha, BT Zhang
NIPS 2017, 2017
Cited by 674 · 2017
Phase-Aware Speech Enhancement with Deep Complex U-Net
HS Choi, J Kim, J Huh, A Kim, JW Ha, K Lee
ICLR 2019, 2019
Cited by 381 · 2019
Multimodal residual learning for visual qa
JH Kim, SW Lee, D Kwak, MO Heo, J Kim, JW Ha, BT Zhang
Advances in Neural Information Processing Systems, 361-369, 2016
Cited by 374 · 2016
Photorealistic Style Transfer via Wavelet Transforms
J Yoo, Y Uh, S Chun, B Kang, JW Ha
arXiv preprint arXiv:1903.09760 (ICCV 2019), 2019
Cited by 369 · 2019
Rainbow Memory: Continual Learning with a Memory of Diverse Samples
J Bang, H Kim, YJ Yoo, JW Ha, J Choi
arXiv preprint arXiv:2103.17230 (CVPR 2021), 2021
Cited by 292 · 2021
KLUE: Korean Language Understanding Evaluation
S Park, J Moon, S Kim, WI Cho, J Han, J Park, C Song, J Kim, Y Song, ...
arXiv preprint arXiv:2105.09680 (NeurIPS 2021 Datasets and Benchmarks Track), 2021
Cited by 244 · 2021
AdamP: Slowing down the weight norm increase in momentum-based optimizers
B Heo, S Chun, SJ Oh, D Han, S Yun, Y Uh, JW Ha
arXiv preprint arXiv:2006.08217 (ICLR 2021), 2021
Cited by 183* · 2021
Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks
S Yu, J Tack, S Mo, H Kim, J Kim, JW Ha, J Shin
International Conference on Learning Representations (ICLR 2022), 2022
Cited by 178 · 2022
DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder
X Gu, K Cho, JW Ha, S Kim
arXiv preprint arXiv:1805.12352 (ICLR 2019), 2019
Cited by 163 · 2019
NSML: Meet the MLaaS platform with a real-world case study
H Kim, M Kim, D Seo, J Kim, H Park, S Park, H Jo, KH Kim, Y Yang, Y Kim, ...
arXiv preprint arXiv:1810.09957, 2018
Cited by 105 · 2018
Dataset Condensation via Efficient Synthetic-Data Parameterization
JH Kim, J Kim, SJ Oh, S Yun, H Song, J Jeong, JW Ha, HO Song
arXiv preprint arXiv:2205.14959 (ICML 2022), 2022
Cited by 103 · 2022
What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
B Kim, HS Kim, SW Lee, G Lee, D Kwak, DH Jeon, S Park, S Kim, S Kim, ...
arXiv preprint arXiv:2109.04650 (EMNLP 2021), 2021
Cited by 100 · 2021
Reinforcement learning based recommender system using biclustering technique
S Choi, H Ha, U Hwang, C Kim, JW Ha, S Yoon
arXiv preprint arXiv:1801.05532, 2018
Cited by 89 · 2018
Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs
D Hwang, J Park, S Kwon, KM Kim, JW Ha, HJ Kim
arXiv preprint arXiv:2007.08294 (NeurIPS 2020), 2020
Cited by 79 · 2020
NSML: A Machine Learning Platform That Enables You to Focus on Your Models
N Sung, M Kim, H Jo, Y Yang, J Kim, L Lausen, Y Kim, G Lee, D Kwak, ...
arXiv preprint arXiv:1712.05902, 2017
Cited by 79 · 2017
DialogBERT: Discourse-Aware Response Generation via Learning to Recover and Rank Utterances
X Gu, KM Yoo, JW Ha
arXiv preprint arXiv:2012.01775 (AAAI 2021), 2021
Cited by 78 · 2021
Articles 1–20