JungMin Yun
Verified email at cau.ac.kr
Title · Cited by · Year
Focus on the core: Efficient attention via pruned token compression for document classification
J Yun, M Kim, Y Kim
arXiv preprint arXiv:2406.01283, 2024
Cited by 6 · 2024
Domain-adaptive vision transformers for generalizing across visual domains
Y Cho, J Yun, J Kwon, Y Kim
IEEE Access, 2023
Cited by 3 · 2023
Multi-News+: Cost-efficient Dataset Cleansing via LLM-based Data Annotation
J Choi, J Yun, K Jin, YB Kim
arXiv preprint arXiv:2404.09682, 2024
Cited by 2 · 2024
VolDoGer: LLM-assisted Datasets for Domain Generalization in Vision-Language Tasks
J Choi, J Kwon, JM Yun, S Yu, YB Kim
arXiv preprint arXiv:2407.19795, 2024
2024
UniGen: Universal Domain Generalization for Sentiment Classification via Zero-shot Dataset Generation
J Choi, Y Kim, S Yu, JM Yun, YB Kim
arXiv preprint arXiv:2405.01022, 2024
2024
Articles 1–5