Meng Luo
Title
Cited by
Year
A Survey on Benchmarks of Multimodal Large Language Models
J Li, W Lu, H Fei, M Luo, M Dai, M Xia, Y Jin, Z Gan, D Qi, C Fu, Y Tai, ...
arXiv preprint arXiv:2408.08632, 2024
Cited by 14 · 2024
Panosent: A panoptic sextuple extraction benchmark for multimodal conversational aspect-based sentiment analysis
M Luo, H Fei, B Li, S Wu, Q Liu, S Poria, E Cambria, ML Lee, W Hsu
ACM MM 2024 (oral), 7667-7676, 2024
Cited by 7 · 2024
Towards class-balanced privacy preserving heterogeneous model aggregation
X Pang, Z Wang, Z He, P Sun, M Luo, J Ren, K Ren
IEEE TDSC, 2022
Cited by 6 · 2022
NUS-Emo at SemEval-2024 Task 3: Instruction-Tuning LLM for Multimodal Emotion-Cause Analysis in Conversations
M Luo, H Zhang, S Wu, B Li, H Han, H Fei
SemEval-2024, 2024
Cited by 5 · 2024
Effi-Code: Unleashing Code Efficiency in Language Models
D Huang, G Zeng, J Dai, M Luo, H Weng, Y Qing, H Cui, Z Guo, J Zhang
arXiv preprint arXiv:2410.10209v1, 2024
Cited by 2 · 2024
PAD: Personalized Alignment at Decoding-Time
R Chen, X Zhang, M Luo, W Chai, Z Liu
ICLR 2025, 2024
Cited by 2 · 2024
Aristotle: Mastering Logical Reasoning with A Logic-Complete Decompose-Search-Resolve Framework
J Xu, H Fei, M Luo, Q Liu, L Pan, WY Wang, P Nakov, ML Lee, W Hsu
arXiv preprint arXiv:2412.16953, 2024
2024
Fine-grained Structural Hallucination Detection for Unified Visual Comprehension and Generation in Multimodal LLM
H Fei, M Luo, J Xu, S Wu, W Ji, ML Lee, W Hsu
ACM MM 2024 MIS Workshop, 2024
2024
Towards Multimodal Empathetic Response Generation: A Rich Text-Speech-Vision Avatar-based Benchmark
H Zhang, Z Meng, M Luo, H Han, L Liao, E Cambria, H Fei
WWW 2025