Tian Xu
Verified email at lamda.nju.edu.cn - Homepage
Title | Cited by | Year
Error bounds of imitating policies and environments
T Xu, Z Li, Y Yu
Advances in Neural Information Processing Systems 33, 15737-15749, 2020
98 | 2020
A survey on model-based reinforcement learning
FM Luo, T Xu, H Lai, XH Chen, W Zhang, Y Yu
Science China Information Sciences 67 (2), 121101, 2024
96* | 2024
Error bounds of imitating policies and environments for reinforcement learning
T Xu, Z Li, Y Yu
IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (10), 6968 …, 2021
35 | 2021
ReMax: A simple, effective, and efficient reinforcement learning method for aligning large language models
Z Li, T Xu, Y Zhang, Z Lin, Y Yu, R Sun, ZQ Luo
Forty-first International Conference on Machine Learning, 2023
16* | 2023
Rethinking ValueDice: Does it really improve performance?
Z Li, T Xu, Y Yu, ZQ Luo
arXiv preprint arXiv:2202.02468, 2022
14 | 2022
Policy optimization in rlhf: The impact of out-of-preference data
Z Li, T Xu, Y Yu
arXiv preprint arXiv:2312.10584, 2023
7 | 2023
Provably efficient adversarial imitation learning with unknown transitions
T Xu, Z Li, Y Yu, ZQ Luo
Uncertainty in Artificial Intelligence, 2367-2378, 2023
7 | 2023
Understanding adversarial imitation learning in small sample regime: A stage-coupled analysis
T Xu, Z Li, Y Yu, ZQ Luo
arXiv preprint arXiv:2208.01899, 2022
6 | 2022
Imitation learning from imperfection: Theoretical justifications and algorithms
Z Li, T Xu, Z Qin, Y Yu, ZQ Luo
Advances in Neural Information Processing Systems 36, 2024
5 | 2024
Reward-consistent dynamics models are strongly generalizable for offline reinforcement learning
FM Luo, T Xu, X Cao, Y Yu
arXiv preprint arXiv:2310.05422, 2023
5 | 2023
On generalization of adversarial imitation learning and beyond
T Xu, Z Li, Y Yu, ZQ Luo
arXiv preprint arXiv:2106.10424, 2021
5 | 2021
Model gradient: unified model and policy learning in model-based reinforcement learning
C Jia, F Zhang, T Xu, JC Pang, Z Zhang, Y Yu
Frontiers of Computer Science 18 (4), 184339, 2024
3 | 2024
Policy Rehearsing: Training Generalizable Policies for Reinforcement Learning
C Jia, C Gao, H Yin, F Zhang, XH Chen, T Xu, L Yuan, Z Zhang, ZH Zhou, ...
The Twelfth International Conference on Learning Representations, 2024
2 | 2024
Theoretical analysis of offline imitation with supplementary dataset
Z Li, T Xu, Y Yu, ZQ Luo
arXiv preprint arXiv:2301.11687, 2023
2 | 2023
Nearly Minimax Optimal Adversarial Imitation Learning with Known and Unknown Transitions
T Xu, Z Li, Y Yu
CoRR abs/2106.10424, 2021
2 | 2021
Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning
FM Luo, T Xu, X Cao, Y Yu
arXiv preprint arXiv:2310.05422, 2023
1 | 2023
A Note on Target Q-learning For Solving Finite MDPs with A Generative Oracle
Z Li, T Xu, Y Yu
arXiv preprint arXiv:2203.11489, 2022
1 | 2022
Sparsity prior regularized Q-learning for sparse action tasks
JC Pang, T Xu, SY Jiang, YR Liu, Y Yu
arXiv preprint arXiv:2105.08666, 2021
1 | 2021
Offline Imitation Learning without Auxiliary High-quality Behavior Data
JJ Shao, HS Shi, T Xu, LZ Guo, Y Yu, YF Li
1
Entropic Distribution Matching in Supervised Fine-tuning of LLMs: Less Overfitting and Better Diversity
Z Li, C Chen, T Xu, Z Qin, J Xiao, R Sun, ZQ Luo
arXiv preprint arXiv:2408.16673, 2024
— | 2024
Articles 1–20