Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. S Arora, S Du, W Hu, Z Li, R Wang. International Conference on Machine Learning, 322-332, 2019. Cited by 810.
On exact computation with an infinitely wide neural net. S Arora, SS Du, W Hu, Z Li, RR Salakhutdinov, R Wang. Advances in Neural Information Processing Systems 32, 2019. Cited by 737.
Graph neural tangent kernel: Fusing graph neural networks with graph kernels. SS Du, K Hou, RR Salakhutdinov, B Poczos, R Wang, K Xu. Advances in Neural Information Processing Systems 32, 2019. Cited by 201.
Is a good representation sufficient for sample efficient reinforcement learning? SS Du, SM Kakade, R Wang, LF Yang. arXiv preprint arXiv:1910.03016, 2019. Cited by 181.
Reinforcement learning with general value function approximation: Provably efficient approach via bounded eluder dimension. R Wang, RR Salakhutdinov, L Yang. Advances in Neural Information Processing Systems 33, 6123-6135, 2020. Cited by 165*.
Harnessing the power of infinitely wide deep nets on small-data tasks. S Arora, SS Du, Z Li, R Salakhutdinov, R Wang, D Yu. arXiv preprint arXiv:1910.01663, 2019. Cited by 147.
Bilinear classes: A structural framework for provable generalization in RL. S Du, S Kakade, J Lee, S Lovett, G Mahajan, W Sun, R Wang. International Conference on Machine Learning, 2826-2836, 2021. Cited by 141.
Optimism in reinforcement learning with generalized linear function approximation. Y Wang, R Wang, SS Du, A Krishnamurthy. arXiv preprint arXiv:1912.04136, 2019. Cited by 126.
What are the statistical limits of offline RL with linear function approximation? R Wang, DP Foster, SM Kakade. arXiv preprint arXiv:2010.11895, 2020. Cited by 121.
Enhanced convolutional neural tangent kernels. Z Li, R Wang, D Yu, SS Du, W Hu, R Salakhutdinov, S Arora. arXiv preprint arXiv:1911.00809, 2019. Cited by 104.
On reward-free reinforcement learning with linear function approximation. R Wang, SS Du, L Yang, RR Salakhutdinov. Advances in Neural Information Processing Systems 33, 17816-17826, 2020. Cited by 87.
Provably efficient Q-learning with function approximation via distribution shift error checking oracle. SS Du, Y Luo, R Wang, H Zhang. Advances in Neural Information Processing Systems 32, 2019. Cited by 86.
Is long horizon RL more difficult than short horizon RL? R Wang, SS Du, L Yang, S Kakade. Advances in Neural Information Processing Systems 33, 9075-9085, 2020. Cited by 60*.
Agnostic Q-learning with Function Approximation in Deterministic Systems: Near-Optimal Bounds on Approximation Error and Sample Complexity. SS Du, JD Lee, G Mahajan, R Wang. Advances in Neural Information Processing Systems 33, 22327-22337, 2020. Cited by 53*.
Nearly optimal sampling algorithms for combinatorial pure exploration. L Chen, A Gupta, J Li, M Qiao, R Wang. Conference on Learning Theory, 482-534, 2017. Cited by 49.
Exponential separations in the energy complexity of leader election. YJ Chang, T Kopelowitz, S Pettie, R Wang, W Zhan. ACM Transactions on Algorithms (TALG) 15 (4), 1-31, 2019. Cited by 46.
k-regret minimizing set: Efficient algorithms and hardness. W Cao, J Li, H Wang, K Wang, R Wang, R Chi-Wing Wong, W Zhan. 20th International Conference on Database Theory (ICDT 2017), 2017. Cited by 38.
An exponential lower bound for linearly realizable MDP with constant suboptimality gap. Y Wang, R Wang, S Kakade. Advances in Neural Information Processing Systems 34, 9521-9533, 2021. Cited by 36.
Dimensionality reduction for Tukey regression. K Clarkson, R Wang, D Woodruff. International Conference on Machine Learning, 1262-1271, 2019. Cited by 34.
Tight Bounds for ℓ1 Oblivious Subspace Embeddings. R Wang, DP Woodruff. ACM Transactions on Algorithms (TALG) 18 (1), 1-32, 2022. Cited by 31.