System and method for extracting and using prosody features. S Tiomkin. US Patent 20170103748A1, 2017. Cited by 79*.
A hybrid text-to-speech system that combines concatenative and statistical synthesis units. S Tiomkin, D Malah, S Shechtman, Z Kons. IEEE Transactions on Audio, Speech, and Language Processing 19 (5), 1278-1288, 2010. Cited by 72.
A unified Bellman equation for causal information and value in Markov decision processes. S Tiomkin, N Tishby. arXiv preprint arXiv:1703.01585, 2017. Cited by 38.
Ave: Assistance via empowerment. Y Du, S Tiomkin, E Kiciman, D Polani, P Abbeel, A Dragan. Advances in Neural Information Processing Systems 33, 4560-4571, 2020. Cited by 36.
Dynamics generalization via information bottleneck in deep reinforcement learning. X Lu, K Lee, P Abbeel, S Tiomkin. arXiv preprint arXiv:2008.00614, 2020. Cited by 24.
Control capacity of partially observable dynamic systems in continuous time. S Tiomkin, D Polani, N Tishby. arXiv preprint arXiv:1701.04984, 2017. Cited by 19.
Efficient empowerment estimation for unsupervised stabilization. R Zhao, K Lu, P Abbeel, S Tiomkin. International Conference on Learning Representations (ICLR), 2021. Cited by 13*.
Statistical text-to-speech synthesis based on segment-wise representation with a norm constraint. S Tiomkin, D Malah, S Shechtman. IEEE Transactions on Audio, Speech, and Language Processing 18 (5), 1077-1082, 2010. Cited by 10.
Past-future information bottleneck for linear feedback systems. N Amir, S Tiomkin, N Tishby. 2015 54th IEEE Conference on Decision and Control (CDC), 5737-5742, 2015. Cited by 8.
Predictive coding for boosting deep reinforcement learning with sparse rewards. X Lu, S Tiomkin, P Abbeel. arXiv preprint arXiv:1912.13414, 2019. Cited by 7.
Learning efficient representation for intrinsic motivation. R Zhao, S Tiomkin, P Abbeel. arXiv preprint arXiv:1912.02624, 2019. Cited by 6.
Preventing imitation learning with adversarial policy ensembles. A Zhan, S Tiomkin, P Abbeel. arXiv preprint arXiv:2002.01059, 2020. Cited by 5.
Utilizing prior solutions for reward shaping and composition in entropy-regularized reinforcement learning. J Adamczyk, A Arriojas, S Tiomkin, RV Kulkarni. Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI-23), 2022. Cited by 4.
Cognitive workload and vocabulary sparseness: theory and practice. RM Hecht, A Bar-Hillel, S Tiomkin, H Levi, O Tsimhoni, N Tishby. INTERSPEECH, 3394-3398, 2015. Cited by 4.
Statistical text-to-speech synthesis with improved dynamics. S Tiomkin, D Malah. INTERSPEECH, 1841-1844, 2008. Cited by 4.
A segment-wise hybrid approach for improved quality text-to-speech synthesis. S Tiomkin. Technion-Israel Institute of Technology, Faculty of Electrical Engineering, 2009. Cited by 3.
Bounding the optimal value function in compositional reinforcement learning. J Adamczyk, V Makarenko, A Arriojas, S Tiomkin, RV Kulkarni. Uncertainty in Artificial Intelligence, 22-32, 2023. Cited by 2.
Entropy regularized reinforcement learning using large deviation theory. A Arriojas, J Adamczyk, S Tiomkin, RV Kulkarni. Physical Review Research 5 (2), 023085, 2023. Cited by 2.
Compositionality and bounds for optimal value functions in reinforcement learning. J Adamczyk, S Tiomkin, RV Kulkarni. arXiv preprint arXiv:2302.09676, 2023. Cited by 1.
Multi-objective policy gradients with topological constraints. KH Wray, S Tiomkin, MJ Kochenderfer, P Abbeel. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022. Cited by 1.