Seoyoung Ahn
Verified email at berkeley.edu - Homepage
Title
Cited by
Year
Predicting goal-directed human attention using inverse reinforcement learning
Z Yang, L Huang, Y Chen, Z Wei, S Ahn, G Zelinsky, D Samaras, M Hoai
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Cited by 92 · 2020
Benchmarking gaze prediction for categorical visual search
G Zelinsky, Z Yang, L Huang, Y Chen, S Ahn, Z Wei, H Adeli, D Samaras, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2019
Cited by 44 · 2019
Coco-search18 fixation dataset for predicting goal-directed attention control
Y Chen, Z Yang, S Ahn, D Samaras, M Hoai, G Zelinsky
Scientific reports 11 (1), 8776, 2021
Cited by 35 · 2021
Towards predicting reading comprehension from gaze behavior
S Ahn, C Kelton, A Balasubramanian, G Zelinsky
ACM Symposium on Eye Tracking Research and Applications, 1-5, 2020
Cited by 34 · 2020
Reading detection in real-time
C Kelton, Z Wei, S Ahn, A Balasubramanian, SR Das, D Samaras, ...
Proceedings of the 11th ACM Symposium on Eye Tracking Research …, 2019
Cited by 25 · 2019
Predicting goal-directed attention control using inverse-reinforcement learning
GJ Zelinsky, Y Chen, S Ahn, H Adeli, Z Yang, L Huang, D Samaras, ...
Neurons, behavior, data analysis and theory 2021, 2021
Cited by 19 · 2021
Gazeformer: Scalable, effective and fast prediction of goal-directed human attention
S Mondal, Z Yang, S Ahn, D Samaras, G Zelinsky, M Hoai
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 18 · 2023
Changing perspectives on goal-directed attention control: The past, present, and future of modeling fixations during visual search
GJ Zelinsky, Y Chen, S Ahn, H Adeli
Psychology of learning and motivation 73, 231-286, 2020
Cited by 17 · 2020
Target-absent human attention
Z Yang, S Mondal, S Ahn, G Zelinsky, M Hoai, D Samaras
European Conference on Computer Vision, 52-68, 2022
Cited by 13 · 2022
Use of superordinate labels yields more robust and human-like visual representations in convolutional neural networks
S Ahn, GJ Zelinsky, G Lupyan
Journal of Vision 21 (13), 13-13, 2021
Cited by 13 · 2021
Predicting visual attention in graphic design documents
S Chakraborty, Z Wei, C Kelton, S Ahn, A Balasubramanian, GJ Zelinsky, ...
IEEE Transactions on Multimedia 25, 4478-4493, 2022
Cited by 9 · 2022
Characterizing target-absent human attention
Y Chen, Z Yang, S Chakraborty, S Mondal, S Ahn, D Samaras, M Hoai, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 9 · 2022
A brain-inspired object-based attention network for multi-object recognition and visual reasoning
H Adeli, S Ahn, GJ Zelinsky
bioRxiv, 2022.04.02.486850, 2022
Cited by 6 · 2022
SP-EyeGAN: Generating Synthetic Eye Movement Data with Generative Adversarial Networks
P Prasse, DR Reich, S Makowski, S Ahn, T Scheffer, LA Jäger
Proceedings of the 2023 Symposium on Eye Tracking Research and Applications, 1-9, 2023
Cited by 4 · 2023
A brain-inspired object-based attention network for multiobject recognition and visual reasoning
H Adeli, S Ahn, GJ Zelinsky
Journal of Vision 23 (5), 16-16, 2023
Cited by 4 · 2023
Affinity-based attention in self-supervised transformers predicts dynamics of object grouping in humans
H Adeli, S Ahn, N Kriegeskorte, G Zelinsky
arXiv preprint arXiv:2306.00294, 2023
Cited by 3 · 2023
Recurrent attention models with object-centric capsule representation for multi-object recognition
H Adeli, S Ahn, G Zelinsky
arXiv preprint arXiv:2110.04954, 2021
Cited by 3 · 2021
Predicting human attention using computational attention
Z Yang, S Mondal, S Ahn, G Zelinsky, M Hoai, D Samaras
arXiv preprint arXiv:2303.09383, 2023
Cited by 2 · 2023
Reconstruction-guided attention improves the robustness and shape processing of neural networks
S Ahn, H Adeli, GJ Zelinsky
arXiv preprint arXiv:2209.13620, 2022
Cited by 1 · 2022
The attentive reconstruction of objects facilitates robust object recognition
S Ahn, H Adeli, GJ Zelinsky
PLOS Computational Biology 20 (6), e1012159, 2024
2024
Articles 1–20