Leonid Schwenke
Title
Cited by
Year
Show me what you’re looking for: visualizing abstracted transformer attention for enhancing their local interpretability on time series data
L Schwenke, M Atzmueller
The International FLAIRS Conference Proceedings 34, 2021
Cited by 18, 2021
Constructing global coherence representations: Identifying interpretability and coherences of transformer attention in time series data
L Schwenke, M Atzmueller
2021 IEEE 8th International Conference on Data Science and Advanced …, 2021
Cited by 9, 2021
Abstracting Local Transformer Attention for Enhancing Interpretability on Time Series Data.
L Schwenke, M Atzmueller
LWDA, 205-218, 2021
Cited by 6, 2021
Monitoring Android devices by using events and metadata
M Schölzel, E Eren, KO Detken, L Schwenke
International Journal of Computing 15 (4), 248-258, 2016
Cited by 6, 2016
Identifying Informative Nodes in Attributed Spatial Sensor Networks Using Attention for Symbolic Abstraction in a GNN-based Modeling Approach
L Schwenke, S Bloemheuvel, M Atzmueller
The International FLAIRS Conference Proceedings 36, 2023
Cited by 4, 2023
Using brain activity patterns to differentiate real and virtual attended targets during augmented reality scenarios
LM Vortmann, L Schwenke, F Putze
Information 12 (6), 226, 2021
Cited by 3, 2021
Making time series embeddings more interpretable in deep learning: Extracting higher-level features via symbolic approximation representations
L Schwenke, M Atzmueller
The International FLAIRS Conference Proceedings 36, 2023
Cited by 1, 2023
Real or virtual? Using brain activity patterns to differentiate attended targets during augmented reality scenarios
LM Vortmann, L Schwenke, F Putze
arXiv preprint arXiv:2101.05272, 2021
Cited by 1, 2021
Extracting Interpretable Local and Global Representations from Attention on Time Series
L Schwenke, M Atzmueller
arXiv preprint arXiv:2312.11466, 2023
2023
Knowledge-Augmented Induction of Complex Networks on Supply–Demand–Material Data
D Hudson, L Schwenke, S Bloemheuvel, AG Chowdhury, N Schut, ...
CEUR Workshop Proceedings, 2021
2021
Articles 1–10