The year 2011 marked an important transition for FPGA high-level synthesis (HLS), as it went from prototyping to deployment. A decade later, in this article, we assess the progress …
To address one of the most challenging industry problems, we develop an enhanced training algorithm for anomaly detection in unlabelled sequential data such as time-series …
Deep neural networks have proven to be particularly effective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory …
S Zheng, Y Liang, S Wang, R Chen… - Proceedings of the Twenty …, 2020 - dl.acm.org
Tensor computation plays a paramount role in a broad range of domains, including machine learning, data analytics, and scientific computing. The wide adoption of tensor computation …
In natural language processing (NLP), the "Transformer" architecture was proposed as the first transduction model relying entirely on self-attention mechanisms without using …
Neural networks based on Long Short-Term Memory (LSTM) are widely deployed in latency-sensitive language and speech applications. To speed up LSTM inference, previous …
M Wang, S Lu, D Zhu, J Lin… - 2018 IEEE Asia Pacific …, 2018 - ieeexplore.ieee.org
Recently, significant improvement has been achieved for hardware architecture design of deep neural networks (DNNs). However, the hardware implementation of one widely used …
Although Transformer-based language representations achieve state-of-the-art accuracy on various natural language processing (NLP) tasks, the large model size has been …
R Wu, X Guo, J Du, J Li - Electronics, 2021 - mdpi.com
The breakthrough of deep learning has started a technological revolution in various areas such as object identification, image/video recognition and semantic segmentation. Neural …