DRG-Keyboard: Enabling subtle gesture typing on the fingertip with dual IMU rings

C Liang, C Hsia, C Yu, Y Yan, Y Wang… - Proceedings of the ACM on …, 2023 - dl.acm.org
We present DRG-Keyboard, a gesture keyboard enabled by dual IMU rings, allowing the
user to swipe the thumb on the index fingertip to perform word gesture typing as if typing on …

Enabling voice-accompanying hand-to-face gesture recognition with cross-device sensing

Z Li, C Liang, Y Wang, Y Qin, C Yu, Y Yan… - Proceedings of the …, 2023 - dl.acm.org
Gestures that accompany the voice are essential for voice interaction, conveying
complementary semantics for interaction purposes such as wake-up state and input …

FaceSight: Enabling hand-to-face gesture interaction on AR glasses with a downward-facing camera vision

Y Weng, C Yu, Y Shi, Y Zhao, Y Yan, Y Shi - Proceedings of the 2021 …, 2021 - dl.acm.org
We present FaceSight, a computer vision-based hand-to-face gesture sensing technique for
AR glasses. FaceSight fixes an infrared camera onto the bridge of AR glasses to provide …

Sensing to hear through memory: Ultrasound speech enhancement without real ultrasound signals

Q Zhang, K Liu, D Wang - Proceedings of the ACM on Interactive, Mobile …, 2024 - dl.acm.org
Speech enhancement on mobile devices is a very challenging task due to complex
environmental noise. Recent works using lip-induced ultrasound signals for speech …

Lipwatch: Enabling Silent Speech Recognition on Smartwatches using Acoustic Sensing

Q Zhang, Y Lan, K Guo, D Wang - Proceedings of the ACM on Interactive …, 2024 - dl.acm.org
Silent Speech Interfaces (SSI) on mobile devices offer a privacy-friendly alternative to
conventional voice input methods. Previous research has primarily focused on smartphones …

ConeSpeech: Exploring directional speech interaction for multi-person remote communication in virtual reality

Y Yan, H Liu, Y Shi, J Wang, R Guo, Z Li… - … on Visualization and …, 2023 - ieeexplore.ieee.org
Remote communication is essential for efficient collaboration among people at different
locations. We present ConeSpeech, a virtual reality (VR) based multi-user remote …

Efficient multimodal neural networks for trigger-less voice assistants

SS Buddi, UO Sarawgi, T Heeramun… - arXiv preprint arXiv …, 2023 - arxiv.org
The adoption of multimodal interactions by Voice Assistants (VAs) is growing rapidly to
enhance human-computer interactions. Smartwatches have now incorporated trigger-less …

Morse wavelet transform-based features for voice liveness detection

P Gupta, HA Patil - Computer Speech & Language, 2024 - Elsevier
The need for Voice Liveness Detection (VLD) has emerged particularly for the
security of Automatic Speaker Verification (ASV) systems. Existing Spoofed Speech …

Selecting Real-World Objects via User-Perspective Phone Occlusion

Y Qin, C Yu, W Yao, J Yao, C Liang, Y Weng… - Proceedings of the …, 2023 - dl.acm.org
Perceiving the region of interest (ROI) and target object with a smartphone from the user's first-
person perspective can enable diverse spatial interactions. In this paper, we propose a …

The user's psychological state identification based on Big Data analysis for person's electronic diary

A Dyriv, V Andrunyk, Y Burov, I Karpov… - 2021 IEEE 16th …, 2021 - ieeexplore.ieee.org
This paper considers the development of an electronic diary software application. An
analysis of the user's psychological state is carried out based on the notes that the user makes daily …