The multimodal Tongue Drive System (mTDS) is an assistive technology that uses the speech, tongue, and head movements of people with tetraplegia to help them control devices such as wheelchairs, PCs, and smartphones. Processing tongue gestures normally requires solving the magnetic dipole equation, which is computationally expensive to execute on wearable embedded hardware. In this paper, an algorithm based on a support vector machine (SVM) with a linear kernel is proposed that uses an additional magnetometer. Evaluated on four-dimensional robot-emulated tongue-movement data from 15 participants, it achieves 96.1% accuracy with 2.2% sensitivity and 1.8% specificity, is computationally efficient enough for the mTDS, and maintains high classification performance despite small changes in magnet position (-5 to 5 mm) and orientation (-10° to 10°), outperforming five other algorithms. The proposed algorithm also enables more accurate independent device control by mitigating the Midas-touch problem between speech and tongue movements.
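To make the classification idea concrete: a linear-kernel SVM can be trained directly on magnetometer feature vectors without solving the magnetic dipole equation. The sketch below is illustrative only; the synthetic 3-axis "gesture" vs. "neutral" data, the Pegasos-style subgradient training, and all hyperparameters are assumptions for demonstration, not the authors' actual pipeline or data.

```python
import random

def train_linear_svm(data, labels, lam=0.01, epochs=200):
    """Train a linear SVM by Pegasos-style subgradient descent on hinge loss.

    data:   list of feature vectors (e.g. 3-axis magnetometer readings)
    labels: list of +1/-1 class labels (e.g. gesture vs. neutral)
    """
    dim = len(data[0])
    w = [0.0] * dim
    b = 0.0
    t = 0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            # shrink weights (L2 regularization), then step on margin violations
            w = [(1 - eta * lam) * wi for wi in w]
            if margin < 1:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    """Sign of the linear decision function."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Synthetic stand-in data: two well-separated clusters of 3-axis readings
# (hypothetical; real mTDS features would come from the extra magnetometer).
random.seed(0)
pos = [[random.gauss(1.0, 0.2) for _ in range(3)] for _ in range(20)]
neg = [[random.gauss(-1.0, 0.2) for _ in range(3)] for _ in range(20)]
X = pos + neg
y = [1] * 20 + [-1] * 20

w, b = train_linear_svm(X, y)
acc = sum(predict(w, b, x) == yi for x, yi in zip(X, y)) / len(X)
```

Because the kernel is linear, inference is a single dot product plus a bias per sample, which is what makes this family of classifiers attractive for a wearable embedded target compared with iteratively solving the dipole equation.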