Using a Pre-trained Neural Language Model to Make Character Predictions for Brain-Computer Interfaces

Dylan Gaines, Keith Vertanen

Proceedings of the 10th International Brain-Computer Interface Meeting, 2023.

Because the signals acquired from a BCI user can be quite noisy, a language model is often used to help the software select the user's intended character. While traditional n-gram language models can provide adequate predictions and can be queried efficiently once trained, state-of-the-art systems in natural language processing typically use a large pre-trained neural network language model such as GPT-2. In this work we explore the speed and accuracy tradeoffs between these two types of models.
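To make the n-gram side of the comparison concrete, the sketch below shows a minimal character-level n-gram model with add-one smoothing: given the last n-1 characters of context, it returns a probability distribution over the next character, which is the kind of query a BCI text-entry system issues for each selection. This is an illustrative toy (the class name and smoothing choice are ours), not the implementation evaluated in the paper.

```python
from collections import defaultdict

class CharNGramModel:
    """Minimal character n-gram model with add-one smoothing (illustrative sketch)."""

    def __init__(self, n=3, alphabet="abcdefghijklmnopqrstuvwxyz "):
        self.n = n
        self.alphabet = alphabet
        # context string (n-1 chars) -> next character -> count
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        # Left-pad with spaces so early characters still have a full context.
        padded = " " * (self.n - 1) + text
        for i in range(len(text)):
            context = padded[i:i + self.n - 1]
            self.counts[context][padded[i + self.n - 1]] += 1

    def predict(self, context):
        """Return a smoothed probability distribution over the next character."""
        context = (" " * (self.n - 1) + context)[-(self.n - 1):]
        seen = self.counts[context]
        total = sum(seen.values()) + len(self.alphabet)  # add-one smoothing
        return {c: (seen[c] + 1) / total for c in self.alphabet}

model = CharNGramModel(n=3)
model.train("the the the")
probs = model.predict("th")  # 'e' gets the highest probability
```

Once the count tables are built, each query is a dictionary lookup plus one pass over the alphabet, which is why n-gram models can be served with very low latency; a neural model like GPT-2 instead requires a full forward pass per prediction.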