The Future of Mobile Typing

We can all recall a time when we’ve been baffled by how peculiar the features in our smartphones can be. Whether it’s an unintentionally humorous autocorrect or a teasing reply from Siri, these tools have piqued our curiosity about how technology is capable of interpreting human language.


Natural language processing applications, such as machine translation, speech recognition, spelling correction, and word prediction, are popular functions in everyday technology that let humans and computers interact. These applications are based on language models: probability distributions over sequences of words. In mobile keyboards, the language model is used to predict the next character or word the user is going to type. It can also be used to correct misspelled and grammatically incorrect words.
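To see what "a probability distribution over a sequence of words" means in practice, here is a minimal sketch using the chain rule of probability. The sentence and conditional probabilities are made up purely for illustration:

```python
# Chain rule: P(w1..wn) = P(w1) * P(w2 | w1) * ... * P(wn | w1..wn-1).
# These conditional probabilities are invented for illustration only.
P = {
    ("i",): 0.2,                 # P("i")
    ("i", "am"): 0.5,            # P("am" | "i")
    ("i", "am", "happy"): 0.1,   # P("happy" | "i", "am")
}

def sentence_probability(words):
    """Multiply the conditional probability of each word given its history."""
    prob = 1.0
    for i in range(len(words)):
        prob *= P[tuple(words[:i + 1])]
    return prob

print(sentence_probability(["i", "am", "happy"]))  # 0.2 * 0.5 * 0.1
```

A keyboard turns this around: given the history so far, it ranks the candidate next words by their conditional probability and shows the top few.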


The most common and basic language model used today is the n-gram model. An n-gram is a contiguous sequence of n items, in our case words, from a section of text or speech. The model calculates the probability of the next word based on the previous n − 1 words, by counting how frequently different word combinations occur in a large collection of text commonly referred to as a corpus. For example, if the model is given the word “ancient”, the word “history” is more likely to follow than the word “smartphone”.
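The “ancient history” example can be sketched with a toy bigram (2-gram) model, which estimates the probability of a next word from counts of adjacent word pairs. The corpus below is a tiny stand-in; a real model would count over millions of sentences:

```python
from collections import Counter

# Tiny stand-in corpus; real models count over vastly more text.
corpus = ("ancient history ancient history ancient history "
          "ancient ruins modern smartphone").split()

bigram_counts = Counter(zip(corpus, corpus[1:]))  # counts of adjacent pairs
unigram_counts = Counter(corpus)                  # counts of single words

def p_next(word, prev):
    """Estimate P(word | prev) from bigram frequencies."""
    return bigram_counts[(prev, word)] / unigram_counts[prev]

print(p_next("history", "ancient"))     # 0.75
print(p_next("smartphone", "ancient"))  # 0.0
```

Since “ancient history” occurs three times out of four uses of “ancient”, the model assigns it a high probability, while “ancient smartphone” never occurs and gets probability zero.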


In most cases, the n-gram model anticipates the next word fairly accurately. However, its predictions often make sense only with a limited number of preceding words, not with the entire clause.


An example of how the n-gram model can make illogical suggestions.


In the example above, the predicted words “time”, “day”, or “night” make sense only if just the word “great” is taken into consideration. Because the n-gram model doesn’t look at the entire history, it can miss crucial information: the clause ends up not making sense with any of the predictions, since San Francisco is a place, not a time.
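The limitation is easy to demonstrate with a toy bigram predictor (corpus invented for illustration): because it conditions only on the single preceding word, it suggests the same completion after “great” no matter what came earlier in the clause.

```python
from collections import Counter

# Invented toy corpus; a bigram model only ever sees the single
# preceding word when predicting the next one.
corpus = ("have a great time have a great time have a great day "
          "san francisco is a great city").split()
bigram_counts = Counter(zip(corpus, corpus[1:]))

def predict_after(prev_word):
    # Most frequent follower of prev_word, ignoring all earlier context.
    followers = {nxt: c for (p, nxt), c in bigram_counts.items()
                 if p == prev_word}
    return max(followers, key=followers.get)

# Whatever the full clause is, the model sees only "great",
# so it keeps suggesting "time" even where a place is required.
print(predict_after("great"))  # "time"
```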


The neural language model (NLM), inspired by the neurons of the human brain, addresses this problem by computing probabilities for the next word based on the full history and meaning of the words used so far. It can also correct spelling errors so that the suggestion stays coherent with the rest of the sentence. In testing, the NLM has been shown to deliver more accurate predictions than the n-gram model.
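To show the structural difference, here is a minimal, untrained forward pass of a feedforward neural language model in NumPy. The vocabulary, dimensions, and random weights are illustrative assumptions, not any keyboard’s actual model; the point is that the entire context is embedded and fed forward, so every preceding word can influence the prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy vocabulary; a real model has tens of thousands of words.
vocab = ["<s>", "you", "are", "a", "great", "person", "guy", "friend", "time"]
w2i = {w: i for i, w in enumerate(vocab)}
V, d, h = len(vocab), 8, 16   # vocab size, embedding dim, hidden dim

# Randomly initialised parameters; a real model learns these from data.
E = rng.normal(scale=0.1, size=(V, d))    # word embeddings
W1 = rng.normal(scale=0.1, size=(d, h))   # context -> hidden layer
W2 = rng.normal(scale=0.1, size=(h, V))   # hidden -> vocabulary scores

def next_word_distribution(context):
    """Embed the *entire* context, not just the last word, then score."""
    x = E[[w2i[w] for w in context]].mean(axis=0)  # pool context embeddings
    hidden = np.tanh(x @ W1)
    scores = hidden @ W2
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                          # softmax over vocabulary

p = next_word_distribution(["you", "are", "a", "great"])
print(p.sum())  # a valid probability distribution over the whole vocabulary
```

With trained weights, words like “person” would receive high probability after “you are a great”, because the hidden layer mixes information from all four context words rather than just the last one.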


In the example below, the NLM predicts that the next word is likely to be “person”, “guy”, or “friend”. All three choices make sense with both the preceding word “great” and the entire clause.

An example of how the NLM can accurately and logically predict the next word.


The NLM is trained on over a hundred million sentences to learn the semantic relationships between words. As a result, it can correctly match various words and phrases on its own. Even if the NLM has never seen a certain phrase before, it can still make accurate predictions thanks to its generalized knowledge of human language.


The latest version of the TouchPal AI Keyboard is the first to combine an artificial intelligence engine with an AI assistant, integrating accurate predictions, smart replies, and information recommendations in the same interface. By using cutting-edge deep learning technology, users can now experience better support when communicating with others. Download the TouchPal AI Keyboard today to try smart typing with endless fun, free on Google Play and the App Store.


