## Description

We assume that you are familiar with the concepts of String Distance and String Similarity. You can also have a look at the Spelling Recommender. We will show how you can easily build a simple autocorrect tool in Python with a few lines of code. All you need is a corpus to build your vocabulary and the word frequencies. The idea is the following:

- You enter a word; if the word exists in the vocabulary, then we assume it is correct.
- If the word does not exist in the vocabulary, we find the most similar words and order them by their frequency probability.
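
Before building the full tool, the two steps above can be sketched with a toy vocabulary. This sketch uses the standard-library `difflib` instead of `textdistance`, and the words and frequencies are made up purely for illustration:

```python
from difflib import get_close_matches

# Toy vocabulary with made-up frequencies (illustrative only).
vocab = {"the": 100, "whale": 12, "while": 30, "white": 25}

def toy_autocorrect(word):
    if word in vocab:          # step 1: a known word is assumed correct
        return word
    # step 2: find close matches, then order them by frequency
    candidates = get_close_matches(word, vocab, n=3, cutoff=0.6)
    return sorted(candidates, key=vocab.get, reverse=True)

print(toy_autocorrect("whle"))  # ['while', 'white', 'whale']
```

The rest of the post replaces the toy vocabulary with one built from a real corpus, and the `difflib` matcher with an explicit similarity measure.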

## Build the Vocabulary

We will work with the Moby Dick book. Let’s start.

```python
import pandas as pd
import textdistance
import re
from collections import Counter

# Read the corpus, lowercase it and extract the words
with open('moby.txt', 'r') as f:
    file_data = f.read().lower()

words = re.findall(r'\w+', file_data)

# This is our vocabulary
V = set(words)

print(f"The first ten words in the text are: \n{words[0:10]}")
print(f"There are {len(V)} unique words in the vocabulary.")
```
```
The first ten words in the text are:
['moby', 'dick', 'by', 'herman', 'melville', '1851', 'etymology', 'supplied', 'by', 'a']
There are 17140 unique words in the vocabulary.
```
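
The `r'\w+'` pattern is what does the tokenization: it matches runs of word characters, so whitespace and punctuation (such as the double dashes common in Moby Dick) are dropped. A small self-contained illustration, with a sample sentence chosen just for this example:

```python
import re

sample = "Call me Ishmael. Some years ago--never mind how long precisely..."
tokens = re.findall(r"\w+", sample.lower())
print(tokens)
# ['call', 'me', 'ishmael', 'some', 'years', 'ago', 'never', 'mind', 'how', 'long', 'precisely']
```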

## Get the Word Frequencies

We have already built the list of words called `words`, so we can now compute the word frequencies with `Counter` from the standard library.

```python
word_freq_dict = Counter(words)

print(word_freq_dict.most_common()[0:10])
```
```
[('the', 14431), ('of', 6609), ('and', 6430), ('a', 4736), ('to', 4625), ('in', 4172), ('that', 3085), ('his', 2530), ('it', 2522), ('i', 2127)]
```
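
`Counter` accepts any iterable and does the counting in a single pass; `most_common(n)` then returns the `n` highest counts in descending order. A self-contained illustration with a toy word list:

```python
from collections import Counter

# Count words in a short made-up sentence (illustrative only)
toy_freq = Counter("the whale and the sea and the sky".split())
print(toy_freq.most_common(2))  # [('the', 3), ('and', 2)]
```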

## Get the Relative Word Frequencies

Now we want to get the probability of each word appearing, which is equivalent to its relative frequency.

```python
probs = {}
Total = sum(word_freq_dict.values())

for k in word_freq_dict.keys():
    probs[k] = word_freq_dict[k] / Total
```
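
The same computation can also be written as a dict comprehension, and either way the relative frequencies must sum to 1, which makes a convenient sanity check. A self-contained sketch with toy counts:

```python
from collections import Counter

word_freq = Counter("the whale the sea".split())
total = sum(word_freq.values())

# Relative frequency of each word, as a dict comprehension
probs_toy = {w: c / total for w, c in word_freq.items()}

print(probs_toy["the"])         # 0.5
print(sum(probs_toy.values()))  # 1.0
```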

## Similarity based on Jaccard Distance and Q-Grams

We will rank the candidate words by Jaccard similarity computed on the 2-grams (`qval=2`) of the words, and return the 5 most similar words ordered by `Similarity` and `Prob`.
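
To see what the 2-gram Jaccard similarity actually measures, here is a small pure-Python version (a set-based sketch; for these two words it agrees with the `textdistance` result):

```python
def qgrams(word, q=2):
    # Break a word into its overlapping q-grams, e.g. "cat" -> {"ca", "at"}
    return {word[i:i + q] for i in range(len(word) - q + 1)}

def jaccard_similarity(a, b, q=2):
    # Shared q-grams divided by all distinct q-grams of both words
    ga, gb = qgrams(a, q), qgrams(b, q)
    return len(ga & gb) / len(ga | gb)

print(jaccard_similarity("neverteless", "nevertheless"))  # 0.75
```

The misspelling shares 9 of the 12 distinct bigrams with the correct word, hence a similarity of 0.75.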

```python
def my_autocorrect(input_word):
    input_word = input_word.lower()

    if input_word in V:
        return 'Your word seems to be correct'
    else:
        # Jaccard similarity on 2-grams between the input and every vocabulary word
        similarities = [1 - textdistance.Jaccard(qval=2).distance(v, input_word)
                        for v in word_freq_dict.keys()]
        df = pd.DataFrame.from_dict(probs, orient='index').reset_index()
        df = df.rename(columns={'index': 'Word', 0: 'Prob'})
        df['Similarity'] = similarities
        # Keep the 5 most similar words, ranked by similarity then probability
        output = df.sort_values(['Similarity', 'Prob'], ascending=False).head(5)
        return output
```

Let’s see some examples:

#### Autocorrect `neverteless`:

```python
my_autocorrect('neverteless')
```

#### Autocorrect `nesseccary`:

```python
my_autocorrect('nesseccary')
```

#### Autocorrect `occurence`:

```python
my_autocorrect('occurence')
```

## Conclusion

We presented just one case, where the vocabulary was taken from Moby Dick, which certainly does not reflect the actual relative frequencies of English words; even so, the results are reasonably good. We also used only the Jaccard distance: you can try other distances such as Cosine Distance or Edit Distance, and have a look at the documentation of the textdistance library.
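
As a starting point for experimenting with Edit Distance, here is a minimal pure-Python Levenshtein implementation that could rank candidates in place of the Jaccard similarity (a sketch; the `textdistance` library provides the same metric, so you would not normally hand-roll it):

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance, one rolling row at a time
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

print(edit_distance("occurence", "occurrence"))  # 1
```

Note that edit distance is a distance, not a similarity, so candidates would be sorted ascending rather than descending.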