## Introduction

Topic modelling is an unsupervised method in natural language processing (NLP) for discovering the abstract topics that occur in a collection of documents. It refers to the process of identifying the words within a document that logically belong to a certain topic.

From a business standpoint, topic modelling provides great time and effort-saving benefits. For example, a company that wants to identify areas of improvement can run a survey asking users to rate its services and explain each rating. Topic modelling can fast-track this analysis by grouping the responses under topics such as 'the most common reasons for low ratings'.

**Techniques for topic modelling**

Topic modelling is about logically correlating several words. Say a telecom operator wants to identify whether a poor network is a reason for low customer satisfaction. Here, 'bad network' is the topic. The document is analyzed for words like 'bad', 'slow speed', 'call not connecting', etc., which are more likely to describe network issues than common words like 'the' or 'and'.

The three most common techniques of topic modelling are:

__Latent semantic analysis (LSA)__

Latent semantic analysis (LSA) aims to leverage the context around the words in order to capture hidden concepts or topics.

In this method, machines use Term Frequency-Inverse Document Frequency (TF-IDF) for analyzing documents. TF-IDF is a numerical statistic that reflects how important a word is to a document within a corpus.
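The TF-IDF computation can be illustrated with a minimal sketch in pure Python, using hypothetical toy documents (a real pipeline would use a library such as scikit-learn's TfidfVectorizer):

```python
import math

def tf_idf(docs):
    """Compute a TF-IDF score for every (document, word) pair.

    TF  = count of the word in the document / total words in the document
    IDF = log(total documents / documents containing the word)
    """
    n_docs = len(docs)
    # Document frequency: in how many documents does each word appear?
    df = {}
    for doc in docs:
        for word in set(doc):
            df[word] = df.get(word, 0) + 1
    scores = []
    for doc in docs:
        doc_scores = {}
        for word in doc:
            tf = doc.count(word) / len(doc)
            idf = math.log(n_docs / df[word])
            doc_scores[word] = tf * idf
        scores.append(doc_scores)
    return scores

# Hypothetical tokenized documents.
docs = [["bad", "network", "slow"], ["good", "network"], ["bad", "service"]]
scores = tf_idf(docs)
# "network" appears in 2 of 3 documents, so its IDF (and score) is lower
# than that of "slow", which appears in only 1 document.
print(scores[0])
```

Words that are frequent in one document but rare in the corpus receive the highest scores, which is exactly the property the topic models below exploit.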

Say there is a collection of 'm' text documents containing a total of 'n' unique words. The TF-IDF matrix, of size m*n, contains the TF-IDF score of each word in each document. This matrix is then reduced to 'k' dimensions, k being the desired number of topics. The reduction is done using truncated Singular Value Decomposition (SVD), which decomposes the matrix into three other matrices, as shown in Figure 2.

This decomposition provides a vector representation of every document and every term in the collection through the equation:

**A = USV^T**

where,

- A is the m*n TF-IDF matrix being decomposed
- U is the vector representation of the documents, each vector of length k
- V is the vector representation of the terms, each vector of length k
- S is the diagonal matrix of singular values, reflecting topic strengths
- ^T denotes the matrix transpose

The resulting document and term vectors can be used to find similar topics and documents using the cosine similarity method.

The main disadvantages of LSA are inefficient representation and non-interpretable embeddings. It also requires a large corpus to yield accurate results.
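The LSA pipeline above can be sketched with NumPy's SVD on a small hypothetical TF-IDF matrix (the matrix values here are illustrative, not from a real corpus):

```python
import numpy as np

# Toy TF-IDF matrix A: 4 documents (rows) x 5 terms (columns).
# In practice this would come from a TF-IDF vectorizer over the corpus.
A = np.array([
    [0.9, 0.8, 0.0, 0.0, 0.1],
    [0.7, 0.9, 0.1, 0.0, 0.0],
    [0.0, 0.1, 0.8, 0.9, 0.2],
    [0.1, 0.0, 0.9, 0.7, 0.3],
])

k = 2  # desired number of topics
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Truncate to k topics: rows of U_k are document vectors,
# rows of Vt_k are topic vectors over terms, each of length k.
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

doc_vecs = U_k * s_k  # scale document vectors by the singular values

# Documents 0 and 1 share a topic, so their similarity should be high;
# documents 0 and 2 do not, so theirs should be low.
print(cosine(doc_vecs[0], doc_vecs[1]))
print(cosine(doc_vecs[0], doc_vecs[2]))
```

The same cosine comparison over rows of `Vt_k.T` would surface terms that belong to the same latent topic.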

__Probabilistic Latent Semantic Analysis (pLSA)__

Probabilistic Latent Semantic Analysis (pLSA) was introduced to solve the representation challenge in LSA by replacing SVD with a probabilistic model. pLSA represents each entry in the TF-IDF matrix using probability.

The equation **P(D,W) = P(D) ∑ P(Z|D)P(W|Z)** provides a joint probability suggesting how likely it is to find a certain word within a document based on the distribution of topics Z within it. Another parameterization, **P(D,W) = ∑ P(Z)P(D|Z)P(W|Z)**, denotes the probability that a topic generates a document and that a word within the document belongs to the given topic. This second parameterization closely resembles LSA: P(Z) corresponds to the singular values S, P(D|Z) to the document matrix U, and P(W|Z) to the term matrix V.

__Latent Dirichlet Allocation (LDA)__

Latent Dirichlet Allocation (LDA) is a Bayesian version of pLSA. The core probabilities are given Dirichlet priors, where each distribution is sampled over a probability simplex. A probability simplex represents a set of non-negative numbers that add up to 1. When the set comprises three numbers, it is called a three-dimensional Dirichlet distribution.

The total desired number of topics, 'k', sets the dimension of the Dirichlet distribution. The LDA model reads every document, assigns each word to one of the 'k' topics, and provides a representation of the words and documents for a given topic. As the initial assignment of topics is random, the representation is not optimal. Through different equations, the Latent Dirichlet Allocation model can provide the following results:

- Percentage of words within a given document that are assigned to a particular topic
- The number of times a particular word in all the documents has been assigned to a particular topic
- Movement of a word from topic A to topic B, i.e.,

**P(topic A | d) * P(w | topic A) < P(topic B | d) * P(w | topic B)**, where 'w' means word and 'd' means document

These results denote the optimal number of topics and the assignment of words to each topic. The model can learn from a given set of documents and its Dirichlet distribution and, later, predict topics for a new set of documents.

## How to use Latent Dirichlet Allocation

The LDA model is mainly used to classify document text under a particular topic. For each document, it builds a distribution over topics, and for each topic a distribution over relevant words, both modeled as Dirichlet distributions. LDA has been shown to deliver accurate results for topic modelling use cases. However, it first requires certain document modifications and pre-processing, as explained below:

**Step 1: Choose the right library**

The following libraries are used depending on the task:

- Pandas – For data manipulation and analysis
- Gensim – For document indexing and similarity retrieval in large corpora
- pyLDAvis – For interactive topic modelling visualization
- Parsivar and Hazm – NLP libraries for Persian, depending on the use case

**Step 2: Preprocess the data**

Before being consumed by the model, the document must be pre-treated through each of the following steps:

- Normalization – Transform text to its normal/canonical form
- Stemming – Reduce a word to its word stem/root by removing suffixes and prefixes
- Stop word removal – Remove words that do not add any logical meaning
- Tokenization – Break text into 'tokens', i.e., words and phrases

This helps the model identify words and assign them to the correct topic.
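The four steps above can be sketched in pure Python; the stop-word list is a tiny illustrative sample and the suffix-stripping stemmer is deliberately crude (a real pipeline would use Hazm, NLTK's PorterStemmer, or similar):

```python
import re

# Tiny illustrative stop-word list; real lists contain hundreds of words.
STOP_WORDS = {"the", "and", "is", "a", "to", "of", "was"}

def preprocess(text):
    # Normalization: lowercase and replace punctuation with spaces.
    text = re.sub(r"[^a-z\s]", " ", text.lower())
    # Tokenization: split the normalized text into word tokens.
    tokens = text.split()
    # Stop word removal: drop words that carry no topical meaning.
    tokens = [t for t in tokens if t not in STOP_WORDS]
    # Stemming: crude "-ing" stripping, for illustration only.
    tokens = [t[:-3] if t.endswith("ing") and len(t) > 5 else t for t in tokens]
    return tokens

print(preprocess("The call is not connecting and the speed was bad."))
# -> ['call', 'not', 'connect', 'speed', 'bad']
```

The output token lists are what the dictionary-building step below consumes.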

### Step 3: Build the dictionary

The pre-processing output text is used to build the dictionary and corpus that, in turn, become inputs for the LDA model for topic modelling.

The number of topics (k) is set at 10. The rest of the parameters are left at their defaults for the first run.

### Step 4: Test for accuracy

A coherence metric is used to test the model for accuracy. Topic coherence is a measure that compares different topic models based on their human-interpretability. The coherence score 'C_V' provides a numerical value for the interpretability of the topics. In Figure 6, the coherence score is 0.52.

Companies should build several LDA models with different 'k' values; the best model is the one with the highest coherence score. While choosing a high 'k' value can provide granular sub-topics, repetition of keywords across topics indicates that the 'k' value is too large. Thus, the correct 'k' value typically lies at the point where the coherence score stops increasing rapidly.

**Step 5: Visualize the topics**

The optimal number of topics can be better visualized through the interactive pyLDAvis tool. pyLDAvis displays the number of topics, overlapping topics, and all the words contained within a given topic (see Figure 8) for users to take necessary action.

### Step 6: Optimize the topics

In this step, hyperparameters are tuned to optimize the number of topics. Different combinations of alpha (document-topic density), k and beta (topic-word density) are run and their coherence scores are calculated. The combination with the highest coherence score is used to build the LDA model.

For the LDA model explained in this paper, the optimal parameters are:

- Number of topics = 6
- Alpha = asymmetric
- Beta/eta = 0.31

After configuring the model with these parameters, one can test whether it can predict topics for any unseen text document. The test results are visualized as follows:

Since this is a Dirichlet analysis, all the probabilities will add up to 1. Figure 9 shows that topic 1 has the highest value (0.66), i.e., the document has the highest probability of belonging to topic 1.

## Reading the LDA model results

The LDA model used in this paper has a coherence score of 0.52 (see Figure 6). This means that the model has performed reasonably well in topic modelling. If the coherence score is 0.85 and above, there is a high chance of overfitting in the model.

The LDA model does not identify the topic as a distinct word. Instead, it provides a probability of the most likely topic. Thus, users are required to determine a logical topic from the set of words provided by the model and map the topic numbers with the topic names identified by the user. This entire process is called 'topic classification'.

## Conclusion

LSA, probabilistic LSA, and LDA are three common approaches to topic modelling. Due to its ability to build valid dictionaries and use previous learnings to predict topics in new sets of documents, LDA is the recommended model for advanced topic modelling.
