Naive Bayes Classifiers: Examples, Models, & Types


    Understanding machine learning algorithms in today's data-driven world is crucial. Naive Bayes Classifiers are known for their simplicity, speed, and effectiveness, especially in real-time scenarios. Naive Bayes leverages Bayes' theorem and assumes feature independence to make swift predictions.

    We'll explore its applications, including spam filtering and sentiment analysis, highlighting its strengths. However, it's essential to acknowledge the limitations tied to the assumption of feature independence. Nonetheless, Naive Bayes remains a valuable tool, offering accurate outcomes with minimal training data. Join me as we learn the key aspects of Naive Bayes Classifiers in machine learning.

    What is Naive Bayes in Machine Learning? 

    Naive Bayes is a simple but surprisingly powerful probabilistic machine learning algorithm used for predictive modeling and classification tasks. Typical applications of Naive Bayes include spam filtering, sentiment prediction, document classification, etc. It is a popular algorithm mainly because it can be easily written in code and predictions can be made very quickly, which in turn increases the scalability of the solution. Naive Bayes is traditionally considered the algorithm of choice for practical applications, mostly in cases where instantaneous responses are required for user requests.

    It is based on the works of Rev. Thomas Bayes, hence the name. Before starting off with Naive Bayes, it is important to learn about Bayesian learning and what 'conditional probability' and 'Bayes' rule' are. Learners can enroll in Data Science courses in India and across the globe to learn more about the application of Bayes' theorem in ML projects.


    Bayesian learning is a supervised learning technique where the goal is to build a model of the distribution of class labels, given a concrete definition of the target attribute. Naive Bayes applies Bayes' theorem with the naive assumption of independence between each and every pair of features.

    What is a Naive Bayes Classifier in Machine Learning?

    Naive Bayes classifiers are a popular machine learning method for sorting things into categories. They work by assuming that the features (say X and Y) affecting an outcome Z are independent of each other once Z is known.

    This simplifies calculations but might not always be true (hence the "naive" in the name). Despite this, Naive Bayes can be surprisingly effective for many classification tasks.

    P(X₁, …, Xₙ | Y) = ∏ᵢ₌₁ⁿ P(Xᵢ | Y)

    In this mathematical expression, X₁, …, Xₙ represent the attributes and Y represents the response variable. So, P(X|Y) equals the product of the probability distributions of the individual attributes given Y.

    A. Maximizing a Posteriori

    If you want to find the posterior probability of P(Y|X) for multiple values of Y, you need to calculate the expression for all the different values of Y. 

    Let us assume a new instance variable X_NEW. You need to calculate the probability of Y taking each possible value, given the observed attributes of X_NEW and the distributions P(Y) and P(X|Y), which are estimated from the training dataset.

    In order to predict the response variable from the different values obtained for P(Y|X), you pick the most probable value, i.e., the maximum of the computed posteriors. Hence, this method is known as maximizing a posteriori (MAP); a minimal sketch follows.
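
    Below is a minimal sketch of MAP prediction in Python. The two classes and all the probability values are invented for illustration; in practice they would be estimated from the training dataset.

    # MAP prediction sketch; the priors and likelihoods are made-up numbers
    priors = {"spam": 0.4, "ham": 0.6}          # P(Y), estimated from training data
    likelihoods = {"spam": 0.05, "ham": 0.001}  # P(X_NEW | Y), estimated from training data

    # P(Y | X_NEW) is proportional to P(X_NEW | Y) * P(Y); the evidence P(X_NEW)
    # is the same for every class, so it can be ignored when taking the maximum.
    posterior = {y: likelihoods[y] * priors[y] for y in priors}
    print(max(posterior, key=posterior.get))  # 'spam', since 0.05*0.4 > 0.001*0.6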

    B. Maximizing Likelihood

    You can simplify the Naive Bayes algorithm if you assume that the response variable is uniformly distributed, which means that every response is equally likely. The advantage of this assumption is that the a priori, i.e., P(Y), becomes a constant value.

    Since the a priori and the evidence are now independent of the response variable, they can be removed from the equation, and maximizing the posteriori becomes a maximizing-likelihood problem, as the short sketch below shows. You can solve similar machine learning problems and apply Bayes' theorem in data science with Python.
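
    Continuing the made-up numbers from the previous sketch, a uniform prior makes P(Y) a constant factor, so the class that maximizes the posterior is simply the class that maximizes the likelihood:

    # With a uniform prior, maximizing the posterior reduces to maximizing the likelihood
    likelihoods = {"spam": 0.05, "ham": 0.001}  # illustrative values of P(X_NEW | Y)
    uniform_prior = 1 / len(likelihoods)        # P(Y) is the same constant for every class
    posterior = {y: p * uniform_prior for y, p in likelihoods.items()}
    assert max(posterior, key=posterior.get) == max(likelihoods, key=likelihoods.get)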

    What is Bayes' Theorem?

    Bayes' Theorem helps you examine the probability of an event based on prior knowledge of another event that has a correspondence with it. Its uses are mainly found in probability theory and statistics. The term 'naive' is used in the sense that the features given to the model are assumed not to depend on each other. In simple terms, if you change the value of one feature in the algorithm, it will not directly influence or change the value of the other features.

    Consider, for example, the probability that the price of a house is high. It can be estimated better if we have some prior information, such as the facilities around it, compared to an assessment made without any knowledge of the house's location.

    P(A|B) = [P(B|A)P(A)]/[P(B)]

    The equation above shows the basic representation of Bayes' theorem where A and B are two events and:

    • P(A|B): The conditional probability that event A occurs, given that B has occurred. This is termed the posterior probability.
    • P(A) and P(B): The probabilities of A and B without any correspondence with each other.
    • P(B|A): The conditional probability of the occurrence of event B, given that A has occurred.
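
    As a quick numeric illustration, here is the house-price example in code. All three input probabilities are assumed values, chosen only to show the arithmetic of the formula:

    # Bayes' theorem on the house-price example; the three inputs are assumed values
    p_high = 0.3              # P(A): prior probability that the house price is high
    p_good_given_high = 0.8   # P(B|A): good facilities nearby, given a high price
    p_good = 0.4              # P(B): overall probability of good facilities nearby

    p_high_given_good = p_good_given_high * p_high / p_good  # P(A|B)
    print(p_high_given_good)  # ≈ 0.6: the prior of 0.3 rises to 0.6 given the evidence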

    Now the question is how you can use Naive Bayes in machine learning. To understand it clearly, we will work through an example later in this article.

    Types of Naive Bayes Classifier

    The main types of Naive Bayes classifier algorithm are mentioned below:

    1. Multinomial Naive Bayes 
    2. Complement Naive Bayes
    3. Bernoulli Naive Bayes
    4. Out-of-Core Naive Bayes
    5. Gaussian Naive Bayes

    Multinomial Naive Bayes: These classifiers are usually used for document classification problems. The model checks whether a document belongs to a particular category, like sports, technology or politics, and classifies it accordingly. The predictors used for classification in this technique are the frequencies of the words present in the document.

    Complement Naive Bayes: This is basically an adaptation of multinomial naive Bayes that is particularly suited for imbalanced datasets.

    Bernoulli Naive Bayes: This classifier is also analogous to multinomial naive Bayes, but instead of word counts, the predictors are Boolean values. The parameters used to predict the class variable accept only yes/no values, for example, whether a word occurs in the text or not.

    Out-of-Core Naive Bayes: This classifier is used to handle large-scale classification problems for which the complete training dataset might not fit in memory.

    Gaussian Naive Bayes: In Gaussian Naive Bayes, the predictors take continuous values, assuming they have been sampled from a Gaussian distribution, also called a normal distribution.


    Since the likelihood of the features is assumed to be Gaussian, the conditional probability will change in the following manner:

    P(xᵢ | y) = (1 / √(2πσ²ᵧ)) · exp( −(xᵢ − μᵧ)² / (2σ²ᵧ) )
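
    Here is a minimal sketch of that density in Python. The feature value, class mean μᵧ and class variance σ²ᵧ are assumed numbers, purely for illustration:

    import math

    def gaussian_likelihood(x, mean, var):
        """P(x_i | y) under the Gaussian assumption, given the class mean and variance."""
        return math.exp(-((x - mean) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

    # Assumed statistics for one continuous feature within one class
    print(gaussian_likelihood(4.5, mean=4.3, var=0.25))  # ≈ 0.737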

    Naive Bayes Algorithm in Machine Learning

    Consider a simple problem where you need to learn a machine learning model from a given set of attributes. You first describe a hypothesis, i.e., a relation between the attributes and a response variable, and then use this relation to predict a response for a new set of attributes.

    Using Bayes' Theorem, you can create a learner that predicts the probability of the response variable belonging to a particular class, given a new set of attributes.

    Consider the Bayes' Theorem equation again and assume that A is the response variable and B is the given attribute. Then:

    • P(A|B): The conditional probability of the response variable that belongs to a particular value, given the input attributes, also known as the posterior probability.
    • P(A): The prior probability of the response variable.
    • P(B): The probability of training data(input attributes) or the evidence.
    • P(B|A): The likelihood of the training data.

    The Bayes' Theorem can be reformulated in correspondence with the machine learning algorithm as:

    posterior = (prior x likelihood) / (evidence)

    Let's look at another problem. Consider a situation where the number of attributes is n and the response is a Boolean value, i.e., either True or False. The attributes are categorical (2 categories in this case). You need to train the classifier for all the values in the instance and response space.

    This is practically impossible for most machine learning problems, since you need to compute 2(2ⁿ − 1) parameters to learn this model. For 30 Boolean attributes, that means learning more than 2 billion parameters, which is unrealistic, as the quick calculation below shows.
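
    This one-liner reproduces the count:

    # Parameter count for n Boolean attributes under the full (non-naive) model
    n = 30
    print(2 * (2 ** n - 1))  # 2147483646, i.e., over 2 billion parameters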

    How to Make Predictions with a Naive Bayes Model?

    Consider a situation where you have 1000 fruits which are either ‘banana’ or ‘apple’ or ‘other’. These will be the possible classes of the variable Y.

    You are given data for the following X variables, all of which are binary (0 or 1):

    • Long 
    • Sweet
    • Yellow

    The training dataset will look like this:

    Fruit     Long (x1)   Sweet (x2)   Yellow (x3)
    Apple     0           0            1
    Banana    1           0            1
    Apple     0           1            0
    Other     1           1            1
    ...       ...         ...          ...

    Now let us sum up the training dataset to form a count table as below:

    Type      Long   Not Long   Sweet   Not Sweet   Yellow   Not Yellow   Total
    Banana    400    100        350     150         450      50           500
    Apple     0      300        150     150         300      0            300
    Other     100    100        150     50          50       150          200
    Total     500    500        650     350         800      200          1000

    The main agenda of the classifier is to predict whether a given fruit is a 'Banana', an 'Apple' or 'Other' when the three attributes (long, sweet and yellow) are known.

    Consider a case where you’re given that a fruit is long, sweet and yellow and you need to predict what type of fruit it is. This case is similar to the case where you need to predict Y only when the X attributes in the training dataset are known. You can easily solve this problem by using Naive Bayes.

    All you need to do is compute the 3 probabilities, i.e., the probability of the fruit being a banana, an apple or other. The one with the highest probability will be your answer.

    Step 1: First of all, you need to compute the proportion of each fruit class out of all the fruits in the population, which is the prior probability of each fruit class.

    The Prior probability can be calculated from the training dataset:

    P(Y=Banana) = 500 / 1000 = 0.50

    P(Y=Apple) = 300 / 1000 = 0.30

    P(Y=Other) = 200 / 1000 = 0.20

    The training dataset contains 1000 records, out of which 500 are bananas, 300 are apples and 200 are others. So the priors are 0.5, 0.3 and 0.2 respectively.

    Step 2: Secondly, you need to calculate the probability of evidence that goes into the denominator. It is simply the product of the probabilities P(X) of all the attributes:

    P(x1=Long) = 500 / 1000 = 0.50

    P(x2=Sweet) = 650 / 1000 = 0.65

    P(x3=Yellow) = 800 / 1000 = 0.80

    Step 3: The third step is to compute the likelihood, which is the product of the conditional probabilities of the 3 attributes given the class.

    The Probability of Likelihood for Banana:

    P(x1=Long | Y=Banana) = 400 / 500 = 0.80

    P(x2=Sweet | Y=Banana) = 350 / 500 = 0.70

    P(x3=Yellow | Y=Banana) = 450 / 500 = 0.90

    Therefore, the overall likelihood for banana will be the product of the above three, i.e., 0.8 × 0.7 × 0.9 = 0.504.

    Step 4: The last step is to substitute all the 3 equations into the mathematical expression of Naive Bayes in machine learning to get the probability.

    P(Banana | Long, Sweet, Yellow) = [P(Long|Banana) × P(Sweet|Banana) × P(Yellow|Banana) × P(Banana)] / [P(Long) × P(Sweet) × P(Yellow)]
    = (0.8 × 0.7 × 0.9 × 0.5) / P(Evidence) = 0.252 / P(Evidence)

    P(Apple|Long,Sweet and Yellow) = 0, because P(Long|Apple) = 0

    P(Other|Long,Sweet and Yellow) = 0.01875/P(Evidence)

    The probabilities for 'Apple' and 'Other' above are computed in the same way. The denominator, the evidence, is the same in all cases, so it can be ignored when comparing the classes.

    Banana gets the highest probability, so Banana is the predicted class. The short script below reproduces this walkthrough; continue reading the blog to understand more about the Naive Bayes algorithm in machine learning.
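
    Here is the fruit walkthrough as a small Python sketch, using only the counts from the table above:

    # The fruit example as code: priors and likelihoods come straight from the count table
    counts = {
        #          Long  Sweet  Yellow  Total
        "Banana": (400,  350,   450,   500),
        "Apple":  (  0,  150,   300,   300),
        "Other":  (100,  150,    50,   200),
    }
    total = 1000

    scores = {}
    for fruit, (long_, sweet, yellow, n) in counts.items():
        prior = n / total                                      # P(Y)
        likelihood = (long_ / n) * (sweet / n) * (yellow / n)  # P(Long|Y) * P(Sweet|Y) * P(Yellow|Y)
        scores[fruit] = prior * likelihood                     # numerator only; the evidence is common

    print(scores)                       # Banana ≈ 0.252, Apple = 0.0, Other ≈ 0.01875
    print(max(scores, key=scores.get))  # 'Banana'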

    Pros and Cons of the Naive Bayes Classifiers

    The Naive Bayes algorithm has both its pros and its cons. 

    A. Pros of Naive Bayes:

    • It is easy and fast to predict the class of the test data set.
    • It performs well in multi-class prediction.
    • It performs better than other models like logistic regression when the assumption of feature independence holds.
    • It requires less training data.
    • It performs better with categorical input variables than with numerical variables.

    B. Cons of Naive Bayes:

    • The model cannot make a prediction when a categorical variable has a category that was not observed in the training data set; it assigns such a category a 0 (zero) probability. This is known as the 'zero frequency' problem, and you can solve it using Laplace smoothing (also called Laplace estimation).
    • Naive Bayes is known to be a poor estimator of probabilities, so its probability outputs should not be taken too seriously.
    • Naive Bayes works on the assumption of independent predictors, but in practice it is almost impossible to obtain a set of predictors that are completely independent.

    Applications of Naive Bayes Classifiers

    There are a lot of real-life applications of the Naive Bayes classifier, some of which are mentioned below:

    • Real-time prediction: It is a fast and eager machine learning classifier, so it is used for making predictions in real time.
    • Multi-class prediction: It can predict the probability of multiple classes of the target variable.
    • Text Classification / Spam Filtering / Sentiment Analysis: Naive Bayes classifiers are mostly used in text classification problems because they handle multi-class problems well and rely on the independence assumption. They are used for identifying spam emails and for detecting negative and positive customer sentiment on social platforms; a minimal sketch follows this list.
    • Recommendation Systems: A recommendation system can be built by combining a Naive Bayes classifier with Collaborative Filtering. It filters unseen information and predicts whether a user would like a given resource, using machine learning and data mining techniques.
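
    Below is a minimal, self-contained sketch of the text-classification use case with scikit-learn's MultinomialNB. The four toy messages and their labels are invented purely for illustration:

    # A toy spam filter; word counts are the predictors, as in the multinomial variant
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    messages = ["win a free prize now", "meeting at noon tomorrow",
                "free offer claim prize", "lunch with the team"]
    labels = ["spam", "ham", "spam", "ham"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(messages, labels)
    print(model.predict(["claim your free prize"]))  # ['spam']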

    How to Build a Naive Bayes Classifier in Python?

    In Python, the Naive Bayes classifier is implemented in the scikit-learn library. Let us look into an example by importing the standard iris dataset to predict the Species of flowers:

    # Import packages
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    import seaborn as sns; sns.set()
    
    # Import data
    training = pd.read_csv('/content/iris_training.csv')
    test = pd.read_csv('/content/iris_test.csv')
    
    # Create the X, Y, Training and Test
    X_Train = training.drop('Species', axis=1)
    Y_Train = training.loc[:, 'Species']
    X_Test = test.drop('Species', axis=1)
    Y_Test = test.loc[:, 'Species']
    
    # Init the Gaussian Classifier
    model = GaussianNB()
    
    # Train the model
    model.fit(X_Train, Y_Train)
    
    # Predict Output
    pred = model.predict(X_Test)
    
    # Plot Confusion Matrix (the first argument fills the rows, labelled 'Predicted' below)
    mat = confusion_matrix(pred, Y_Test)
    names = np.unique(pred)
    sns.heatmap(mat, square=True, annot=True, fmt='d', cbar=False,
            xticklabels=names, yticklabels=names)
    plt.xlabel('Truth')
    plt.ylabel('Predicted')

    Running this code trains the model and displays a heatmap of the confusion matrix of predicted versus true species.

    How to Improve a Naive Bayes Model?

    You can improve the power of a Naive Bayes model by following these tips:

    1. Transform continuous variables using transformations like Box-Cox and Yeo-Johnson to bring them closer to a normal distribution (see the sketch after this list).
    2. Use Laplace correction to handle zero values in the X variables and to predict the class of the test data set when the zero-frequency issue arises.
    3. Check for correlated features and remove the highly correlated ones, because they are effectively voted twice in the model, which over-inflates their importance.
    4. Combine different features to make new features that make intuitive sense.
    5. Provide more realistic prior probabilities to the algorithm based on business knowledge, and use ensemble methods like bagging and boosting to reduce the variance.
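
    A short sketch of tips 1 and 2 with scikit-learn, whose PowerTransformer implements both the Box-Cox and Yeo-Johnson transformations; the iris data is used only as a stand-in example:

    from sklearn.datasets import load_iris
    from sklearn.naive_bayes import GaussianNB, MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PowerTransformer

    X, y = load_iris(return_X_y=True)

    # Tip 1: push continuous features toward a normal distribution before GaussianNB
    gnb = make_pipeline(PowerTransformer(method="yeo-johnson"), GaussianNB())
    gnb.fit(X, y)

    # Tip 2: alpha=1.0 is Laplace (add-one) smoothing, which avoids zero frequencies;
    # MultinomialNB expects non-negative count features such as word counts
    mnb = MultinomialNB(alpha=1.0)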


    Summary

    Let's review what we've covered:

    I've talked about Naive Bayes Classifiers and their types, discussed the pros and cons, and explored where they're applied, like sentiment analysis and spam filtering. I explained how a Naive Bayes Classifier predicts outcomes and walked through creating and improving one. Naive Bayes is handy in real-world tasks due to its speed and simplicity, but there's a catch – it works best when the predictors are independent. In real-life situations, though, predictors often depend on each other, which can impact the classifier's performance. Nonetheless, Naive Bayes remains a popular choice for quick and straightforward solutions in machine learning.

    We have covered most of the topics related to algorithms in our machine learning blogs. If you are inspired by the opportunities provided by machine learning, enroll in KnowledgeHut Data Science with Python for more lucrative career options in this landscape.

    Frequently Asked Questions (FAQs)

    1. What is the benefit of Naive Bayes in machine learning?
    • High scalability.
    • Good accuracy even with small amounts of data.
    • Less training time.
    • Provides a partial_fit mechanism for training the model on large amounts of data.
    • Considering each feature as an independent entity keeps the model simple and accurate.
    2. Why do we use a naive Bayes algorithm?
    • It assumes that the value of a particular feature is independent of the value of any other feature, given the class variable. For example, a fruit may be considered to be an apple if it is red, round, and about 10 cm in diameter.
    • A naive Bayes classifier considers each of these features to contribute independently to the probability that the fruit is an apple, regardless of any possible correlations between the color, roundness, and diameter features. It estimates its parameters using the method of maximum likelihood.
    • Despite its design and apparently oversimplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations. An advantage of naive Bayes is that it requires only a small amount of training data to estimate the parameters necessary for classification.
    • It is very fast and scalable, so many real-time prediction applications use it.
    3. What is the difference between Bayes and Naive Bayes?

    Naive Bayes uses Bayes' theorem as the basis of the algorithm. Bayes' theorem itself is a general result about conditional probability; it is not, on its own, a classification method. Naive Bayes builds a classifier on top of it by adding the assumption of feature independence.

    By fitting the dataset into Bayes' theorem, the Naive Bayes algorithm predicts the class with the highest probability, which is why we use argmax. Following the equation, we get:

    ŷ = argmaxᵧ P(y) ∏ᵢ₌₁ⁿ P(xᵢ|y)


    4. Is naive Bayes classification or regression?

    It is basically a classification algorithm: it is trained to identify categories and to predict which category new values fall into. However, it depends strongly on its assumptions, and with some changes it can also be used for regression. For more details, please refer to (PDF) Naive Bayes for Regression (researchgate.net).


    Ashish Gulati

    Data Science Expert

    Ashish is a technology consultant with 13+ years of experience, specializing in Data Science, the Python ecosystem and Django, DevOps and automation, and in the design and delivery of key, impactful programs.
