What is LDA: Linear Discriminant Analysis for Machine Learning

Linear Discriminant Analysis, or LDA, is a dimensionality reduction technique used as a pre-processing step in Machine Learning and pattern classification applications. The goal of LDA is to project features from a higher-dimensional space onto a lower-dimensional space, both to avoid the curse of dimensionality and to reduce computational costs.

The original technique was developed in 1936 by Ronald A. Fisher and was named the Linear Discriminant or Fisher's Discriminant Analysis. The original Linear Discriminant was described as a two-class technique. The multi-class version was later generalized by C. R. Rao as Multiple Discriminant Analysis. All of these variants are simply referred to as Linear Discriminant Analysis.

LDA is a supervised technique and a standard part of crafting competitive machine learning models. This category of dimensionality reduction is used in areas like image recognition and predictive analytics in marketing.

What is Dimensionality Reduction?

Dimensionality reduction techniques are important in applications of Machine Learning, Data Mining, Bioinformatics, and Information Retrieval. The main goal is to remove redundant and dependent features by transforming the dataset into a lower-dimensional space.

In simple terms, they reduce the dimensions (i.e. variables) of a dataset while retaining most of the information.

Multi-dimensional data comprises multiple features that are correlated with one another. With dimensionality reduction, you can plot multi-dimensional data in just 2 or 3 dimensions, which allows the data to be presented in an explicit manner that is easily understood by a layperson.

What are the limitations of Logistic Regression?

Logistic Regression is a simple and powerful linear classification algorithm. However, it has some disadvantages which have led to alternate classification algorithms like LDA. Some of the limitations of Logistic Regression are as follows:

  • Two-class problems – Logistic Regression is traditionally used for two-class (binary) classification problems. Though it can be extended to multi-class classification, this is rarely done. Linear Discriminant Analysis is considered the better choice whenever multi-class classification is required; for binary classification, both Logistic Regression and LDA are applied.
  • Unstable with Well-Separated classes – Logistic Regression can lack stability when the classes are well-separated. This is where LDA comes in.
  • Unstable with few examples – If there are few examples from which the parameters are to be estimated, logistic regression becomes unstable. However, Linear Discriminant Analysis is a better option because it tends to be stable even in such cases.

How to have a practical approach to an LDA model?

Consider a situation where you have plotted the relationship between two variables, where each color represents a different class. One class is shown in red and the other in blue.

If you want to reduce the number of dimensions to 1, you can simply project everything onto the x-axis.

This approach neglects any helpful information provided by the second feature. With LDA, however, you can use information from both features to create a new axis that minimizes the within-class variance and maximizes the distance between the two classes.
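
To make this concrete, here is a minimal sketch of how such a single axis can be computed for two classes on synthetic, hypothetical data; the direction w = S_w^-1 * (m_a - m_b) is the classic two-class Fisher solution:

import numpy as np

# Synthetic two-class data (hypothetical example)
rng = np.random.RandomState(0)
class_a = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=50)
class_b = rng.multivariate_normal([3, 3], [[1, 0.8], [0.8, 1]], size=50)

m_a, m_b = class_a.mean(axis=0), class_b.mean(axis=0)

# Within-class scatter: the scatter of each class around its own mean
S_w = (class_a - m_a).T.dot(class_a - m_a) + (class_b - m_b).T.dot(class_b - m_b)

# Fisher's direction for two classes: w is proportional to S_w^-1 (m_a - m_b)
w = np.linalg.solve(S_w, m_a - m_b)

# Projecting onto w gives the 1-D axis that best separates the two classes
proj_a, proj_b = class_a.dot(w), class_b.dot(w)
print(proj_a.mean(), proj_b.mean())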

How does LDA work?

LDA focuses primarily on projecting features from a higher-dimensional space onto lower dimensions. You can achieve this in three steps:

  • Firstly, you need to calculate the separability between classes which is the distance between the mean of different classes. This is called the between-class variance.

  • Secondly, calculate the distance between the mean and sample of each class. It is also called the within-class variance.

  • Finally, construct the lower-dimensional space that maximizes the between-class variance and minimizes the within-class variance. The projection P onto this lower-dimensional space is chosen to maximize the ratio of between-class to within-class variance, known as Fisher's criterion.

How are LDA models represented?

The representation of LDA is straightforward. The model consists of the statistical properties of your data, calculated for each class. In the case of multiple variables, the same properties are calculated over the multivariate Gaussian: the means and the covariance matrix.

Predictions are made by plugging the statistical properties into the LDA equation. The properties are estimated from your data. Finally, the model values are saved to file to create the LDA model.
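
For instance, with a fitted scikit-learn estimator such as the lda_model built later in this article, saving and reloading the model values could look like the following sketch (assuming the joblib package is available):

import joblib

joblib.dump(lda_model, 'lda_model.joblib')   # save the estimated properties to file
lda_model = joblib.load('lda_model.joblib')  # reload them later to make predictions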

How do LDA models learn?

An LDA model makes the following assumptions about your data:

  • Each variable in the data is shaped like a bell curve when plotted, i.e. it is Gaussian.
  • The values of each variable vary around the mean by the same amount on average, i.e. each attribute has the same variance.

The LDA model is able to estimate the mean and variance from your data for each class with the help of these assumptions.

The mean value of each input for each of the classes can be calculated by dividing the sum of values by the total number of values:

Meank = Sum(x) / Nk

where Meank = mean value of x for class k
           Nk = number of instances belonging to class k
           Sum(x) = sum of the values of input x over the instances in class k.

The variance is computed across all the classes as the average of the square of the difference of each value from the mean:

Σ²=Sum((x - M)²)/(N - k)

where  Σ² = Variance across all inputs x.
            N = number of instances.
            k = number of classes.
            Sum((x - M)²) = Sum of values of all (x - M)².
            M = mean for input x.
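
As a quick illustration of these two estimates, here is a minimal sketch assuming a hypothetical 1-D NumPy array x of input values and an array labels of class labels:

import numpy as np

x = np.array([4.0, 4.2, 3.9, 7.1, 7.3, 6.8])   # hypothetical input values
labels = np.array([0, 0, 0, 1, 1, 1])          # hypothetical class labels
classes = np.unique(labels)

# Per-class means: Mean_k = Sum(x) / N_k
means = {c: x[labels == c].mean() for c in classes}

# Pooled variance across all classes: Sum((x - M)^2) / (N - k)
N, k = len(x), len(classes)
squared_devs = sum(((x[labels == c] - means[c]) ** 2).sum() for c in classes)
variance = squared_devs / (N - k)
print(means, variance)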

How does an LDA model make predictions?

LDA models use Bayes' Theorem to estimate probabilities. They make predictions based upon the probability that a new input example belongs to each class. The class with the highest probability is taken as the output class; that is the LDA prediction.

The prediction is made by applying Bayes' Theorem, which estimates the probability of the output class given the input by combining the prior probability of each class with the probability of the data belonging to each class:

P(Y=k|X=x) = (PIk * fk(x)) / sum(PIl * fl(x))

where x = input.
            k = output class.
            PIk = Nk/n, the base probability of class k observed in the training data (the prior probability in Bayes' Theorem).
            fk(x) = estimated probability of x belonging to class k.

The fk(x) is modeled using a Gaussian distribution function; plugging it into the equation above and simplifying yields the following discriminant:

Dk(x) = x * (meank/Σ²) – (meank²/(2*Σ²)) + ln(PIk)

Dk(x) is called the discriminant function for class k given input x; meank, Σ² and PIk are all estimated from the data. The class whose discriminant has the largest value is taken as the output classification.
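
As a minimal sketch of this scoring rule, assuming hypothetical per-class means, a shared variance and class priors already estimated from training data:

import numpy as np

means = {0: 4.03, 1: 7.07}    # hypothetical mean_k per class
variance = 0.05               # hypothetical shared variance Σ²
priors = {0: 0.5, 1: 0.5}     # hypothetical PI_k = N_k / n

def discriminant(x, k):
    # D_k(x) = x * (mean_k / Σ²) - mean_k² / (2 * Σ²) + ln(PI_k)
    m = means[k]
    return x * (m / variance) - (m ** 2) / (2 * variance) + np.log(priors[k])

# The class whose discriminant has the largest value is the prediction
x_new = 6.9
prediction = max(means, key=lambda k: discriminant(x_new, k))
print(prediction)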

How to prepare data for LDA?

Some suggestions you should keep in mind while preparing your data to build your LDA model:

  • LDA is mainly used in classification problems where you have a categorical output variable. It allows both binary classification and multi-class classification.
  • The standard LDA model makes use of the Gaussian distribution of the input variables. You should check the univariate distribution of each attribute and transform it into a more Gaussian-looking distribution. For example, for an exponential distribution use the log or root function, and for skewed distributions use the Box-Cox transform.
  • Outliers can skew the primitive statistics used to separate classes in LDA, so it is preferable to remove them.
  • Since LDA assumes that each input variable has the same variance, it is always better to standardize your data before using an LDA model, so that the mean is 0 and the standard deviation is 1 (see the sketch after this list).
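
A minimal sketch of the last two preparation steps, using scipy's boxcox and scikit-learn's StandardScaler on a hypothetical, strictly positive feature matrix:

import numpy as np
from scipy.stats import boxcox
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = rng.exponential(scale=2.0, size=(100, 3))  # hypothetical skewed features

# Box-Cox requires strictly positive values; transform each column toward Gaussian
X_gauss = np.column_stack([boxcox(X[:, j])[0] for j in range(X.shape[1])])

# Standardize so that every feature has mean 0 and standard deviation 1
X_std = StandardScaler().fit_transform(X_gauss)
print(X_std.mean(axis=0).round(3), X_std.std(axis=0).round(3))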

How to implement an LDA model from scratch?

You can implement a Linear Discriminant Analysis model from scratch using Python. Let’s start by importing the libraries that are required for the model:

from sklearn.datasets import load_wine
import pandas as pd
import numpy as np
np.set_printoptions(precision=4)
from matplotlib import pyplot as plt
import seaborn as sns
sns.set()
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

We will work with the wine dataset, which you can obtain from the UCI machine learning repository. The scikit-learn library in Python provides a wrapper function for downloading it:

wine_info = load_wine()
X = pd.DataFrame(wine_info.data, columns=wine_info.feature_names)
y = pd.Categorical.from_codes(wine_info.target, wine_info.target_names)

The wine dataset comprises 178 rows and 13 columns:

X.shape
(178, 13)

The attributes of the wine dataset comprise various characteristics such as the alcohol content of the wine, magnesium content, color intensity, hue and many more:

X.head()

The wine dataset contains three different kinds of wine:

wine_info.target_names 
array(['class_0', 'class_1', 'class_2'], dtype='<U7')

Now we create a DataFrame that contains both the features and the classes of the dataset:

df = X.join(pd.Series(y, name='class'))

We can divide the process of Linear Discriminant Analysis into 5 steps as follows:

Step 1 - Computing the within-class and between-class scatter matrices.
Step 2 - Computing the eigenvectors and their corresponding eigenvalues for the scatter matrices.
Step 3 - Sorting the eigenvalues and selecting the top k.
Step 4 - Creating a new matrix that will contain the eigenvectors mapped to the k eigenvalues.
Step 5 - Obtaining new features by taking the dot product of the data and the matrix from Step 4.

Within-class scatter matrix

To calculate the within-class scatter matrix, you can use the following mathematical expression:

S_W = S_1 + S_2 + ... + S_c

where c = total number of distinct classes and each S_i is the scatter matrix of one class:

S_i = Sum over x in D_i of (x - m_i)(x - m_i)^T

with the class mean vector

m_i = (1/n_i) * Sum over x in D_i of x

where x = a sample (i.e. a row), D_i = the set of samples in class i and n_i = total number of samples within class i.

Now we create a vector with the mean values of each feature:

feature_means1 = pd.DataFrame(columns=wine_info.target_names)
for c, rows in df.groupby('class'):
    feature_means1[c] = rows.mean()
feature_means1

The mean vectors (m_i) are now plugged into the above equations to obtain the within-class scatter matrix:

withinclass_scatter_matrix = np.zeros((13,13))
for c, rows in df.groupby('class'):
    rows = rows.drop(['class'], axis=1)
    # Scatter matrix of class c: sum of (x - m_c)(x - m_c)^T over its samples
    s = np.zeros((13,13))
    for index, row in rows.iterrows():
        x = row.values.reshape(13,1)
        mc = feature_means1[c].values.reshape(13,1)
        s += (x - mc).dot((x - mc).T)
    withinclass_scatter_matrix += s

Between-class scatter matrix

We can calculate the between-class scatter matrix using the following mathematical expression:

S_B = Sum over i of n_i * (m_i - m)(m_i - m)^T

where

m = the overall mean of the data

and

m_i and n_i = the sample mean vector and size of class i, respectively.

feature_means2 = df.mean()
betweenclass_scatter_matrix = np.zeros((13,13))
for c in feature_means1:
    n = len(df.loc[df['class'] == c].index)
    mc = feature_means1[c].values.reshape(13,1)
    m = feature_means2.values.reshape(13,1)
    betweenclass_scatter_matrix += n * (mc - m).dot((mc - m).T)

Now we will solve the generalized eigenvalue problem for the matrix S_W^-1 * S_B to obtain the linear discriminants:

eigen_values, eigen_vectors = np.linalg.eig(
    np.linalg.inv(withinclass_scatter_matrix).dot(betweenclass_scatter_matrix))

We will sort the eigenvalues from highest to lowest, since the eigenvalues with the highest values carry the most information about the distribution of the data. Next, we will keep the first k eigenvectors. Finally, we will place the eigenvalues in a temporary array to make sure each eigenvalue still maps to the same eigenvector after the sorting is done:

eigen_pairs = [(np.abs(eigen_values[i]), eigen_vectors[:,i]) for i in range(len(eigen_values))]
eigen_pairs = sorted(eigen_pairs, key=lambda x: x[0], reverse=True)
for pair in eigen_pairs:
    print(pair[0])
237.46123198302251
46.98285938758684
1.4317197551638386e-14
1.2141209883217706e-14
1.2141209883217706e-14
8.279823065850476e-15
7.105427357601002e-15
6.0293733655173466e-15
6.0293733655173466e-15
4.737608877108813e-15
4.737608877108813e-15
2.4737196789039026e-15
9.84629525010022e-16

Now we will transform the values into percentages, since otherwise it is difficult to see how much of the variance is explained by each component.

sum_of_eigen_values = sum(eigen_values)
print('Explained Variance')
for i, pair in enumerate(eigen_pairs):
   print('Eigenvector {}: {}'.format(i, (pair[0]/sum_of_eigen_values).real))
Explained Variance
Eigenvector 0: 0.8348256799387275
Eigenvector 1: 0.1651743200612724
Eigenvector 2: 5.033396012077518e-17
Eigenvector 3: 4.268399397827047e-17
Eigenvector 4: 4.268399397827047e-17
Eigenvector 5: 2.9108789097898625e-17
Eigenvector 6: 2.498004906118145e-17
Eigenvector 7: 2.119704204950956e-17
Eigenvector 8: 2.119704204950956e-17
Eigenvector 9: 1.665567688286435e-17
Eigenvector 10: 1.665567688286435e-17
Eigenvector 11: 8.696681541121664e-18
Eigenvector 12: 3.4615924706522496e-18

First, we will create a new matrix W using the first two eigenvectors:

W_matrix = np.hstack((eigen_pairs[0][1].reshape(13,1), eigen_pairs[1][1].reshape(13,1))).real

Next, we will save the dot product of X and W into a new matrix Y:

Y = X∗W

where, X = n x d matrix with n samples and d dimensions.
            Y = n x k matrix with n samples and k dimensions.

In simple terms, Y is the new matrix or the new feature space.

X_lda = np.array(X.dot(W_matrix))

Our next task is to encode every class as a number so that we can incorporate the class labels into our plot; matplotlib cannot handle categorical variables directly. A sketch of this encoding step follows.
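
A minimal sketch of this step, using the LabelEncoder imported earlier (the encoded labels replace the categorical y used so far):

le = LabelEncoder()
y = le.fit_transform(df['class'])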

Finally, we plot the data as a function of the two LDA components, using a different color for each class:

plt.xlabel('LDA1')
plt.ylabel('LDA2')
plt.scatter(
    X_lda[:,0],
    X_lda[:,1],
    c=y,
    cmap='rainbow',
    alpha=0.7,
    edgecolors='b'
)
<matplotlib.collections.PathCollection at 0x7fd08a20e908>

How to implement LDA using scikit-learn?

For implementing LDA using scikit-learn, let's work with the same wine dataset, which is also available from the UCI machine learning repository.

You can use the predefined class LinearDiscriminantAnalysis provided by the scikit-learn library to implement LDA, rather than implementing it from scratch every time:

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda_model = LinearDiscriminantAnalysis()
X_lda = lda_model.fit_transform(X, y)

To obtain the variance corresponding to each component, you can access the following property:

lda_model.explained_variance_ratio_
array([0.6875, 0.3125])

Again, we will plot the two LDA components just like we did before:

plt.xlabel('LDA1')
plt.ylabel('LDA2')
plt.scatter(
    X_lda[:,0],
    X_lda[:,1],
    c=y,
    cmap='rainbow',
    alpha=0.7,
    edgecolors='b'
)
<matplotlib.collections.PathCollection at 0x7fd089f60358>

Linear Discriminant Analysis vs PCA

Below are the differences between LDA and PCA:

  • PCA ignores class labels and focuses on finding the principal components that maximize the variance in the data; it is therefore an unsupervised algorithm. LDA, on the other hand, is a supervised algorithm that finds the linear discriminants representing the axes that maximize the separation between different classes.
  • LDA performs better than PCA on multi-class classification tasks. However, PCA can perform better when the sample size is comparatively small; an example is the comparison of classification accuracies in image classification.
  • Both LDA and PCA are used for dimensionality reduction. When the two are combined, PCA is applied first, followed by LDA.

Let us create and fit an instance of the PCA class:

from sklearn.decomposition import PCA
pca_class = PCA(n_components=2)
X_pca = pca_class.fit_transform(X)

Again, to view the values in percentage for a better understanding, we will access the explained_variance_ratio_ property:

pca_class.explained_variance_ratio_
array([0.9981, 0.0017])

Clearly, PCA selects the components that retain the most information, ignoring the ones that maximize the separation between classes.

plt.xlabel('PCA1')
plt.ylabel('PCA2')
plt.scatter(
    X_pca[:,0],
    X_pca[:,1],
    c=y,
    cmap='rainbow',
    alpha=0.7,
    edgecolors='b'
)

Now, to create a classification model using the LDA components as features, we will divide the data into training and testing datasets:

X_train, X_test, y_train, y_test = train_test_split(X_lda, y, random_state=1)

The next thing we will do is create a Decision Tree classifier. Then we will predict the category of each test sample and create a confusion matrix to evaluate the LDA model's performance:

classifier = DecisionTreeClassifier()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
confusion_matrix(y_test, y_pred)
array([[18,  0,  0], 
       [ 0, 17,  0], 
       [ 0,  0, 10]])

So it is clear that the Decision Tree Classifier has correctly classified everything in the test dataset.

What are the extensions to LDA?

LDA is considered a very simple and effective method, especially for classification. Because it is simple and well understood, it has many extensions and variations:

  • Quadratic Discriminant Analysis (QDA) – Each class uses its own estimate of variance (or covariance, when there are multiple input variables).
  • Flexible Discriminant Analysis (FDA) – Non-linear combinations of inputs, such as splines, are used.
  • Regularized Discriminant Analysis (RDA) – Moderates the influence of individual variables in LDA by regularizing the estimate of the covariance (see the sketch after this list).
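
Two of these variants can be tried directly in scikit-learn; the following is a minimal sketch reusing the wine data from above, where the shrinkage option plays the role of RDA-style regularization of the covariance estimate:

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis

# QDA: each class gets its own covariance estimate
qda_model = QuadraticDiscriminantAnalysis()
qda_model.fit(X, y)

# RDA-style regularization: shrinkage regularizes the covariance estimate
rda_model = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
rda_model.fit(X, y)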

Real-Life Applications of LDA

Some of the practical applications of LDA are listed below:

  • Face Recognition – LDA is used in face recognition to reduce the number of attributes to a more manageable number before the actual classification. The generated dimensions are linear combinations of pixels that form a template; these are called Fisher's faces.
  • Medical – You can use LDA to classify a patient's disease as mild, moderate or severe, based on the patient's various parameters and medical trajectory.
  • Customer Identification – You can obtain the features of customers by performing a simple question-and-answer survey. LDA helps in identifying and selecting the features that describe the properties of the group of customers most likely to buy a particular item in a shopping mall.

Summary

Let us take a look at the topics we have covered in this article: 

  • Dimensionality Reduction and need for LDA 
  • Working of an LDA model 
  • Representation, Learning, Prediction and preparing data in LDA 
  • Implementation of an LDA model 
  • Implementation of LDA using scikit-learn 
  • LDA vs PCA 
  • Extensions and Applications of LDA 

Linear Discriminant Analysis in Python is a simple and well-understood approach to classification in machine learning. Though there are alternatives such as Logistic Regression for classification and PCA for dimensionality reduction, LDA is preferred in many special classification cases. If you want to be an expert in machine learning, a solid knowledge of Linear Discriminant Analysis will take you a long way. Enrol in our Data Science and Machine Learning Courses for more lucrative career options in this landscape and become a certified Data Scientist.

Priyankur Sarkar

Data Science Enthusiast

Priyankur Sarkar loves to play with data and get insightful results out of it, then turn those data insights and results into business growth. He is an electronics engineer with versatile experience as an individual contributor and leading teams, and has actively worked towards building Machine Learning capabilities for organizations.

Join the Discussion

Your email address will not be published. Required fields are marked *

Suggested Blogs

How to become a dependable Data Scientist

The job profile of the data scientist looks set to retain its title as the 21st century’s hottest job through 2020 and beyond. A recent study by IBM found that the demand for data scientist will soar by a whopping 28% in 2020. For aspiring data professionals, this is good news as it means an abundance of data science opportunities. Check out this infographic to find out what you can do to capitalize on the growing opportunity in data science.
Rated 4.5/5 based on 12 customer reviews
How to become a dependable Data Scientist

The job profile of the data scientist looks set to... Read More

Essential Skills to Become a Data Scientist

The demand for Data Science professionals is now at an all-time high. There are companies in virtually every industry looking to extract the most value from the heaps of information generated on a daily basis.With the trend for Data Science catching up like never before, organizations are making complete use of their internal data assets to further examine the integration of hundreds of third-party data sources. What is crucial here is the role of the data scientists.Not very long back, the teams playing the key role of working on the data always found their places in the back rooms of multifold IT organizations. The teams though sitting on the backseat would help in steering the various corporate systems with the required data that acted as the fuel to keep the activities running. The critical database tasks performed by the teams responsible allowed corporate executives to report on operations activities and deliver financial results.When you take up a career in Data Science, your previous experience or skills do not matter. As a matter of fact, you would need a whole new range of skills to pursue a career in Data Science. Below are the skills required to become a top dog in Data Science.What should Data Scientists knowData scientists are expected to have knowledge and expertise in the following domains:The areas arch over dozens of languages, frameworks, and technologies that data scientists need to learn. Data scientists should always have the curiosity to amass more knowledge in their domain so that they stay relevant in this dynamic field.The world of Data Science demands certain important attributes and skills, according to IT leaders, industry analysts, data scientists, and others.How to become a Data Scientist?A majority of Data scientists already have a Master’s degree. If Master’s degree does not quench their thirst for more degrees, some even go on to acquire PhD degrees. Mind you, there are exceptions too. It isn’t mandatory that you should be an expert in a particular subject to become a Data Scientist. You could become one even with a qualification in Computer Science, Physical Sciences, Natural Sciences, Statistics or even Social Sciences. However, a degree in Mathematics and Statistics is always an added benefit for enhanced understanding of the concepts.Qualifying with a degree is not the end of the requirements. Brush up your skills by taking online lessons in a special skill set of your choice — get certified on how to use Hadoop, Big Data or R. You can also choose to enroll yourself for a Postgraduate degree in the field of Data Science, Mathematics or any other related field.Remember, learning does not end with earning a degree or certification. You need to practice what you learned — blog and share your knowledge, build an app and explore other avenues and applications of data.The Data Scientists of the modern world have a major role to play in businesses across the globe. They have the ability to extract useful insights from vast amounts of raw data using sophisticated techniques. The business acumen of the Data Scientists help a big deal in predicting what lies ahead for enterprises. 
The models that the Data Scientists create also bring out measures to mitigate potential threats if any.Take up organizational challenges with ABCDE skillsetAs a Data Scientist, you may have to face challenges while working on projects and finding solutions to problems.A = AnalyticsIf you are a Data Scientist, you are expected not just to study the data and identify the right tools and techniques; you need to have your answers ready to all the questions that come across while you are strategizing on working on a solution with or without a business model.B = Business AcumenOrganizations vouch for candidates with strong business acumen. As a Data Scientist, you are expected to showcase your skills in a way that will make the organization stand one step ahead of the competition. Undertaking a project and working on it is not the end of the path scaled by you. You need to understand and be able to make others understand how your business models influence business outcomes and how the outcomes will prove beneficial to the organization.C = CodingAnd a Data Scientist is expected to be adept at coding too. You may encounter technical issues where you need to sit and work on codes. If you know how to code, it will make you further versatile in confidently assisting your team.D = DomainThe world does not expect Data Scientists to be perfect with knowledge of all domains. However, it is always assumed that a Data Scientist has know-how of various industrial operations. Reading helps as a plus point. You can gain knowledge in various domains by reading the resources online.E = ExplainTo be a successful Data Scientist, you should be able to explain the problem you are faced with to figure out a solution to the problem and share it with the relevant stakeholders. You need to create a difference in the way you explain without leaving any communication gaps.The Important Skills for a Data ScientistLet us now understand the important skills to become an expert Data Scientist – all the skills that go in, to become one. The skills are as follows:Critical thinkingCodingMathML, DL, AICommunication1. Critical thinkingData scientists need to keep their brains racing with critical thinking. They should be able to apply the objective analysis of facts when faced with a complex problem. Upon reaching a logical analysis, a data scientist should formulate opinions or render judgments.Data scientists are counted upon for their understanding of complex business problems and the risks involved with decision-making. Before they plunge into the process of analysis and decision-making, data scientists are required to come up with a 'model' or 'abstract' on what is critical to coming up with the solution to a problem. Data scientists should be able to determine the factors that are extraneous and can be ignored while churning out a solution to a complex business problem.According to Jeffry Nimeroff, CIO at Zeta Global, which provides a cloud-based marketing platform – A data scientist needs to have experience but also have the ability to suspend belief...Before arriving at a solution, it is very important for a Data Scientist to be very clear on what is being expected and if the expected solution can be arrived at. It is only with experience that your intuition works stronger. Experience brings in benefits.If you are a novice and a problem is posed in front of you; all that the one who put the problem in front of you would get is a wide-eyed expression, perhaps. 
Instead, if you have hands-on experience of working with complex problems no matter what, you will step back, look behind at your experience, draw some inference from multiple points of view and try assessing the problem that is put forth.In simple steps, critical thinking involves the following steps:a. Describe the problem posed in front of you.b. Analyse the arguments involved – The IFs and BUTs.c. Evaluate the significance of the decisions being made and the successes or failures thereafter.2. CodingHandling a complex task might at times call for the execution of a chain of programming tasks. So, if you are a data scientist, you should know how to go about writing code. It does not stop at just writing the code; the code should be executable and should be crucial in helping you find a solution to a complex business problem.In the present scenario, Data Scientists are more inclined towards learning and becoming an expert with Python as the language of choice. There is a substantial crowd following R as well. Scala, Clojure, Java and Octave are a few other languages that find prominence too.Consider the following aspects to be a successful Data Scientist that can dab with programming skills –a) You need to deal with humongous volumes of data.b) Working with real-time data should be like a cakewalk for you.c) You need to hop around cloud computing and work your way with statistical models like the ones shown below:Different Statistical ModelsRegressionOptimizationClusteringDecision treesRandom forestsData scientists are expected to understand and have the ability to code in a bundle of languages – Python, C++ or Java.Gaining the knack to code helps Data Scientists; however, this is not the end requirement. A Data Scientist can always be surrounded by people who code.3. MathIf you have never liked Mathematics as a subject or are not proficient in Mathematics, Data Science is probably not the right career choice for you.You might own an organization or you might even be representing it; the fact is while you engage with your clients, you might have to look into many disparate issues. To deal with the issues that lay in front of you, you will be required to develop complex financial or operational models. To finally be able to build a worthy model, you will end up pulling chunks from large volumes of data. This is where Mathematics helps you.If you have the expertise in Mathematics, building statistical models is easier. Statistical models further help in developing or switching over to key business strategies. With skills in both Mathematics and Statistics, you can get moving in the world of Data Science. Spell the mantra of Mathematics and Statistics onto your lamp of Data Science, lo and behold you can be the genie giving way to the best solutions to the most complex problems.4. Machine learning, Deep Learning, AIData Science overlaps with the fields of Machine Learning, Deep Learning and AI.There is an increase in the way we work with computers, we now have enhanced connectivity; a large amount of data is being collected and industries make use of this data and are moving extremely fast.AI and deep learning may not show up in the requirements of job postings; yet, if you have AI and deep learning skills, you end up eating the big pie.A data scientist needs to be hawk-eyed and alert to the changes in the curve while research is in progress to come up with the best methodology to a problem. Coming up with a model might not be the end. 
A Data Scientist must be clear as to when to apply which practice to solve a problem without making it more complex.Data scientists need to understand the depth of problems before finding solutions. A data scientist need not go elsewhere to study the problems; all that is there in the data fetched is what is needed to bring out the best solution.A data scientist should be aware of the computational costs involved in building an environment and the following system boundary conditions:a. Interpretabilityb. Latencyc. BandwidthStudying a customer can act as a major plus point for both a data scientist and an organization… This helps in understanding what technology to apply.No matter how generations advance with the use of automated tools and open source is readily available, statistical skills are considered the much-needed add-ons for a data scientist.Understanding statistics is not an easy job; a data scientist needs to be competent to comprehend the assumptions made by the various tools and software.Experts have put forth a few important requisites for data scientists to make the best use of their models:Data scientists need to be handy with proper data interpretation techniques and ought to understand –a. the various functional interfaces to the machine learning algorithmsb. the statistics within the methodsIf you are a data scientist, try dabbing your profile with colours of computer science skills. You must be proficient in working with the keyboard and have a sound knowledge of fundamentals in software engineering.5. CommunicationCommunication and technology show a cycle of operations wherein, there is an integration between people, applications, systems, and data. Data science does not stand separate in this. Working with Data Science is no different. As a Data Scientist, you should be able to communicate with various stakeholders. Data plays a key attribute in the wheel of communication.Communication in Data Science ropes in the ‘storytelling’ ability. This helps you translate a solution you have arrived at into action or intervention that you have put in the pipeline. As a Data Scientist, you should be adept at knitting with the data you have extracted and communicated it clearly to your stakeholders.What does a data scientist communicate to the stakeholders?The benefits of dataThe technology and the computational costs involved in the process of extracting and making use of the dataThe challenges posed in the form of data quality, privacy, and confidentialityA Data Scientist also needs to keep an eye on the wide horizons for better prospects. The organization can be shown a map highlighting other areas of interest that can prove beneficial.If you are a Data Scientist with different feathers in your cap, one being that of a good communicator, you should be able to change a complex form of technical information to a simple and compact form before you present it to the various stakeholders. The information should highlight the challenges, the details of the data, the criteria for success and the anticipated results.If you want to excel in the field of Data Science, you must have an inquisitive bent of mind. The more you ask questions, the more information you gather, the easier it is to come up with paramount business models.6. Data architectureLet us draw some inference from the construction of a building and the role of an architect. 
Architects have the most knowledge of how the different blocks of buildings can go together and how the different pillars for a block make a strong support system. Like how architects manage and coordinate the entire construction process, so do the Data Scientists while building business models.A Data Scientist needs to understand all that happens to the data from the inception level to when it becomes a model and further until a decision is made based on the model.Not understanding the data architecture can have a tremendous impact on the assumptions made in the process and the decisions arrived at. If a Data Scientist is not familiar with the data architecture, it may lead to the organization taking wrong decisions leading to unexpected and unfavourable results.A slight change within the architecture might lead to situations getting worse for all the involved stakeholders.7. Risk analysis, process improvement, systems engineeringA Data Scientist with sharp business acumen should have the ability to analyse business risks, suggest improvements if any and facilitate further changes in various business processes. As a Data Scientist, you should understand how systems engineering works.If you want to be a Data Scientist and have sharp risk analysis, process improvement and systems engineering skills, you can set yourself for a smooth sail in this vast sea of Data Science.And, rememberYou will no more be a Data Scientist if you stop following scientific theories… After all, Data Science in itself is a major breakthrough in the field of Science.It is always recommended to analyse all the risks that may confront a business before embarking on a journey of model development. This helps in mitigating risks that an organization may have to encounter later. For a smooth business flow, a Data Scientist should also have the nature to probe into the strategies of the various stakeholders and the problems encountered by customers.A Data Scientist should be able to get the picture of the prevailing risks or the various systems that can have a whopping impact on the data or if a model can lead to positive fruition in the form of customer satisfaction.8. Problem-solving and strong business acumenData scientists are not very different when compared to the commoners. We can say this on the lines of problem-solving. The problem solving traits are inherent in every human being. What makes a data scientist stand apart is very good problem-solving skills. We come across complex problems even in everyday situations. How we differ in solving problems is in the perspectives that we apply. Understanding and analyzing before moving on to actually solving the problems by pulling out all the tools in practice is what Data Scientists are good at.The approach that a Data Scientist takes to solve a problem reaps more success than failure. With their approach, they bring critical thinking to the forefront.  Finding a Data Scientist with skill sets at variance is a problem faced by most of the employers.Technical Skills for a Data ScientistWhen the employers are on a hunt to trap the best, they look out for specialization in languages, libraries, and expertise in tech tools. If a candidate comes in with experience, it helps in boosting the profile.Let us see some very important technical skills:PythonRSQLHadoop/Apache SparkJava/SASTableauLet us briefly understand how these languages are in demand.PythonPython is one of the most in-demand languages. This has gained immense popularity as an open-source language. 
R

R is a programming language built by and for statisticians. Anyone with a mathematical bent of mind can pick it up; if you do not appreciate the nuances of Mathematics, however, R is harder to grasp. That never means you cannot learn it, but without that mathematical grounding you cannot harness its full power.

SQL

Structured Query Language or SQL is also highly in demand. The language helps in interacting with relational databases. Though it may look less glamorous than the other tools here, know-how in SQL earns you a firm stand in the job market. A small sketch of SQL at work from Python follows.
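As a minimal sketch, the snippet below runs SQL from Python using the standard library's sqlite3 module against a throwaway in-memory database; the customers table and its rows are invented for the example.

```python
# A minimal sketch: running SQL from Python against an in-memory database.
# The "customers" table and its rows are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (name TEXT, city TEXT, spend REAL)")
cur.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [("Asha", "Pune", 120.0), ("Ben", "Delhi", 80.5), ("Chen", "Pune", 200.0)],
)
conn.commit()

# Aggregate spend per city, highest first.
for city, total in cur.execute(
    "SELECT city, SUM(spend) AS total FROM customers GROUP BY city ORDER BY total DESC"
):
    print(city, total)
conn.close()
```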
Hadoop & Spark

Both Hadoop and Spark are open source tools from Apache for big data. Apache Hadoop is an open source software platform that helps when you have large data sets on computer clusters built from commodity hardware and find it difficult to store and process them. Apache Spark is a lightning-fast cluster computing and data processing engine. It comes with a bunch of development APIs and supports data workers with efficient execution of streaming, machine learning or SQL workloads. A brief PySpark sketch follows.
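As a hedged sketch of Spark's data processing API, the snippet below uses PySpark to aggregate a CSV in parallel; it assumes pyspark is installed, and the sales.csv file with its region column is the same hypothetical example as before.

```python
# A minimal PySpark sketch: distributed aggregation over a CSV file.
# Assumes pyspark is installed and a hypothetical "sales.csv" with a
# "region" column exists; Spark's real value shows on much larger data.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sales-demo").getOrCreate()

df = spark.read.csv("sales.csv", header=True, inferSchema=True)
df.groupBy("region").count().show()   # rows per region, computed in parallel

spark.stop()
```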
Java & SAS

We also have Java and SAS joining the league of languages. These are in demand with large players, and employers offer whopping packages to candidates with expertise in them.

Tableau

Tableau joins the list as an analytics platform and visualization tool. The tool is powerful and user-friendly. The public version of the tool is available for free; if you wish to keep your data private, you have to consider the costs involved too.

Easy tips for a Data Scientist

Let us see the in-demand skill set for a Data Scientist in brief:
a. A Data Scientist should have the acumen to handle data processing and set up models that will help various business processes.
b. A Data Scientist should understand the depth of a business problem and the structure of the data that will be used in solving it.
c. A Data Scientist should always be ready with an explanation of how the created business models work; even the minute details count.

A majority of the crowd out there is good at Maths, Statistics, Engineering or other related subjects. However, when interviewed, they may not show the required traits, and when recruited may fail to shine in their performance. Sometimes the recruitment process to hire a Data Scientist gets so tedious that employers end up searching with lanterns even in broad daylight. Further, the graphical representation below shows some smart tips for smart Data Scientists.

Smart tips for a Data Scientist

What do employers seek the most from Data Scientists?

Let us now throw some light on what employers seek the most from Data Scientists:
a. A strong sense of analysis.
b. Machine learning at the core of what they do.
c. The ability to infer from and refer to data that has been in practice and will be in practice.
d. Adeptness at Machine Learning and at creating models that predict performance on the basis of demand.
e. And, a big nod to a combined skill set of Statistics, Computer Science and Mathematics.

The following screenshot shows the requirements of a top-notch employer for a Data Scientist; the requirements were posted on a job-listing website. Let us take a sneak peek into the same job-listing website and see the skills in demand for a Data Scientist.

Recommendations for a Data Scientist

What are some general recommendations for Data Scientists in the present scenario? Let us walk you through a few:
Demonstrate your skills with data analysis and aim to become learned at Machine Learning.
Focus on your communication skills. You will have a tough time in your career if you cannot show what you have and cannot communicate what you know. Experts recommend reading Made to Stick for the far-reaching impact of the ideas you generate.
Gain proficiency in deep learning. You must be familiar with the usage, interest, and popularity of deep learning frameworks.
If you are wearing the hat of a Python expert, you must also have the know-how of common Python data science libraries: numpy, pandas, matplotlib, and scikit-learn. A short sketch combining them appears below.
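Tying those four libraries together, here is a small illustrative sketch; the advertising data in it is invented, and a simple linear regression stands in for whatever model a real project would use.

```python
# A small illustrative workflow tying together the four libraries named above.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# Invented data: advertising spend vs. sales, with some noise.
rng = np.random.default_rng(0)
spend = rng.uniform(1, 10, size=50)
sales = 3.0 * spend + rng.normal(0, 2, size=50)
df = pd.DataFrame({"spend": spend, "sales": sales})

# Fit a simple linear model with scikit-learn.
model = LinearRegression().fit(df[["spend"]], df["sales"])
print("slope:", model.coef_[0], "intercept:", model.intercept_)

# Visualize the fit with matplotlib.
plt.scatter(df["spend"], df["sales"], label="data")
plt.plot(df["spend"], model.predict(df[["spend"]]), color="red", label="fit")
plt.xlabel("spend")
plt.ylabel("sales")
plt.legend()
plt.show()
```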
Conclusion

Data Science is all about contributing more data to the technologically advanced world. Make your online presence a worthy one; learn while you earn. Start by browsing through online portals. If you are a professional, make your mark on LinkedIn; securing a job through LinkedIn is now easier than scouring through job sites. Demonstrate all the skills you are good at on the social portals you are associated with. If you write an article on LinkedIn, do not refrain from sharing the link on your Facebook account.

Most important of all: when faced with a complex situation, understand why and what led to the problem. A deeper understanding of a problem will help you come up with the best model, and the more you empathize with a situation, the higher your success count will be. In no time, you can become that extraordinary whiz in Data Science.

Wishing you immense success if you happen to choose or have already chosen Data Science as your career path. All the best for your career endeavour!
Top Data Science Trends in 2020

Industry experts are of the view that 2020 will be a huge year for data science and AI. The AI market is expected to grow to $118.6 billion by 2025. The focus areas in the overall AI market will include everything from natural language processing to robotic process automation. Since the beginning of the digital era, data has been growing at the speed of light, and there will only be a surge in this growth. New data will not only generate more innovative use cases but also spearhead a revolution of innovation.

About 77% of the devices today have AI incorporated into them. Smart devices, Netflix recommendations, Amazon's Alexa, and Google Home have transformed the way we live in the digital age. The renowned AI-powered virtual nurses "Molly" and "Angel" have taken healthcare to new heights, and robots have already been performing various surgical procedures.

Dynamic technologies like data science and AI have some intriguing trends to watch out for in 2020. Check out the top 6 data science trends in 2020 that any data science enthusiast should know:

1. Advent of Deep Learning

Simply put, deep learning is a machine learning technique that trains computers to think and act like humans, i.e., by example. Deep learning models have since proven their efficacy by exceeding human performance on several tasks. They are usually trained using a large set of labelled data and multi-layered neural network architectures.

What's new for Deep Learning in 2020?

In 2020, deep learning will be quite significant. Its capacity to foresee and understand human behaviour, and the ways enterprises can utilize this knowledge to stay ahead of their competitors, will come in handy. A small sketch of the underlying idea follows.
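To make the idea concrete, here is a minimal sketch of training a multi-layered network on labelled examples, using scikit-learn's MLPClassifier and its small bundled digits dataset; a production deep learning system would instead use a dedicated framework such as TensorFlow or PyTorch and far larger datasets.

```python
# A minimal sketch of "learning by example" with a multi-layered neural network.
# Uses scikit-learn's small bundled digits dataset; real deep learning would
# use a dedicated framework (TensorFlow, PyTorch) and far larger labelled sets.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Two hidden layers make this a multi-layered architecture.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=42)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```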
2. Spotlight on Augmented Analytics

Also hailed as the future of Business Intelligence, augmented analytics employs machine learning/artificial intelligence (ML/AI) techniques to automate data preparation, insight discovery and sharing, and data science and ML model development, management and deployment. This can be greatly beneficial for companies seeking to improve their offerings and customer experience. The global augmented analytics market size is projected to reach $29,856 million by 2025, growing at a CAGR of 28.4% from 2018 to 2025.

What's new for Augmented Analytics in 2020?

This year, augmented analytics platforms will help enterprises leverage their social component. The use of interactive dashboards and visualizations in augmented analytics will help stakeholders share important insights and create a crystal-clear narrative that echoes the company's mission.

3. Impact of IoT, ML and AI

2020 will see the rise of AI/ML, 5G, cybersecurity and IoT. The rise in automation will create opportunities for new skills to be explored, and upskilling in emerging technologies will make professionals competent in today's dynamic tech space. As per a survey by IDC, over 75% of organizations will invest in reskilling programs for their workforce to bridge the rising skill gap by 2025.

What's new for IoT, ML and AI in 2020?

It has been estimated that over 24 billion devices will be connected to the Internet of Things this year. This means industries can make a world of difference by developing smart devices that change the way we live.

4. Better Mobile Analytics Strategies

Mobile analytics deals with measuring and analysing data created across mobile sites and applications alone. It helps businesses keep track of the behaviour of their users on mobile sites and apps. This technology will aid in boosting the cross-channel marketing initiatives of an enterprise, while optimizing the mobile experience and growing user engagement and retention.

What's new for Mobile Analytics in 2020?

With the ever-increasing number of mobile phone users globally, there will be a heightened focus on mobile app marketing and app analytics. Currently, mobile advertising ranks first in digital advertising worldwide. This has made mobile analytics quintessential, as businesses today can track in-app traffic, potential security threats, and levels of customer satisfaction.

5. Enhanced Levels of Customization

Access to real-time data and customer behaviour has made it possible to cater to each customer's specific needs. As customer expectations soar, companies will have to buckle up to deliver more personalized, relevant and superior customer experiences. The use of data and AI will make this possible.

What's new for User Experience in 2020?

User experience can be interpreted easily using data derived from conversions, pageviews, and other user actions. These insights will help user experience professionals make better decisions while giving users exactly what they need.

6. Better Cybersecurity

2019 brought to light the grim reality of data privacy and security breaches. With over 24 billion devices estimated to be connected to the internet this year, enterprises will deploy stringent measures to protect data privacy and prevent security breaches. Industry experts are of the view that combining cybersecurity with AI-enabled technology will lead to attack surface coverage that is 20x more effective than traditional methods.

What's new for Cybersecurity in 2020?

Since data shows no signs of slowing its growth, the number of threats will also keep looming on the horizon. In 2020, cybersecurity professionals will have to gear up and conjure new and improved ways to secure data. AI-supported cybersecurity measures will be deployed to prevent malicious attacks from malware and ensure better security.

The Way Ahead in 2020

Data Science will be one of the fastest-growing technologies in 2020. Its wide range of applications in various industries will pave the way for more innovative trends like the ones above.