Support Vector Machines in Machine Learning

While many classifiers, such as logistic regression, can handle linearly separable data, Support Vector Machines can also tackle highly non-linear problems using the kernel trick, which implicitly maps the input vectors into higher-dimensional feature spaces. This transformation rearranges the dataset so that it becomes linearly separable. In this article we will look at how SVM works, learn about kernel functions, hyperparameters, and the pros and cons of SVM, along with some real-life applications of SVM. 

Support Vector Machines (SVMs), also known as support vector networks, are a family of powerful models built on kernel method based learning, and they can be used for both classification and regression problems. They aim at finding decision boundaries that separate observations with differing class memberships. In other words, an SVM is a discriminative classifier formally defined by a separating hyperplane.

Method Based Learning


There are several learning models, namely:

  • Association rules based
  • Ensemble method based
  • Deep Learning based
  • Clustering method based
  • Regression Analysis based
  • Bayesian method based
  • Dimensionality reduction based
  • Instance based
  • Kernel method based

Let us understand what Kernel method based learning is all about.

In simple terms, a kernel is a similarity function which is fed into a machine learning algorithm: it accepts two inputs and tells us how similar they are. For example, suppose we want to classify images and the input data is a set of (image, label) pairs. Normally, the image data is taken, features are computed, and a vector of features is fed into the machine learning algorithm. With similarity functions, instead, a kernel function can be defined which internally computes the similarity between images, and this is fed into the learning algorithm along with the image and label data. The outcome in either case is a classifier. 

Kernel-based frameworks such as the perceptron and Support Vector Machines work with kernels and operate on vectors: the machine learning algorithm is expressed entirely in terms of dot products between inputs, so that kernel functions can be substituted for those dot products.

Kernels are generally preferred over explicit feature vectors, for two main reasons: dot products are cheap to compute, and explicit feature vectors need more storage space than dot products do. You can write machine learning algorithms to use dot products and later map them to use kernels, which avoids constructing feature vectors altogether. This allows us to work with highly complex, efficient-to-compute, and yet high-performing kernels effortlessly, without ever really building multi-dimensional feature vectors.
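As a quick numerical sketch of this idea (the helper names poly_kernel and phi below are purely illustrative, not from any library): a degree-2 polynomial kernel computed directly from a dot product gives exactly the same value as an explicit dot product in the expanded feature space, without ever constructing that space.

import numpy as np

def poly_kernel(x, z):
    # degree-2 polynomial kernel, computed directly from the dot product
    return np.dot(x, z) ** 2

def phi(x):
    # the explicit degree-2 feature map for a 2-D input [x1, x2]
    x1, x2 = x
    return np.array([x1 * x1, x2 * x2, np.sqrt(2) * x1 * x2])

x = np.array([1.0, 2.0])
z = np.array([3.0, 4.0])
print(poly_kernel(x, z))           # 121.0
print(np.dot(phi(x), phi(z)))      # 121.0 -- same result, no explicit mapping needed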

Kernel functions

Let us understand what kernel functions are with a simple 1-dimensional example. The figure below shows the given points arranged on a line: for this arrangement, no single vertical line (threshold) can separate the two classes of the dataset.

Kernel functions in Machine Learning

Now, if we consider the 2-dimensional representation shown in the figure below, there is a hyperplane (an arbitrary line in 2 dimensions) which separates the red and blue points, and this separation can be found using Support Vector Machines.

Kernel functions In Machine Learning

As we keep increasing the dimensionality of the space, the data eventually becomes possible to separate linearly. The mapping used here, x -> (x, x²), is called the kernel (feature) mapping. As the dimensional space grows, the computations become more complex, and the kernel trick is applied to carry out these computations cheaply. 
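As a small, self-contained illustration of that 1-D to 2-D mapping (the points and the threshold 2.5 below are made up for the example): the classes cannot be split by a single threshold on x, but after mapping x -> (x, x²) a horizontal line does the job.

import numpy as np

x = np.array([-3, -2, -1, 0, 1, 2, 3], dtype=float)
y = np.array([1, 1, -1, -1, -1, 1, 1])      # outer points vs inner points

# no single threshold on x separates the two classes...
mapped = np.column_stack([x, x ** 2])       # ...but after x -> (x, x^2)
print((mapped[:, 1] > 2.5) == (y == 1))     # all True: the line x^2 = 2.5 separates them in 2-D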

What is Support Vector Machine? 

Support Vector Machine (SVM) is a supervised machine learning algorithm which can be used for both classification and regression challenges. However, it is mostly used for classification problems. In this algorithm, each data item is plotted as a point in n-dimensional space (where n is the number of features you have), with the value of each feature being the value of a particular coordinate. We then perform classification by finding the hyperplane that best separates the two classes.

Let us create a dataset to understand support vector classification:

# importing make_blobs from scikit-learn, plus numpy and matplotlib for plotting
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
# creating dataset X containing n_samples points
# and Y containing two classes
X, Y = make_blobs(n_samples=500, centers=2,
                  random_state=0, cluster_std=0.40)
# plotting the scatter of the two classes
plt.scatter(X[:, 0], X[:, 1], c=Y, s=50, cmap='spring')
plt.show()

Support Vector Machine Graph in Machine Learning

Support vector machine is based on the concept of decision planes that define decision boundaries. A decision plane is one that separates between a set of objects with different class memberships. For example, in the figure mentioned below, there are objects which belong to either class Green or Red. The separating line defines a boundary on the right side of which all objects are Green and to the left of which all objects are Red. Any new object (white circle) falling to the right is labeled, i.e., classified, as Green (or classified as Red should it fall to the left of the separating line).

Support vector machine Boundaries in machine Learning

Support vector machines not only draw a line between two classes, but consider a region about the line of some given width. Here’s an example of what it can look like:

# creating line space between -1 to 3.5 
xfit = np.linspace(-1, 3.5) 
# plotting scatter 
plt.scatter(X[:, 0], X[:, 1], c=Y, s=50, cmap='spring')

# plot a line between the different sets of data 
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]: 
    yfit = m * xfit + b 
    plt.plot(xfit, yfit, '-k') 
    plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', 
    color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
plt.show()

Support vector machine Graph in Machine Learning

In another scenario, a full separation of the Green and Red objects would require a curve, which is more complex than a line. Classification tasks based on drawing separating lines to distinguish between objects of different class memberships are known as hyperplane classifiers. Support Vector Machines are particularly suited to such tasks.

hyperplane classifiers in Machine Learning

The figure below shows the basic idea behind Support Vector Machines. The original objects (left side of the schematic) are rearranged using a set of mathematical functions called kernels. This process of rearranging the objects is known as mapping or transformation. Notice that the right side of the schematic is linearly separable: all we need to do is find an optimal line that separates the red and green objects.

 kernels in Machine Learning
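A small runnable sketch of this mapping idea using scikit-learn (the dataset and parameter values here are chosen only for illustration): on two concentric rings, a linear SVM cannot do much better than chance, while an RBF-kernel SVM, which implicitly performs such a mapping, separates the classes almost perfectly.

from sklearn.datasets import make_circles
from sklearn.svm import SVC

# two concentric rings: no straight line separates the classes in the original 2-D space
X_c, y_c = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_clf = SVC(kernel='linear').fit(X_c, y_c)
rbf_clf = SVC(kernel='rbf').fit(X_c, y_c)

print(linear_clf.score(X_c, y_c))   # roughly 0.5 -- a line cannot separate the rings
print(rbf_clf.score(X_c, y_c))      # close to 1.0 -- the kernel mapping makes them separable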

What is a hyperplane?

Hyperplane in Machine Learning

The goal of a Support Vector Machine is to find the hyperplane which separates these two objects or classes. The figure below shows some of the possible hyperplanes which can divide the dataset; choosing the best one among them is the goal. The best hyperplane is the one that leaves the maximum margin for both classes, where the margin is the distance between the hyperplane and the closest point of either class.

Hyperplane in Machine Learning

Let us consider two hyperplanes among all the candidates and check the margins represented by M1 and M2. You will notice that margin M1 > M2, so the hyperplane which separates the classes best is the new plane between the green and blue planes.

Hyperplane in Machine Learning

How do we find the right hyperplane?

Now, let us represent the new plane by a linear equation as: 

f(x) = ax + b

Let us consider that this equation delivers values ≥ 1 for the green triangle class and ≤ -1 for the gold star class. The distance of this plane from the closest points of both classes is then at least one, and for those closest points the modulus is exactly one: 

f(x) ≥ 1 for triangles and f(x) ≤ -1 for stars, with |f(x)| = 1 at the closest points

The distance between the hyperplane and the point can be computed using the following equation. 

M1 = |f(x)| / ||a|| = 1 / ||a||

The total margin is 1 / ||a|| + 1 / ||a|| = 2 / ||a||

In order to maximize the separability, that is, the total margin 2 / ||a||, we have to minimize ||a||. Here a is known as the weight vector. Minimizing the weight vector subject to the constraints |f(x)| ≥ 1 for all training points is a constrained (quadratic) optimization task. One of the methods to solve it is to use the Karush-Kuhn-Tucker (KKT) conditions, introducing a Lagrange multiplier λi for each constraint.

Hyperplane Equation in Machine Learning

Hyperplane Graph in Machine Learning

What is a support vector in SVM?

support vector in SVM

Let's take an example of two points plotted on the two attributes X and Y, one from each class. We need to find the decision boundary that lies between these two points and has the maximum possible distance from each of them. This requirement is represented in the graph depicted next; the optimal point through which the boundary passes is depicted using the red circle.

support vector in SVM

The maximum margin weight vector is parallel to the line from (1, 1) to (2, 3), so the weight vector has direction (1, 2). The decision boundary is perpendicular to it and halfway between the two points, so it passes through (1.5, 2).

This gives the boundary x1 + 2x2 − 5.5 = 0, and the geometric margin is computed as √5

Following are the steps to compute SVMs: 

Writing w = (a, 2a), the constraints for the points (1,1) and (2,3) can be represented as shown here: 

a + 2a + ω0 = -1 for the point (1,1) 

2a + 6a + ω0 = 1 for the point (2,3) 

The weights can be computed by solving these two equations:

3a + ω0 = -1 and 8a + ω0 = 1, which gives a = 2/5 and ω0 = -11/5, so w = (a, 2a) = (2/5, 4/5).

These are the support vectors: the points (1, 1) and (2, 3) themselves, since both lie exactly on the margin.

Lastly, the final equation of the decision boundary is:

(2/5) x1 + (4/5) x2 - 11/5 = 0, or equivalently x1 + 2 x2 - 5.5 = 0
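We can sanity-check this worked example with scikit-learn (a sketch: a very large C is used here to approximate a hard-margin SVM, and the printed values are approximate):

import numpy as np
from sklearn.svm import SVC

X_pts = np.array([[1.0, 1.0], [2.0, 3.0]])
y_pts = np.array([-1, 1])

clf = SVC(kernel='linear', C=1e6).fit(X_pts, y_pts)
print(clf.coef_)                        # approx [[0.4, 0.8]]  ->  w = (2/5, 4/5)
print(clf.intercept_)                   # approx [-2.2]        ->  w0 = -11/5
print(2 / np.linalg.norm(clf.coef_))    # approx 2.236 = sqrt(5), the total margin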

Large Margin Intuition

In logistic regression, the output of the linear function is taken and squashed into the range [0, 1] using the sigmoid function. If the value is greater than a threshold, say 0.5, label 1 is assigned; otherwise label 0.

In the case of support vector machines, the output of the linear function is used directly: if it is greater than 1 we identify the point with one class, and if it is less than -1 we identify it with the other class. Since the threshold values are changed to 1 and -1 in SVM, we obtain the reinforcement range of values [-1, 1] which acts as the margin. 

Cost Function and Gradient Updates

In the SVM algorithm, we maximize the margin between the data points and the hyperplane. The loss function that helps maximize the margin is called the hinge loss.

The hinge loss for a point (x, y) with prediction f(x) can be written as:

c(x, y, f(x)) = max(0, 1 − y · f(x))

If the predicted value and the actual value have the same sign (and the point lies outside the margin), the cost is 0; if not, we calculate the loss value. We also add a regularization term to the cost function, whose objective is to balance margin maximization against the loss. After adding the regularization term, the loss function for SVM looks as below:

minimize over w:  λ ||w||² + Σi max(0, 1 − yi (xi · w))

Now that we have the loss function, we take partial derivatives with respect to the weights to find the gradients. Using the gradients, we can update our weights.

When there is no misclassification, i.e. our model correctly predicts the class of a data point and yi (xi · w) ≥ 1, we only need the gradient of the regularization term:

w := w − α · (2λw)

When there is a misclassification, i.e. our model makes a mistake on the class of a data point (or the point falls inside the margin), we include the hinge-loss gradient along with the regularization term in the update:

w := w − α · (2λw − yi xi)
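These update rules translate almost directly into code. Below is a minimal NumPy sketch of such a sub-gradient descent trainer (the function name, default learning rate, and the added bias term are my own choices for illustration, not part of the derivation above):

import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.001, epochs=1000):
    # y must hold labels -1 / +1; lam is the regularization strength
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) >= 1:
                # correct side of the margin: only the regularization gradient
                w -= lr * (2 * lam * w)
            else:
                # misclassified or inside the margin: include the hinge-loss gradient
                w -= lr * (2 * lam * w - yi * xi)
                b -= lr * (-yi)
    return w, b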

Let us now start with the code and import the necessary libraries:

import pandas as pd 
import numpy as np 
from sklearn.model_selection import train_test_split 
from sklearn.model_selection import cross_val_score, GridSearchCV 
from sklearn import metrics 
from sklearn.preprocessing import MinMaxScaler 
pd.set_option('display.max_columns', None)

Read the Wisconsin Breast Cancer dataset from the current directory into an object 'data' using the pandas read_csv function:

data = pd.read_csv('wisconsin.csv')

After reading the data, we prepare it as per requirement. Feature scaling is a method used to standardize the range of independent variables or features of data. Min-max scaling (or min-max normalization) shrinks the range of each feature so that it lies between 0 and 1 (or -1 and 1 if there are negative values).
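The code below uses objects named predictor and target which the original walkthrough never constructs. A plausible preparation step is sketched here, assuming the standard Wisconsin CSV column names 'id' and 'diagnosis'; adjust to match your own file:

# hypothetical preparation step -- column names assumed, not shown in the original
target = data['diagnosis']                            # labels: 'M' (malignant) / 'B' (benign)
predictor = data.drop(['id', 'diagnosis'], axis=1)    # the numeric feature columns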

sclr = MinMaxScaler()
predictor_sc = sclr.fit_transform(predictor)
predictor_sc.shape

Split the scaled data into train-test split:

x_train_sc,x_test_sc, y_train, y_test = train_test_split(predictor_sc, target, test_size = 0.30, random_state=101)
print("Scaled train and test split")
print("x_train ",x_train_sc.shape)
print("x_test ",x_test_sc.shape)
print("y_train ",y_train.shape)
print("y_test ",y_test.shape)
Scaled train and test split
x_train  (398, 30)
x_test  (171, 30)
y_train  (398,)
y_test  (171,)

But what happens when there is no clear hyperplane? 

Support Vector Machines can help you find a separating hyperplane, but only if one exists. There are cases where it is not possible to define such a hyperplane, which can happen due to noise in the data; another possible reason is a non-linear class boundary. The first graph below depicts noise and the second one shows a non-linear boundary.

Hyperplane Graph in Machine Learning

Hyperplane Graph in Machine Learning

For problems which arise due to noise in the data, the best approach is to relax the margin itself by introducing slack variables (a soft margin).

Hyperplane Data in Machine Learning

The non-linear boundary problem can be solved if we introduce a kernel. Some of the kernel functions that can be introduced are mentioned below:

non-linear boundary problem in Machine Learning

A radial basis function is a real-valued function whose value is dependent on the distance between the input and some fixed point. In machine learning, the radial basis function kernel, or RBF kernel, is a popular kernel function used in various kernelized learning algorithms.

The RBF kernel on two samples x and x', represented as feature vectors in some input space, is defined as:

K(x, x') = exp(-||x - x'||² / (2σ²)) = exp(-gamma · ||x - x'||²), where gamma = 1 / (2σ²)
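To make the role of gamma concrete, here is a tiny NumPy illustration (rbf_kernel here is a local helper written for this example, not the scikit-learn function of the same name):

import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # K(x, z) = exp(-gamma * ||x - z||^2): larger gamma -> similarity decays faster
    return np.exp(-gamma * np.sum((x - z) ** 2))

x = np.array([1.0, 2.0])
z = np.array([2.0, 3.0])
print(rbf_kernel(x, z, gamma=0.1))   # about 0.82: the two points still look similar
print(rbf_kernel(x, z, gamma=10))    # about 2e-9: the same pair looks completely dissimilar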

Applying SVM with default hyperparameters

Let us get back to the example and apply SVM after data pre-processing, with default hyperparameters. 

Linear Kernel

from sklearn import svm 
svm2 = svm.SVC(kernel='linear') 
svm2

SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, 
decision_function_shape='ovr', degree=3, gamma='auto', kernel='linear', 
max_iter=-1, probability=False, random_state=None, shrinking=True, 
tol=0.001, verbose=False)

model2 = svm2.fit(x_train_sc, y_train) 
y_pred2 = svm2.predict(x_test_sc) 
print('Accuracy Score:') 
print(metrics.accuracy_score(y_test,y_pred2))
Accuracy Score:0.9707602339181286

Gaussian Kernel

svm3 = svm.SVC(kernel='rbf') 
svm3 
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf', 
max_iter=-1, probability=False, random_state=None, shrinking=True, 
tol=0.001, verbose=False) 
model3 = svm3.fit(x_train_sc, y_train) 
y_pred3 = svm3.predict(x_test_sc) 
print('Accuracy Score:') 
print(metrics.accuracy_score(y_test, y_pred3))
Accuracy Score:0.935672514619883

Polynomial Kernel

svm4 = svm.SVC(kernel='poly') 
svm4
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, 
decision_function_shape='ovr', degree=3, gamma='auto', kernel='poly', 
max_iter=-1, probability=False, random_state=None, shrinking=True, 
tol=0.001, verbose=False)
model4 = svm4.fit(x_train_sc, y_train) 
y_pred4 = svm4.predict(x_test_sc) 
print('Accuracy Score:') 
print(metrics.accuracy_score(y_test,y_pred4)) 
Accuracy Score:0.6198830409356725

How to tune Parameters of SVM? 

Kernel: The kernel in a support vector machine is responsible for transforming the input data into the required form. Some of the kernels used in support vector machines are linear, polynomial and radial basis function (RBF). To create a non-linear decision boundary, we use the RBF or polynomial kernel, and for complex applications you may need more advanced kernels to separate classes that are non-linear in nature. With this transformation, you can obtain more accurate classifiers. 

Regularization: Regularization is controlled through scikit-learn's C parameter. C is a penalty parameter for errors or misclassifications: it tells the optimizer how much misclassification is bearable, and so governs the trade-off between the misclassified terms and the width of the decision boundary. With a smaller C value you obtain a hyperplane with a larger margin that tolerates more misclassifications, and with a larger C value you obtain a hyperplane with a smaller margin that tries to classify every training point correctly. 

Gamma: A lower value of gamma gives a looser, smoother fit of the training dataset, because each training point has a far-reaching influence, so even distant points are taken into account when computing the separating boundary. A higher value of gamma makes the fit follow the training data more closely, since only the points near the boundary influence its shape; set too high, it can lead to overfitting. 

Do we need to tune parameters always? 

You do not need to tune the parameters in every case; scikit-learn's defaults are often a reasonable starting point, and built-in tools in the scikit-learn toolkit can automate the search when tuning is needed. 
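For example, GridSearchCV (already imported above) can search over C and gamma automatically; the parameter grid below is only illustrative, and predictor_sc and target are the scaled features and labels prepared earlier:

param_grid = {'C': [0.1, 1, 10, 100], 'gamma': [0.001, 0.01, 0.1, 1], 'kernel': ['rbf']}
grid = GridSearchCV(svm.SVC(), param_grid, cv=10, scoring='accuracy')
grid.fit(predictor_sc, target)
print(grid.best_params_)
print(grid.best_score_)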

Tuning Hyperparameters

The 'C' and 'gamma' hyperparameters

C is the parameter of the soft margin cost function, which controls the influence of each individual support vector; this process involves trading error penalty for stability. A small C tends to emphasize the margin while ignoring outliers in the training data (soft margin), while a large C may tend to overfit the training data (hard margin). Thus a very large value of C can cause overfitting of the model and a very small value of C can cause underfitting; the value of C must be chosen in such a manner that the model generalises well to unseen data. 

The gamma parameter is inversely related to the standard deviation of the RBF kernel (a Gaussian function), which is used as a similarity measure between two points. A small gamma value defines a Gaussian with a large variance, so two points can be considered similar even if they are far from each other. A large gamma value defines a Gaussian with a small variance, and in this case two points are considered similar only if they are close to each other. 

Taking kernel as linear and tuning C hyperparameter

C_range = list(range(1, 26))
acc_score = []
for c in C_range:
    svc = svm.SVC(kernel='linear', C=c)
    scores = cross_val_score(svc, predictor_sc, target, cv=10, scoring='accuracy')
    acc_score.append(scores.mean())
print(acc_score)

[0.9772210699161695, 0.9772210699161695, 0.9806995938121164,
0.9824539797770286, 0.9789754558810818, 0.9789452078472042,
0.9806995938121164, 0.9789452078472041, 0.9789452078472041,
0.9789452078472041, 0.9806995938121164, 0.9789452078472041,
0.9789452078472041, 0.9772210699161695, 0.9772210699161695,
0.9772210699161695, 0.9772210699161695, 0.9754666839512574,
0.9754666839512574, 0.9754666839512574, 0.9754666839512574,
0.9754666839512574, 0.9754666839512574, 0.9754666839512574,
0.9754666839512574]

Let us visualize the above points:

import matplotlib.pyplot as plt
%matplotlib inline
C_Val_list = list(range(1,26))

plt.plot(C_Val_list,acc_score)
plt.xticks(np.arange(0,27,2))
plt.xlabel('Value of C for SVC')
plt.ylabel('Cross-Validated Accuracy')

From the plot we can see that the accuracy is close to 98% for C between 4 and 5, and then it drops.

# Taking a closer look at the cross-validation accuracy in the range C = (4, 5)
C_range = list(np.arange(4, 5, 0.2))
acc_score = []
for c in C_range:
    svc = svm.SVC(kernel='linear', C=c)
    scores = cross_val_score(svc, predictor_sc, target, cv=10, scoring='accuracy')
    acc_score.append(scores.mean())
print(acc_score)

[0.9824539797770286, 0.9806995938121164, 0.9789754558810818,
0.9789754558810818, 0.9789754558810818] 

Accuracy score is highest for C=4
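Having settled on C=4 from cross-validation, a natural follow-up (not part of the original walkthrough) is to refit on the training split and check accuracy on the held-out test set; the exact number will depend on your split:

svm_best = svm.SVC(kernel='linear', C=4)
svm_best.fit(x_train_sc, y_train)
y_pred_best = svm_best.predict(x_test_sc)
print(metrics.accuracy_score(y_test, y_pred_best))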

Taking kernel as gaussian and tuning gamma hyperparameter

gamma_range = [0.0001, 0.001, 0.01, 0.1, 1, 10, 100]
acc_score = []
for g in gamma_range:
    svc = svm.SVC(kernel='rbf', gamma=g)
    scores = cross_val_score(svc, predictor_sc, target, cv=10, scoring='accuracy')
    acc_score.append(scores.mean())
print(acc_score)

[0.6274274047186933, 0.6274274047186933, 0.9195035001296346,
0.9561651974764496, 0.9806995938121164, 0.9420026359000951,
0.6274274047186933]

Let us visualize the above points: 
gamma_range=[0.0001,0.001,0.01,0.1,1,10,100]
# plotting the value of gamma for SVM versus the cross-validated accuracy 
plt.plot(gamma_range,acc_score) 
plt.xlabel('Value of gamma for SVC ') 
plt.xticks(np.arange(0.0001,100,5)) 
plt.ylabel('Cross-Validated Accuracy')
Text(0,0.5,'Cross-Validated Accuracy')

For large values of gamma (towards 100) the kernel performs very poorly.
Let us take a closer look at the cross-validated accuracy for gamma values between 0 and 5.

gamma_range = list(np.arange(0.1, 5, 0.1))
acc_score = []
for g in gamma_range:
    svc = svm.SVC(kernel='rbf', gamma=g)
    scores = cross_val_score(svc, predictor_sc, target, cv=10, scoring='accuracy')
    acc_score.append(scores.mean())
print(acc_score)


[0.9561651974764496, 0.9718952553798289, 0.9754051075965776, 
0.9737122979863452, 0.9806995938121164, 0.9806995938121164, 
0.9806995938121164, 0.9806995938121164, 0.9806995938121164, 
0.9806995938121164, 0.9789754558810818, 0.9754969319851352, 
0.9754969319851352, 0.9754969319851352, 0.9754969319851352, 
0.9737727940541007, 0.9737727940541007, 0.9737727940541007, 
0.9737727940541007, 0.9720184080891883, 0.9720184080891883, 
0.9720184080891883, 0.9720184080891883, 0.9720184080891883, 
0.9720184080891883, 0.9702326938034741, 0.9702326938034741, 
0.9702326938034741, 0.9702326938034741, 0.9702326938034741, 
0.9702326938034741, 0.9702326938034741, 0.9702326938034741, 
0.9666925935528475, 0.9666925935528475, 0.9684167314838821, 
0.9684167314838821, 0.9684167314838821, 0.9701711174487941, 
0.9701711174487941, 0.96838540316308, 0.9649068792671333, 
0.9649068792671333, 0.9649068792671333, 0.9649068792671333, 
0.9649068792671333, 0.9649068792671333, 0.963152493302221, 
0.963152493302221]

gamma_range=list(np.arange(0.1,5,0.1))

plt.plot(gamma_range,acc_score)
plt.xlabel('Value of gamma for SVC ')
#plt.xticks(np.arange(0.0001,5,5))
plt.ylabel('Cross-Validated Accuracy')

Text(0,0.5,'Cross-Validated Accuracy')

The highest cross-validated accuracy for the rbf kernel remains roughly constant between gamma=0.5 and gamma=1.

Taking polynomial kernel and tuning degree hyperparameter

degree = [2, 3, 4, 5, 6]
acc_score = []
for d in degree:
    svc = svm.SVC(kernel='poly', degree=d)
    scores = cross_val_score(svc, predictor_sc, target, cv=10, scoring='accuracy')
    acc_score.append(scores.mean())
print(acc_score)
 
[0.8350974418805635, 0.6450652493302222, 0.6274274047186933,
0.6274274047186933, 0.6274274047186933]

plt.plot(degree,acc_score) 
plt.xlabel('degrees for SVC ') 
plt.ylabel('Cross-Validated Accuracy')

Text(0,0.5,'Cross-Validated Accuracy')

The score is highest for the second-degree polynomial, and the accuracy score drops as the degree of the polynomial increases: increasing the polynomial degree results in a more complex model. 

Advantages and Disadvantages of Support Vector Machine

Advantages of SVM

  • SVM classifiers offer good accuracy and perform faster prediction compared to the Naïve Bayes algorithm. 
  • SVM guarantees optimality: owing to the nature of convex optimization, the solution will always be a global minimum, not a local minimum. 
  • SVM can be accessed conveniently, be it from Python or MATLAB. 
  • SVM can be used for both linearly separable and non-linearly separable data: linearly separable data can be handled with a hard margin, while non-linearly separable data calls for a soft margin. 
  • SVM also extends to semi-supervised learning, so it can be applied to both labelled and unlabelled data. It only requires an additional condition on the minimization problem, in a formulation known as the Transductive SVM. 
  • Feature mapping used to be a heavy computational burden on the overall training performance of the model; with the kernel trick, SVM can carry out the feature mapping implicitly using a simple dot product. 
  • SVM works well when there is a clear margin of separation and in high-dimensional spaces.  

Disadvantages of SVM 

  • SVM does not handle text structures well: treating text as a bag of features results in the loss of sequential information and leads to poorer performance. 
  • SVM is not suitable for large datasets because of its high training time; it also takes more time to train than Naïve Bayes. 
  • SVM works poorly with overlapping classes and is also sensitive to the type of kernel used. 
  • In cases where the number of features for each data point exceeds the number of training samples, the SVM can underperform. 

Applications of SVM in Real World

Support vector machines are supervised learning algorithms, and the main goal of using an SVM is to classify unseen data correctly. SVMs can be used to solve various real-world problems: 

  • Face detection – SVM can classify parts of an image as face or non-face and create a square boundary around the face. 
  • Text and hypertext categorization – SVM allows text and hypertext categorization for both inductive and transductive models. It uses training data to classify documents into different categories, scoring each document and comparing the score with a threshold value. 
  • Classification of images – SVMs enhance search accuracy for image classification, providing better accuracy than traditional query-based searching techniques. 
  • Bioinformatics – this includes protein classification and cancer classification. SVM is used to classify genes, to classify patients on the basis of their genes, and in other biological problems. 
  • Protein fold and remote homology detection – SVM algorithms are applied for protein remote homology detection. 
  • Handwriting recognition – SVMs are widely used to recognize handwritten characters. 
  • Generalized predictive control (GPC) – SVM-based GPC can be used to control chaotic dynamics with useful parameters. 

Summary 

In this article, we looked at the Support Vector Machine algorithm in detail. We discussed the concept behind support vector machines, how they work, and the process of implementing them in Python. We also looked into how to tune their hyperparameters to build efficient models. Lastly, we covered the advantages and disadvantages of SVM along with various real-world applications of support vector machines.

We have covered most of the topics related to algorithms in our series of machine learning blogs; click here to read more. If you are inspired by the opportunities provided by machine learning, enrol in our Data Science and Machine Learning courses for more lucrative career options in this landscape.

Priyankur Sarkar

Data Science Enthusiast

Priyankur Sarkar loves to play with data, get insightful results out of it, and turn those insights into business growth. He is an electronics engineer with versatile experience as an individual contributor and leading teams, and has actively worked towards building Machine Learning capabilities for organizations.

Join the Discussion

Your email address will not be published. Required fields are marked *

Suggested Blogs

How to Become a Dependable Data Scientist

The job profile of the data scientist looks set to retain its title as the 21st century’s hottest job through 2020 and beyond. A recent study by IBM found that the demand for data scientist will soar by a whopping 28% in 2020. For aspiring data professionals, this is good news as it means an abundance of data science opportunities. Check out this infographic to find out what you can do to capitalize on the growing opportunity in data science.
Rated 4.5/5 based on 12 customer reviews
How to Become a Dependable Data Scientist

The job profile of the data scientist looks set to... Read More

Essential Skills to Become a Data Scientist

The demand for Data Science professionals is now at an all-time high. There are companies in virtually every industry looking to extract the most value from the heaps of information generated on a daily basis.With the trend for Data Science catching up like never before, organizations are making complete use of their internal data assets to further examine the integration of hundreds of third-party data sources. What is crucial here is the role of the data scientists.Not very long back, the teams playing the key role of working on the data always found their places in the back rooms of multifold IT organizations. The teams though sitting on the backseat would help in steering the various corporate systems with the required data that acted as the fuel to keep the activities running. The critical database tasks performed by the teams responsible allowed corporate executives to report on operations activities and deliver financial results.When you take up a career in Data Science, your previous experience or skills do not matter. As a matter of fact, you would need a whole new range of skills to pursue a career in Data Science. Below are the skills required to become a top dog in Data Science.What should Data Scientists knowData scientists are expected to have knowledge and expertise in the following domains:The areas arch over dozens of languages, frameworks, and technologies that data scientists need to learn. Data scientists should always have the curiosity to amass more knowledge in their domain so that they stay relevant in this dynamic field.The world of Data Science demands certain important attributes and skills, according to IT leaders, industry analysts, data scientists, and others.How to become a Data Scientist?A majority of Data scientists already have a Master’s degree. If Master’s degree does not quench their thirst for more degrees, some even go on to acquire PhD degrees. Mind you, there are exceptions too. It isn’t mandatory that you should be an expert in a particular subject to become a Data Scientist. You could become one even with a qualification in Computer Science, Physical Sciences, Natural Sciences, Statistics or even Social Sciences. However, a degree in Mathematics and Statistics is always an added benefit for enhanced understanding of the concepts.Qualifying with a degree is not the end of the requirements. Brush up your skills by taking online lessons in a special skill set of your choice — get certified on how to use Hadoop, Big Data or R. You can also choose to enroll yourself for a Postgraduate degree in the field of Data Science, Mathematics or any other related field.Remember, learning does not end with earning a degree or certification. You need to practice what you learned — blog and share your knowledge, build an app and explore other avenues and applications of data.The Data Scientists of the modern world have a major role to play in businesses across the globe. They have the ability to extract useful insights from vast amounts of raw data using sophisticated techniques. The business acumen of the Data Scientists help a big deal in predicting what lies ahead for enterprises. 
The models that Data Scientists create also bring out measures to mitigate potential threats, if any.

Take up organizational challenges with the ABCDE skill set
As a Data Scientist, you may have to face challenges while working on projects and finding solutions to problems.

A = Analytics
As a Data Scientist, you are expected not just to study the data and identify the right tools and techniques; you need to have answers ready for all the questions that come up while you are strategizing on a solution, with or without a business model.

B = Business Acumen
Organizations vouch for candidates with strong business acumen. As a Data Scientist, you are expected to showcase your skills in a way that puts the organization one step ahead of the competition. Undertaking a project and working on it is not the end of the road: you need to understand, and be able to make others understand, how your models influence business outcomes and how those outcomes will benefit the organization.

C = Coding
A Data Scientist is expected to be adept at coding too. You may encounter technical issues where you need to sit down and work on code, and knowing how to code makes you more versatile and lets you confidently assist your team.

D = Domain
Nobody expects Data Scientists to be perfect in every domain. However, it is always assumed that a Data Scientist has working knowledge of various industrial operations, and reading is a plus point: you can gain knowledge in various domains from resources available online.

E = Explain
To be a successful Data Scientist, you should be able to explain the problem you are faced with, figure out a solution, and share it with the relevant stakeholders. You need to make a difference in the way you explain, without leaving any communication gaps.

The Important Skills for a Data Scientist
Let us now go through the important skills needed to become an expert Data Scientist:
1. Critical thinking
2. Coding
3. Math
4. Machine Learning, Deep Learning, AI
5. Communication
6. Data architecture
7. Risk analysis, process improvement, systems engineering
8. Problem-solving and strong business acumen

1. Critical thinking
Data scientists need to keep their brains racing with critical thinking. They should be able to apply objective analysis of facts when faced with a complex problem and, upon reaching a logical analysis, formulate opinions or render judgments.

Data scientists are counted upon for their understanding of complex business problems and the risks involved in decision-making. Before they plunge into analysis and decision-making, data scientists need to come up with a 'model' or 'abstract' of what is critical to solving the problem, and they should be able to determine which factors are extraneous and can be ignored.

According to Jeffry Nimeroff, CIO at Zeta Global, which provides a cloud-based marketing platform, a data scientist needs to have experience but also the ability to suspend belief. Before arriving at a solution, it is very important for a Data Scientist to be very clear on what is expected and whether the expected solution can be arrived at. It is only with experience that your intuition grows stronger; experience brings its benefits. If you are a novice and a problem is posed in front of you, all the person posing it might get back is a wide-eyed expression.
Instead, if you have hands-on experience of working with complex problems, you will step back, draw on that experience, look at the problem from multiple points of view and assess what has been put forth.

In simple terms, critical thinking involves the following steps:
a. Describe the problem posed in front of you.
b. Analyse the arguments involved – the IFs and BUTs.
c. Evaluate the significance of the decisions being made and the successes or failures thereafter.

2. Coding
Handling a complex task might at times call for the execution of a chain of programming tasks, so as a data scientist you should know how to go about writing code. It does not stop at just writing code; the code should be executable and should be crucial in helping you find a solution to a complex business problem.

At present, Data Scientists lean towards Python as the language of choice, and there is a substantial crowd following R as well. Scala, Clojure, Java and Octave are a few other languages that find prominence too. To be a successful Data Scientist with programming skills, consider the following aspects:
a) You need to deal with humongous volumes of data.
b) Working with real-time data should be like a cakewalk for you.
c) You need to work comfortably with cloud computing and with statistical models like the ones below:
- Regression
- Optimization
- Clustering
- Decision trees
- Random forests

Data scientists are expected to be able to code in a bundle of languages – Python, C++ or Java. Gaining the knack to code helps Data Scientists; however, it is not an absolute requirement, since a Data Scientist can always be surrounded by people who code. A minimal example of fitting two of the models listed above is sketched below.
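As a quick illustration of the statistical models listed above, here is a minimal sketch, assuming scikit-learn and a small synthetic dataset (the data is generated purely for illustration). It fits a linear regression and a random forest and compares them on a simple hold-out split.

```python
# A minimal sketch: fitting two of the statistical models mentioned above
# (regression and a random forest) on a small synthetic dataset.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for real business data.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

for model in (LinearRegression(), RandomForestRegressor(n_estimators=100, random_state=42)):
    model.fit(X_train, y_train)                      # train on the training split
    score = r2_score(y_test, model.predict(X_test))  # evaluate on held-out data
    print(f"{model.__class__.__name__}: R^2 = {score:.3f}")
```

The point is not the specific models but the habit: train on one portion of the data, judge the model on data it has never seen, and only then compare approaches.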
3. Math
If you have never liked Mathematics, or are not proficient in it, Data Science is probably not the right career choice for you. Whether you own an organization or represent one, while engaging with clients you will have to look into many disparate issues, and to deal with them you will need to develop complex financial or operational models. To build a worthy model you will end up pulling from large volumes of data, and this is where Mathematics helps. If you have expertise in Mathematics, building statistical models is easier, and statistical models in turn help in developing or switching over to key business strategies. With skills in both Mathematics and Statistics, you can get moving in the world of Data Science.

4. Machine Learning, Deep Learning, AI
Data Science overlaps with the fields of Machine Learning, Deep Learning and AI. We work with computers more than ever, connectivity has improved, a large amount of data is being collected, and the industries that use this data are moving extremely fast. AI and deep learning may not always show up in the requirements of job postings; yet, if you have these skills, you end up with the bigger share of opportunities.

A data scientist needs to stay alert to changes in the field while researching the best methodology for a problem. Coming up with a model might not be the end: a Data Scientist must be clear about when to apply which practice to solve a problem without making it more complex. Data scientists need to understand the depth of a problem before finding solutions; everything needed to bring out the best solution is already there in the data that has been fetched.

A data scientist should be aware of the computational costs involved in building an environment and of the following system boundary conditions:
a. Interpretability
b. Latency
c. Bandwidth

Studying the customer can be a major plus point for both a data scientist and an organization, as it helps in understanding which technology to apply. No matter how far automated tools and open source software advance, statistical skills remain a much-needed add-on for a data scientist. Understanding statistics is not an easy job; a data scientist needs to be competent enough to comprehend the assumptions made by the various tools and software.

Experts have put forth a few important requisites for data scientists to make the best use of their models. Data scientists need to be handy with proper data interpretation techniques and ought to understand:
a. the various functional interfaces to the machine learning algorithms
b. the statistics within the methods

It also pays to add computer science skills to your profile: you should be proficient at the keyboard and have a sound knowledge of the fundamentals of software engineering.

5. Communication
Communication and technology form a cycle of operations in which people, applications, systems, and data are integrated, and Data Science is no different. As a Data Scientist, you should be able to communicate with various stakeholders, and data is a key attribute in that wheel of communication.

Communication in Data Science ropes in the 'storytelling' ability. It helps you translate the solution you have arrived at into the action or intervention you have put in the pipeline. As a Data Scientist, you should be adept at weaving the data you have extracted into a narrative and communicating it clearly to your stakeholders.

What does a data scientist communicate to the stakeholders?
- The benefits of the data
- The technology and the computational costs involved in extracting and making use of the data
- The challenges posed in the form of data quality, privacy, and confidentiality

A Data Scientist also needs to keep an eye on the wider horizon for better prospects; the organization can be shown a map highlighting other areas of interest that may prove beneficial. If one of the feathers in your cap is that of a good communicator, you should be able to turn complex technical information into a simple and compact form before presenting it to the various stakeholders, highlighting the challenges, the details of the data, the criteria for success and the anticipated results.

If you want to excel in the field of Data Science, you must have an inquisitive bent of mind. The more questions you ask and the more information you gather, the easier it is to come up with sound business models.

6. Data architecture
Let us draw an analogy with the construction of a building and the role of an architect.
Architects know how the different blocks of a building go together and how the pillars of each block form a strong support system. Just as architects manage and coordinate the entire construction process, Data Scientists do the same while building business models. A Data Scientist needs to understand everything that happens to the data, from its inception to when it becomes a model and further until a decision is made based on that model.

Not understanding the data architecture can have a tremendous impact on the assumptions made along the way and the decisions arrived at. If a Data Scientist is not familiar with the data architecture, the organization may end up taking wrong decisions that lead to unexpected and unfavourable results, and even a slight change within the architecture can make the situation worse for all the stakeholders involved.

7. Risk analysis, process improvement, systems engineering
A Data Scientist with sharp business acumen should be able to analyse business risks, suggest improvements where needed and facilitate changes in various business processes. As a Data Scientist, you should also understand how systems engineering works. If you come equipped with sharp risk analysis, process improvement and systems engineering skills, you can set yourself up for a smooth sail in the vast sea of Data Science. And remember: you will no longer be a Data Scientist if you stop following scientific theories; after all, Data Science is itself a major breakthrough in the field of Science.

It is always recommended to analyse all the risks that may confront a business before embarking on model development, as this helps in mitigating risks the organization may encounter later. For a smooth business flow, a Data Scientist should also probe into the strategies of the various stakeholders and the problems encountered by customers. A Data Scientist should be able to get a picture of the prevailing risks, of the various systems that can have a significant impact on the data, and of whether a model can lead to positive fruition in the form of customer satisfaction.

8. Problem-solving and strong business acumen
Data scientists are not very different from anyone else when it comes to problem-solving; the trait is inherent in every human being. What makes a data scientist stand apart is very good problem-solving skill. We come across complex problems even in everyday situations; what differs is the perspective we apply. Understanding and analysing a problem before pulling out all the tools in practice to actually solve it is what Data Scientists are good at. The approach a Data Scientist takes to solve a problem reaps more success than failure, because it brings critical thinking to the forefront. Finding a Data Scientist with such varied skill sets is a problem faced by most employers.

Technical Skills for a Data Scientist
When employers are hunting for the best, they look for specialization in languages, libraries, and expertise in tech tools; prior experience helps in boosting the profile. Some very important technical skills are:
- Python
- R
- SQL
- Hadoop/Apache Spark
- Java/SAS
- Tableau

Let us briefly see how these languages and tools are in demand.

Python
Python is one of the most in-demand languages and has gained immense popularity as an open-source language.
It is widely used by both beginners and experts, and Data Scientists need to have Python as one of the primary languages in their kit.

R
R is a programming language built for statisticians. Anyone with a mathematical bent of mind can learn it; however, if you do not appreciate the nuances of Mathematics, R can be difficult to get comfortable with. That does not mean you cannot learn it, but without that mathematical grounding you cannot harness its full power.

SQL
Structured Query Language, or SQL, is also highly in demand, as it is the language for interacting with relational databases. It may not seem glamorous, but with a good grasp of SQL you can strengthen your standing in the job market.

Hadoop & Spark
Both Hadoop and Spark are open source tools from Apache for big data. Apache Hadoop is an open source software platform that helps when you have large data sets on clusters of commodity hardware that are difficult to store and process. Apache Spark is a fast cluster computing and data processing engine designed for quick computation; it comes with a bundle of development APIs and supports data workers with efficient execution of streaming, machine learning or SQL workloads.

Java & SAS
Java and SAS also join the league, as languages in demand with large players, and employers offer attractive packages to candidates with expertise in them.

Tableau
Tableau joins the list as an analytics platform and visualization tool. The tool is powerful and user-friendly, and the public version is available for free; if you wish to keep your data private, you have to consider the costs involved.

Easy tips for a Data Scientist
In brief, the in-demand skill set for a Data Scientist looks like this:
a. A Data Scientist should have the acumen to handle data processing and to set up models that will help various business processes.
b. A Data Scientist should understand the depth of a business problem and the structure of the data that will be used in solving it.
c. A Data Scientist should always be ready to explain how the created business models work; even the minute details count.

A majority of candidates out there are good at Maths, Statistics, Engineering or related subjects. However, when interviewed they may not show the required traits, and once recruited they may fail to shine. Sometimes the recruitment process gets so tedious that employers end up searching with lanterns even in broad daylight.

What do employers seek the most from Data Scientists?
a. A strong sense of analysis.
b. Machine learning is at the core of what is sought from Data Scientists.
c. A Data Scientist should infer from and refer to data that has been in practice and will be in practice.
d. Data Scientists are expected to be adept at Machine Learning and to create models that predict performance on the basis of demand.
e. A combined skill set of Statistics, Computer Science and Mathematics is a big plus.

These are the kinds of requirements that top employers post on job-listing websites.
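To give a flavour of how several of these technical skills come together, here is a minimal sketch in Python. The "customers" table, its columns and the data are invented for illustration only: the sketch loads a small synthetic table into SQLite, pulls it back with a SQL query via pandas, fits a scikit-learn classifier, and plots the result with matplotlib.

```python
# A minimal sketch combining Python, SQL, pandas, scikit-learn and matplotlib.
# The "customers" table and its columns are hypothetical.
import sqlite3

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Create a tiny in-memory database and load a synthetic table into it.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "monthly_spend": rng.gamma(2.0, 50.0, size=300),
    "tenure_months": rng.integers(1, 60, size=300),
})
df["churned"] = (df["monthly_spend"] < 60) & (df["tenure_months"] < 12)

conn = sqlite3.connect(":memory:")
df.to_sql("customers", conn, index=False)

# SQL: pull only the columns we need back out of the relational store.
data = pd.read_sql_query(
    "SELECT monthly_spend, tenure_months, churned FROM customers", conn
)

# scikit-learn: fit a simple classifier on the queried data.
model = LogisticRegression()
model.fit(data[["monthly_spend", "tenure_months"]], data["churned"].astype(int))

# matplotlib: visualize the two features, coloured by predicted class.
pred = model.predict(data[["monthly_spend", "tenure_months"]])
plt.scatter(data["monthly_spend"], data["tenure_months"], c=pred, cmap="coolwarm", s=15)
plt.xlabel("monthly_spend")
plt.ylabel("tenure_months")
plt.title("Hypothetical churn prediction")
plt.show()
```

This is only an illustration of the workflow, not a recipe for a real churn model; in practice the data would come from an existing warehouse or production database rather than a table created on the fly.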
Recommendations for a Data Scientist
What are some general recommendations for Data Scientists in the present scenario? Let us walk you through a few:
- Demonstrate your skills in data analysis and aim to become well versed in Machine Learning.
- Focus on your communication skills. You will have a tough time in your career if you cannot show what you have and cannot communicate what you know. Experts recommend reading Made to Stick for the far-reaching impact of the ideas you generate.
- Gain proficiency in deep learning. You should be familiar with the usage, interest, and popularity of the major deep learning frameworks.
- If you are wearing the hat of a Python expert, you must also know the common Python data science libraries: numpy, pandas, matplotlib, and scikit-learn.

Conclusion
Data Science is all about contributing more data to the technologically advanced world. Make your online presence a worthy one, and learn while you earn. Start by browsing through online portals; if you are a professional, make your mark on LinkedIn, as securing a job through LinkedIn is now easier than scouring through job sites. Demonstrate the skills you are good at on the social portals you are associated with: if you write an article on LinkedIn, do not refrain from sharing the link on your Facebook account.

Most important of all, when faced with a complex situation, understand why and what led to the problem. A deeper understanding of a problem will help you come up with the best model, and the more you empathize with a situation, the greater your success count will be. In no time, you can become that extraordinary whiz in Data Science.

Wishing you immense success if you happen to choose, or have already chosen, Data Science as your career path. All the best for your career endeavour!

Top Data Science Trends in 2020

Industry experts are of the view that 2020 will be a huge year for data science and AI. The AI market is expected to reach $118.6 billion by 2025, with focus areas ranging from natural language processing to robotic process automation. Since the beginning of the digital era, data has been growing at breakneck speed, and that growth will only continue: new data will not only generate more innovative use cases but also spearhead a wave of innovation.

About 77% of the devices we use today have AI incorporated into them. Smart devices, Netflix recommendations, Amazon’s Alexa and Google Home have transformed the way we live in the digital age. The renowned AI-powered virtual nurses “Molly” and “Angel” have taken healthcare to new heights, and robots are already assisting with various surgical procedures.

Dynamic technologies like data science and AI have some intriguing trends to watch out for in 2020. Check out the top 6 data science trends in 2020 that any data science enthusiast should know.

1. Advent of Deep Learning
Simply put, deep learning is a machine learning technique that trains computers to learn by example, the way humans do. Deep learning models have repeatedly proven their efficacy, in some tasks exceeding human performance. They are usually trained on large sets of labelled data using multi-layered neural network architectures (a minimal sketch appears at the end of this article).

What’s new for Deep Learning in 2020?
In 2020, deep learning will be quite significant: its capacity to foresee and understand human behaviour, and the ways enterprises can use that knowledge to stay ahead of their competitors, will come in handy.

2. Spotlight on Augmented Analytics
Also hailed as the future of Business Intelligence, augmented analytics employs machine learning and artificial intelligence (ML/AI) techniques to automate data preparation, insight discovery and sharing, data science and ML model development, management and deployment. This can be greatly beneficial for companies looking to improve their offerings and customer experience. The global augmented analytics market is projected to reach $29,856 million by 2025, growing at a CAGR of 28.4% from 2018 to 2025.

What’s new for Augmented Analytics in 2020?
This year, augmented analytics platforms will help enterprises leverage the social component of analytics. Interactive dashboards and visualizations will help stakeholders share important insights and create a crystal-clear narrative that echoes the company’s mission.

3. Impact of IoT, ML and AI
2020 will see the rise of AI/ML, 5G, cybersecurity and IoT. The rise in automation will create opportunities for new skills to be explored, and upskilling in emerging technologies will keep professionals competitive in today’s dynamic tech space. As per a survey by IDC, over 75% of organizations will invest in reskilling programs for their workforce to bridge the rising skill gap by 2025.

What’s new for IoT, ML and AI in 2020?
It has been estimated that over 24 billion devices will be connected to the Internet of Things this year, which means industries can make a world of difference by developing smart devices that change the way we live.

4. Better Mobile Analytics Strategies
Mobile analytics deals with measuring and analysing the data created across mobile sites and applications. It helps businesses keep track of the behaviour of their users on mobile sites and apps.
This technology aids in boosting an enterprise’s cross-channel marketing initiatives while optimizing the mobile experience and growing user engagement and retention.

What’s new for Mobile Analytics in 2020?
With the ever-increasing number of mobile phone users globally, there will be a heightened focus on mobile app marketing and app analytics. Mobile advertising currently ranks first in digital advertising worldwide, which has made mobile analytics essential: businesses today can track in-app traffic, potential security threats, and levels of customer satisfaction.

5. Enhanced Levels of Customization
Access to real-time data on customer behaviour has made it possible to cater to each customer’s specific needs. As customer expectations soar, companies will have to buckle up to deliver a more personalized, relevant and superior customer experience, and the use of data and AI will make it possible.

What’s new for User Experience in 2020?
User experience can be interpreted using data derived from conversions, pageviews, and other user actions. These insights will help user experience professionals make better decisions while giving users exactly what they need.

6. Better Cybersecurity
2019 brought to light the grim reality of data privacy and security breaches. With over 24 billion devices estimated to be connected to the internet this year, enterprises will deploy stringent measures to protect data privacy and prevent breaches. Industry experts are of the view that combining cybersecurity with AI-enabled technology will deliver attack surface coverage that is 20x more effective than traditional methods.

What’s new for Cybersecurity in 2020?
Since data shows no signs of slowing its growth, the number of threats will keep looming on the horizon. In 2020, cybersecurity professionals will have to gear up with new and improved ways to secure data, and AI-supported cybersecurity measures will be deployed to prevent malicious attacks from malware.

The Way Ahead in 2020
Data Science will be one of the fastest-growing technologies in 2020, and its wide range of applications across industries will pave the way for more innovative trends like the ones above.
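As promised above, here is a minimal sketch of the kind of multi-layered, labelled-data training that the deep learning trend relies on. It uses scikit-learn’s MLPClassifier on synthetic data purely as a stand-in; a real deep learning project would typically use a dedicated framework such as TensorFlow or PyTorch and far more data.

```python
# A minimal sketch: a small multi-layer neural network trained on labelled data,
# standing in for the deep learning models discussed above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic labelled data (a real deep learning model would need far more).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Two hidden layers of 64 and 32 units form the "multi-layered" architecture.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print(f"Hold-out accuracy: {net.score(X_test, y_test):.3f}")
```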