
Easier Angular JS Routing With Angular UI Router

Takeaways from this article

- In this post, we will understand how and why Angular JS routing is better in comparison to traditional frameworks, and how this routing can be achieved seamlessly with the help of Angular UI Router.
- We will understand the usage and importance of states, and how they help in navigating easily through the application.
- We will see a sample with a state definition and its usage, and how it helps in resolving issues and making the page user-friendly.

Introduction

Angular JS is a popular application framework that is used for front-end development. It works great when creating single page applications. It is written in JavaScript and is maintained by the tech giant Google. It interprets the HTML attributes in a web page as directives and binds the respective input and output components of the model, which are represented as JavaScript variables. Angular UI routing helps navigate between views and pages seamlessly, without getting messy, and in a user-friendly manner.

Note: It is assumed that the reader is familiar with HTML, CSS, JavaScript and Angular JS.

The native Angular JS routing does not offer the following facilities:

- Nested views
- Named views
- Passing data between two views
- Navigation without remembering the exact route URL for each page
- Automatic updates when a URL changes: the URL string has to be changed manually throughout the code base, and all the links associated with that URL have to be updated to point to the new location

In order to minimize the above disadvantages, Angular UI Router is used.

Introduction to Angular JS Routing

Angular JS routing is a technique with a declarative approach to navigating through the pages of a website. In a web page, when a link that points to a different page is created, the URL has to be explicitly defined. This path has to be memorized, and if it changes, the URL has to be updated in every place where it was previously referenced. This is where Angular JS routing comes into play.

Angular UI Router is a module for Angular JS that handles routing in Angular JS applications. It facilitates both the creation and the usage of routes in an application. The router exposes a '$stateProvider' service whose 'state()' method helps in the creation of 'states', also known as 'routes'.

The 'state()' method takes the name of the state and the configuration of the state as its parameters. It looks like this:

$stateProvider.state(stateName, stateConfig)

The above call creates a state in the application. The parameter 'stateName' is a string value that is used to create a parent or a child state. To create a child state, the parent state is referenced with the help of the dot operator.

The 'stateConfig' parameter is an object that can take many properties, such as:

- template: a string of HTML content, or a function that returns an HTML string as its output.
- templateUrl: a string that is a URL path to a specific template file, or a function that returns such a URL path string.
- templateProvider: a function that returns the HTML content as a string output.

Note: Only one of these template properties can be used at a time for a given state or view.

The states in an application allow the user to navigate to, and the application to be restored to, a particular situation within the application's workflow.
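As an illustration of the dot operator mentioned above, here is a minimal, hedged sketch (the state names, URLs and templates are hypothetical and not part of the article's own sample):

$stateProvider
  .state('settings', {
    url: '/settings',
    // the parent template needs its own ui-view for the child to render into
    template: '<h2>Settings</h2><div ui-view></div>'
  })
  .state('settings.profile', {
    // child state: parent name + dot + child name
    url: '/profile',
    template: '<p>Profile settings</p>'
  });

When the application navigates to '/settings/profile', the child template is rendered inside the ui-view of the 'settings' template. This is exactly the kind of nested view that the native routing does not support.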
Note: The latest version of UI Router can be downloaded from its GitHub repository.

Below are examples of 'stateConfig' with different template properties.

With 'template' as a property:

$stateProvider.state('stateName', {
    url: 'url for the specific state',
    template: 'html content',
    controller: 'name of controller for the specific state'
});

With 'templateUrl' as a property:

$stateProvider.state('stateName', {
    url: 'url for the specific state',
    templateUrl: 'path to the template file',
    controller: 'name of controller for the specific state'
});

Introduction to Angular JS Router

- Angular UI Router is a module that helps in creating routes for applications that use Angular JS.
- Applications are enclosed in a shell, and this shell behaves like a placeholder for the dynamic content that is generated later.
- The UI Router is a routing framework that changes the views of the application based not just on the URL, but on the state of the application as well.
- It bridges the gaps left by traditional frameworks by allowing nested views, named views, passing data between two different views, and so on.
- Routes are an integral part of single page applications, which are preferably created using Angular JS. With the help of Angular UI Router, these routes can be created and used with ease.
- The 'state' facility makes it possible to encapsulate, in one place, the URL and name of a state, the data needed by its view, and the way the view is generated and located.
- Angular UI Router provides the '$stateProvider' service, which can be used to create a route or a state in an application. Its 'state()' method takes the name of the state and the configuration of the state as its parameters.
- In addition, Angular UI Router helps resolve dependencies before initializing the controller.
- It provides utility filters, declarative transitions, state change events, and so on.

What is the difference between State and URL routes?

State route: The views and routes of the Angular JS page are not bound to just the URL of the site. Specific parts of the site can be changed with the help of UI routing even if the URL is kept the same. All the states, routing and viewing capabilities of the page are handled in a single one-stop place: the '.config()' block.

URL route: A URL route is a little different. It is usually implemented using 'ngRoute', and it needs directives such as 'ng-include'. If the number of such pieces increases, the situation can get complicated and handling them becomes difficult.
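To make the comparison concrete, here is a hedged sketch of the same two routes configured first with ngRoute and then with UI-Router (the module names, templates and controllers are illustrative and not taken from the article's sample):

// URL route with ngRoute: everything is keyed on the URL
angular.module('urlDemo', ['ngRoute'])
  .config(function ($routeProvider) {
    $routeProvider
      .when('/home',  { templateUrl: 'home.html',  controller: 'homeCtrl' })
      .when('/login', { templateUrl: 'login.html', controller: 'loginCtrl' })
      .otherwise({ redirectTo: '/home' });
  });

// State route with UI-Router: the named state is the primary handle,
// and the URL is just one of its properties
angular.module('stateDemo', ['ui.router'])
  .config(function ($stateProvider, $urlRouterProvider) {
    $stateProvider
      .state('home',  { url: '/home',  templateUrl: 'home.html',  controller: 'homeCtrl' })
      .state('login', { url: '/login', templateUrl: 'login.html', controller: 'loginCtrl' });
    $urlRouterProvider.otherwise('/home');
  });

With ngRoute the view is rendered into an ng-view element and can only be addressed through its URL, whereas with UI-Router the view goes into a ui-view element and can be addressed by its state name, which is what makes nested and named views possible.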
Stepwise routing with detailed explanation

Node.js and npm have to be installed before working through the steps below. The http-server node module then has to be installed and run; it will host the demo application that we are going to create. It can be installed with the following command:

npm install http-server -g

A hierarchical directory structure needs to be created. A sample is shown below:

Demo_App
--app.js
--index.html
--nav.html

The 'nav.html' file needs to contain the following:

- The nav bar title
- The contents of the file
- The two different actions that will be created (the home page and the login page)

The index.html file will pull in the required dependencies along with the nav.html file. The nav.html file is included in the body of the index.html page, and the dependencies are included from a CDN in the head tag. A separate div inside the body is used as the ui-view element.

The ui-view is the section where the content of the various routes is rendered. ui-view is a directive that lives inside the ui.router module; it tells '$state' where the template (or other template property) of the active state should be placed. The CDN references would look like the sample below:

<script src="angular.min.js"></script>
<script src="https://unpkg.com/@uirouter/angularjs@1.0.7/release/angular-ui-router.min.js"></script>

An app.js file can now be created. This is the application file that contains the route information and the actions that need to be performed, with the help of the controllers, inside the application. Below is a sample app.js file:

var app = angular.module('Demo_App', ["ui.router"]);

app.config(function($stateProvider, $locationProvider, $urlRouterProvider) {
    $stateProvider
        .state('home_page', {
            url: '/home',
            template: "<h1>Home page</h1>",
            controller: "hmeCtrl"
        })
        .state('login_page', {
            url: '/login',
            template: "<h1>Login page</h1>",
            controller: "lgnCtrl"
        });
    $urlRouterProvider.otherwise("/home");
});

app.controller('mainCtrl', function() {});
app.controller('hmeCtrl', function() {});
app.controller('lgnCtrl', function() {});

- The first line of the sample declares the application module and injects the ui.router module as a dependency.
- Next, the route configuration is defined inside app.config and the dependency injections are set up.
- The routes for the required pages are set up with the help of the '$stateProvider' service, mentioning the required parameters.
- Once the home page and the login page have been set up, a condition is added so that the page is redirected back to the home page if none of the routes match the URL. This is done with the help of the 'otherwise' function.
- Empty controllers are created, since this is a sample and no operation is performed inside them.

The above sample application can be run in the browser after the http-server node module has been installed (as mentioned in the first step of this explanation). The next step is to navigate to the project folder ('Demo_App' in this sample) and execute the command below:

http-server

This command hosts the demonstration application on the default port, which is 8080. It can be accessed with the link below:

http://localhost:8080/

When the application is accessed via a browser, it will be a web application with two routes, namely:

- Home
- Login

Conclusion

In this post, we understood the prominence of Angular JS routing and the UI Router, and how they differ from the traditional approaches. They help bridge the gaps in the features that traditional approaches implement, thereby enabling web pages to work and perform in a better, more user-friendly manner.

Note: Angular (2 and above) is the version upgrade of AngularJS; in this article, Angular and Angular JS refer to AngularJS.

Author: Amit Diwan

Amit Diwan is an e-learning entrepreneur who has taught more than a million professionals with text and video courses on the following technologies: Data Science, AI, ML, C#, Java, Python, Android, WordPress, Drupal, Magento, Bootstrap 4, etc.

Posts by Amit Diwan


Role of Statistics in Data Science

Takeaways from this article

- In this article, we understand why data is important, and talk about the importance of statistics in data analysis and data science.
- We also understand some basic statistics concepts and terminologies.
- We see how statistics and machine learning work in sync to give deep insights into data.
- We understand the fundamentals behind Bayesian thinking and how the Bayes theorem works.

Introduction

Data plays a huge role in today's tech world. All technologies are data-driven, and humongous amounts of data are produced on a daily basis. A data scientist is a professional who is able to analyse data sources, clean and process the data, understand why and how such data has been generated, take insights from it, and make changes that profit the organization. These days, everything revolves around data.

Data cleaning: It deals with gathering the data and structuring it so that it becomes easy to pass this data as input to any machine learning algorithm. This way, redundant and irrelevant data as well as noise can be eliminated.

Data analysis: This deals with understanding more about the data, why the data has yielded certain results, and what can be done to improve them. It also helps calculate certain numerical values like the mean, the variance, the distributions, and the probability of a certain prediction.

How the basics of statistics serve as a foundation to manipulate data in data science

The basics of statistics include terminologies and methods of applying statistics in data science. Statistics is the most important tool for analysing data: the concepts involved in statistics help provide insights into the data and allow quantitative analysis to be performed on it. In addition, as a foundation, a data science aspirant must also know the basics and working of linear regression and classification algorithms.

Terminologies associated with statistics

- Population: the entire pool of data from which a statistical sample is extracted. It can be visualized as a complete data set of items that are similar in nature.
- Sample: a subset of the population, i.e. a part of the population that has been collected for analysis.
- Variable: a value whose characteristics, such as quantity, can be measured; it can also be referred to as a data point or a data item.
- Distribution: the way the sample data is spread over a specific range of values.
- Parameter: a value that is used to describe the attributes of a complete data set (also known as the 'population'). Examples: average, percentage.
- Quantitative analysis: deals with specific characteristics of the data, summarizing some part of it, such as its mean, variance, and so on.
- Qualitative analysis: deals with generic information about the type of data, and how clean or structured it is.

How does analysing data using statistics help gain deep insights into the data?

Statistics serves as a foundation while dealing with data and its analysis in data science. There are certain core concepts and basics which need to be thoroughly understood before jumping into advanced algorithms.

Not everyone understands the performance metrics of machine learning algorithms, such as F-score, recall, precision, accuracy, root mean squared error, and so on. Instead, a visual representation of the data and of the algorithm's performance on the data serves as a good way for the layperson to understand them. Visual representation also helps identify outliers and trivial patterns, and summary metrics such as the mean, median and variance help in understanding the middlemost value and how outliers affect the rest of the data.

Statistical data analysis

Statistical data analysis deals with the usage of certain statistical tools that need knowledge of statistics. Software can also help with this, but without understanding why something is happening, it is impossible to get considerable work done in statistics and data science.

Statistics deals with data variables that are either univariate or multivariate. Univariate, as the name suggests, deals with single data values, whereas multivariate data deals with multiple values per observation. Discriminant data analysis and factor data analysis can be performed on multivariate data. On the other hand, univariate data analysis, the Z-test and the F-test can be performed if we are dealing with univariate data.

Data associated with statistics is of many types; some of them are discussed below.

Categorical data represents characteristics of people, such as marital status, gender, the food they like, and so on. It is also known as 'qualitative data' or 'yes/no data'. It may be coded with numerical values like '1' and '2', where these numbers indicate one or another type of characteristic. These numbers are not mathematically significant, which means they cannot be arithmetically combined with each other.

Continuous data deals with values that lie in a continuous range; they can be measured but not counted. Predictions from a linear regression are continuous in nature, and a continuous distribution is described by a probability density function. On the other hand, discrete values can be counted and are discontinuous. Predictions from a logistic regression are considered to be discrete in nature. Discrete data is non-continuous, and the density concept does not come into the picture here; its distribution is described by a probability mass function.

The best way to learn statistics for data science

The best way to learn anything is by implementing it, by working on it, by making mistakes and learning from them. It is important to understand the concepts, either by going through standard books or well-known websites, before implementing them. Before jumping into data science, core statistics concepts such as regression, maximum likelihood, distributions, priors, posteriors, conditional probability, the Bayes theorem and the basics of machine learning have to be understood clearly.

Core statistics concepts

Descriptive statistics: As the name suggests, it uses the data to give out more information about every aspect of the data with the help of graphs, plots or numbers. It organizes the data into a structure, and helps think about the attributes that highlight the important parts of the data.

Inferential statistics: It deals with drawing inferences/conclusions about the population (the entire data set) from the sample data set, based on the relationships identified between data points in the sample. It helps in generalizing those relationships to the entire dataset. It is important that the sample drawn from the population is relevant and represents the population accurately.

Regression: The term 'regression', which is a part of both statistics and machine learning, talks about how data can be fit to a line, and how every point on that line gives some insight.
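As a hedged illustration of the "fit a line" idea (the symbols below are generic and not taken from the article), a simple linear regression models the output as

y \approx \beta_0 + \beta_1 x,

and the coefficients \beta_0 and \beta_1 are chosen so that the sum of squared differences between the observed and fitted values,

\sum_{i=1}^{n} \left( y_i - (\beta_0 + \beta_1 x_i) \right)^2,

is as small as possible.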
In machine learning terms, regression is one of the tasks that can be solved without being explicitly programmed: a line is fit to a given set of data points and is then extrapolated so that predictions can be made.

Maximum likelihood: It is a method that helps in finding the values of the parameters of a specific model. The values of the parameters have to be chosen such that the likelihood of the predictions is as high as possible given the data values that were actually observed. This means the difference between the actual and the predicted values has to be small, thereby reducing the error and increasing the accuracy of the predictions.

Note: This concept is generally used with logistic regression, where we are trying to find an output of 0 or 1, yes or no, and the maximum likelihood tells us how likely a data point is to be near 0 or 1.

Bayesian thinking

Bayesian thinking deals with using probability to model the process of sampling, and with being able to quantify the uncertainty associated with the data that will be collected.

The prior probability expresses the level of uncertainty associated with the data before it is collected and analysed. The posterior probability deals with the uncertainty that remains after the data has been collected.

Machine learning algorithms are usually focussed on giving the best predictions as output with minimal error, exact probabilities of specific events occurring, and so on. The Bayes theorem is a way of calculating the probability of a hypothesis (a situation which might not have occurred in reality) based on our previous experience and the knowledge we have gained from it. This is considered a basic concept that needs to be known.

The Bayes theorem can be stated as follows:

P(hypo | data) = (P(data | hypo) * P(hypo)) / P(data)

In the above equation:

- P(hypo | data) is the probability of the hypothesis 'hypo' when the data 'data' is given, which is also known as the posterior probability.
- P(data | hypo) is the probability of the data 'data' when the specific hypothesis 'hypo' is known to be true.
- P(hypo) is the probability of the hypothesis 'hypo' being true (irrespective of the data in hand), which is also known as the prior probability of 'hypo'.
- P(data) is the probability of the data (irrespective of the hypothesis).

The idea here is to get the value of the posterior probability, given the other quantities. The posterior probability for a variety of different hypotheses has to be found, and the hypothesis with the highest value is selected. This is known as the maximum probable hypothesis, and also as the maximum a posteriori (MAP) hypothesis:

MAP(hypo) = max(P(hypo | data))

If the value of P(hypo | data) is replaced with the expression we saw before, the equation becomes:

MAP(hypo) = max((P(data | hypo) * P(hypo)) / P(data))

P(data) is a normalizing term that helps in determining the probability. This value can safely be ignored when required, since it is a constant for all hypotheses.
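As a hedged, worked illustration of the theorem (the numbers are invented for this example and are not taken from any dataset in the article): suppose 1% of items are defective, a test flags 90% of defective items, and it also wrongly flags 10% of good items. Then

P(\text{data}) = 0.9 \times 0.01 + 0.1 \times 0.99 = 0.108,

P(\text{defective} \mid \text{flagged}) = \frac{0.9 \times 0.01}{0.108} \approx 0.083,

so even after a positive test the posterior probability of a defect is only about 8%, because the prior probability was so small. This update from prior to posterior is exactly what Bayesian thinking describes.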
Naïve Bayes classifier

It is an algorithm that can be used with binary or multi-class classification problems. It is a simple algorithm wherein the probability for every hypothesis is simplified in order to make the computation more tractable. Instead of calculating the value of every attribute combination like P(data1, data2, .., datan | hypo), we assume that every data point is independent of every other data point in the data set when the output is given. This way, the expression becomes:

P(data1 | hypo) * P(data2 | hypo) * ... * P(datan | hypo)

and the attributes are treated as independent of each other. This classifier performs quite well in the real world, on real data, even when the assumption that the data points are independent of each other does not hold.

Once a Naïve Bayes classifier has learnt from the data, it stores a list of probabilities in a data structure: the 'class probabilities' and the 'conditional probabilities'. Training such a model is quick, since only the probability of every class and of every value associated with each class needs to be determined, and this does not involve any optimization process or changing of coefficients to give better predictions.

- Class probability: the probability of every class that is present in the training dataset. It can be calculated as the number of instances that belong to a class divided by the total number of instances. For two groups: class probability of group 0 = (number of instances in group 0) / (number of instances in group 0 + number of instances in group 1).
- Conditional probability: the conditional probability of every input value given a class value. It can be calculated from the frequency of every data attribute value within a given class, divided by the number of instances with that class value: P(condition | result) = (number of instances with that condition and that result) / (number of instances with that result).

Beyond the concepts themselves, once users understand the way in which a data scientist needs to think, they will be able to focus on getting cleaner data and better insights, which leads to better analysis and, in turn, great results.

Introduction to statistical machine learning

- The methods used in statistics are important for preparing the data that is used to train and test a machine learning model. Some of these include outlier/anomaly detection, sampling of data, data scaling, variable encoding, dealing with missing values, and so on.
- Statistics is also essential to evaluate the model that has been used, i.e. to see how well the machine learning model performs on a test dataset, or on data that it has never seen before.
- Statistics is essential in selecting the final, appropriate model to deal with the specific data in a predictive modelling situation.
- It is also needed to show how well the model has performed, by taking various metrics and reporting how the model has fared.

Metrics used in statistics

Most data can be fit to a common pattern that is known as the Gaussian distribution or normal distribution. It is a bell-shaped curve, and the data it describes can be summarized with the measures mentioned below. There are other distributions as well, and the right one depends on the type of data we have and the insights we need from it, but the Gaussian is considered one of the basic distributions.

- Mean: the arithmetic average of the data points, i.e. the sum of all the values divided by the number of values; for a Gaussian it is also the most likely value.
- Mode: the data point that occurs the greatest number of times, i.e. the value whose frequency in the dataset is the highest.
- Median: a measure of central tendency of the data set. It is the middle number, found by sorting all the data points and picking the middle-most element. If the number of data points in the dataset is odd, the single middle value is picked, whereas if the number of data points is even, the two middle values are picked and their mean is calculated.
- Range: the value calculated by finding the difference between the largest and the smallest value in the dataset.
- Quartile: as the name suggests, quartiles are values that divide the data points in a dataset into quarters. They are calculated by sorting the elements in order and then dividing the dataset into four equal parts. Three quartiles are identified: the first quartile, which is the 25th percentile; the second quartile, which is the 50th percentile; and the third quartile, which is the 75th percentile. Each of these quartiles tells us what percentage of the data is smaller or larger in comparison to the rest. For example, the 25th percentile means that 25 percent of the data set is smaller than that value and the remaining 75 percent is larger. Quartiles help in understanding how the data is distributed around the median (which is the 50th percentile, or second quartile).
- Variance: the average of the squared differences between every value and the mean of the distribution.
- Standard deviation: the square root of the variance; it is a measure of the dispersion of the data points in the input data.

Conclusion

In this post, we understood why and how statistics is important for understanding and working with data science. We saw a few statistics terminologies that are essential for understanding the insights which statistics gives a data scientist. We also saw a few basic concepts and algorithms that every data scientist needs to know in order to learn other, more advanced algorithms.

Why You Should Use Angular JS

Takeaways from this article

- In this post, we will understand the prominence of Angular JS and how it plays an important role in front-end web development.
- We will understand the applications of Angular JS, along with its usage.
- We will also see how it compares to other frameworks that are used for the same purpose.

What is Angular JS?

Angular JS is a JavaScript framework that is currently maintained by the tech giant Google. It is used to build single page applications and web-based mobile applications that are interactive for the user. It is essentially an extension of the DOM model that makes applications more user-friendly and robust, and it is also an implementation of the MVC framework.

What are the applications of Angular JS?

Menu creation

Menus appear in almost all websites, be they single page or multi-page. A menu is a navigation tool that responds to an input sent by the user: this could be a click or a touch. It includes designs that make the web page more visible, or that make the user experience better. Angular JS can be used to improve the user-friendliness of such an application: elements from CSS can be added to the HTML already present in the Angular JS framework to improve the overall view of the page, thereby improving the user experience.

Interactive single page applications

There are various advantages to a web page that behaves like a single page application. Instead of having to load and fetch separate pages as and when the user requests them, a single page website means less overhead and an improved user experience. This is because all the code required by the web application can be retrieved in one go, or can be dynamically loaded as and when required by the user. The result behaves more like a desktop application than like a multi-page website.

Advantages of Angular JS with respect to its peers

Easy-to-implement MVC framework

Many applications implement MVC frameworks, and Angular JS is one of them. The highlight of using Angular JS is that MVC is implemented by splitting the web application into the various parts of MVC, and every part automatically performs its own work. The MVC elements are managed by Angular JS without the developer explicitly specifying so; the framework acts as a pipeline that integrates the separate elements. It is an intermediate point of contact that makes sure the components fit and integrate properly. This way, the developer can concentrate on building quality applications without worrying about integration.

Interactive user interface

Angular JS uses HTML to define the user interface of the application, and since HTML is simple to use, easy to understand and a declarative language, the entire application becomes simple. It is easier to use HTML than JavaScript to define the elements of the interface: HTML is recognizable the moment it is encountered thanks to its format and tags, whereas code written in JavaScript first has to be read and understood. In addition to this, HTML can be used to follow the flow of control of the application (HTML works together with the controllers of the MVC framework as well).
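As a hedged sketch of the MVC split and HTML-defined view described above (the module, controller and property names here are made up for illustration): the controller only exposes model data and behaviour on its scope, the HTML view binds to it declaratively, and Angular JS wires the two together.

var app = angular.module('demoApp', []);

app.controller('greetCtrl', function ($scope) {
  // model data exposed to the view
  $scope.name = 'World';

  // behaviour the view can call, e.g. from ng-click
  $scope.greet = function () {
    return 'Hello, ' + $scope.name + '!';
  };
});

A view refers to these simply through expressions such as {{ name }} or ng-click="greet()", without any manual DOM wiring in the controller.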
Easy-to-understand directives

A directive is a concept that lets the developer embed additional functionality into HTML when working with Angular JS, without making the markup more complicated. A virtually limitless number of elements can be added to HTML to improve the functionality of the application, without making it messy, complicated or difficult to maintain.

Angular JS provides directives that help the developer build customized HTML tags; the built-in ones can be identified by their 'ng' prefix. These newly built customized tags can be used as custom widgets, and they can also be used to manipulate the attributes of the DOM. There are certain default directives as well. The application maps the customized or default directives (which are also known as special attributes) to the elements of HTML so that the respective functionality is generated for the application. This is achieved by creating the directive and attaching that specific attribute or tag to the DOM element where it is required. With the help of data binding, changes to the application data are directly reflected in the 'view' section of the application.
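A minimal sketch of a custom directive, assuming a hypothetical 'helloCard' element that is not part of the article's own examples:

app.directive('helloCard', function () {
  return {
    restrict: 'E',                 // used as an element: <hello-card name="Reader"></hello-card>
    scope: { name: '@' },          // isolate scope that reads the 'name' attribute
    template: '<div class="card">Hello, {{ name }}!</div>'
  };
});

Once registered, the markup stays declarative (just the hello-card tag), while the widget's behaviour and template live in one reusable place.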
Working with the DOM is simple

In general, the view part of the MVC framework reaches into the DOM in order to display the data, and it is at this point that the DOM is manipulated to add extra functionality. The directives present in Angular JS can be used to bind the application data to the customized or default special attributes of the HTML DOM elements. This means views are simply HTML pages with placeholders, which Angular JS later replaces with the relevant values when dynamic data comes in. This is a convenient approach, since it helps integrate the view with the website design and eliminates complex, messy code.

POJO

POJO simply means Plain Old JavaScript Object, which indicates that Angular JS's data models are plain, simple and easy to understand as well as implement. This also means that functions such as getters and setters are not required: the properties of the application's objects can be changed directly. Directives and DOM elements can then be used conveniently to add the required functionality.

Writing messy code is avoided

The need to write many lines of messy code is avoided when using Angular JS. The need to write code for the MVC pipeline is eliminated, thereby reducing a considerable amount of overhead. The data model can be specified without using or embedding extra features. The concept of data binding makes sure that new data does not have to be added into the view manually; it is integrated into the view with the help of the DOM and directives. This keeps the entire procedure simple and easy.

Directives are separate, and they are not part of the application's code base. This means a directive can be written in parallel without worrying about integrating it with other parts of the code, since that part is taken care of by Angular JS itself. The time saved can be used productively to improve the quality and features of the Angular JS application instead of worrying about integrating the code with the rest of the application. As a result, there is more flexibility and a greater ability to extend the code when building high-end web-based applications.

Filters in the data

As the name suggests, a filter is a method that is used to filter or format the data before it reaches the view part of the application. Filters help with tasks such as formatting numbers to a certain number of decimal places, filtering an array based on a specific parameter, reversing the order of ranges, or implementing pagination for the web pages of the application. Filters can be created so that such functions work properly while staying separated from the application; they are associated only with the data transformations, and not with the rest of the application's code.

Communications know the context

The publisher-subscriber system, also known as a PubSub system, is a common model used for decoupled communication. Most of the tools that implement the publisher-subscriber model over the internet are not aware of the context in which they are being used, but it is important to ensure that MVC elements that do not belong to the same context are not placed together. When Angular JS is combined with the publisher-subscriber model, the context can be specified explicitly, thereby addressing only a specific part of the application and keeping the MVC parts separate. The remaining parts of the application do not receive the message, which means there is no unintended communication. This is easy to implement and simple to use for communication between controllers.

Two-way data binding

Data binding is a technique that helps in the synchronization of the model and the view of the MVC framework. It makes it easy to change, manipulate and work with the model and the view, and with their properties and data. With the help of data binding, any change in the model is directly reflected in the 'view' portion of the framework without the need to manipulate the DOM or write event handlers.

Service providers

The scope of the code can be managed with the help of controllers in Angular JS: data can be prepared before it reaches the code that implements the business logic or talks to the server. Controllers perform most of the heavy lifting, since Angular binds specific services to them. This way, a service used by one controller does not get mixed up with other parts of the application or with the MVC. An API is created to expose only the functionality that needs to be used. Offline data can also be managed by synchronizing it with the server, using various methods to pull or push data to and from a server. A shared data resource service can also be created to ensure that different controllers share the same resources at a particular point in time.
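A hedged sketch of a service being created and injected into a controller (the service name, data and controller are illustrative only):

app.factory('messageService', function () {
  var messages = [];                 // data shared by every controller that injects the service
  return {
    add: function (text) { messages.push(text); },
    all: function () { return messages; }
  };
});

app.controller('inboxCtrl', function ($scope, messageService) {
  // the service is supplied by Angular JS's dependency injection,
  // so the controller never constructs or locates it itself
  messageService.add('Welcome!');
  $scope.messages = messageService.all();
});

Because controllers receive the service through injection, the same 'messageService' can be shared across controllers or swapped out for a mock when testing.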
Unit testing capabilities

Angular JS was built with testing requirements in mind. Testing plays an important role and can prove to be a significant step in identifying loopholes and issues; it also helps in understanding the overall application from a different perspective. For this change in perspective, end-to-end testing can be performed on an Angular JS application. Unit testing can be performed with or without the help of the Karma framework, and it is usually performed separately on the controllers and directives of the application. An Angular JS application can also be tested by setting up individual test pages that create a specific component of the application, add interactivity to it, and check whether it works correctly.

Dependency Injection (DI)

An Angular JS application is wired together with the help of Dependency Injection (DI), because Angular JS contains a built-in DI system. DI is a design pattern that deals with components and their dependencies: it helps developers create components, work on their dependencies, and receive the components that are required. It ensures that controllers are managed easily within their respective scopes, because the controllers depend on DI for the data that is passed to them.

Why Angular JS should be used

Angular JS is an open source framework that binds the data in two ways: the view and the model of the application are synchronized with each other whenever a user interaction takes place via the interface. This happens automatically, so the DOM gets updated accordingly. It is considered budget-friendly and easy to build with.

The highlight of using Angular JS is that MVC is implemented by splitting the application into the parts of MVC, which automatically perform their respective work. The MVC elements are managed by Angular JS without the developer explicitly specifying so, and the framework acts as a pipeline that integrates the separate elements. It is an intermediate point of contact that makes sure the components fit and integrate properly. Adding new functionality is no longer a difficult task: since the view is not tied directly to the DOM, new functions can be added and integrated with the application by letting the DOM interact with the directives and the HTML page.

Conclusion

In this post, we understood the significance of using Angular JS, how it can be applied in the real world, and how it is advantageous in comparison to other frameworks with respect to its implementation, its DOM model, simple coding capabilities, unit testing abilities and much more. Angular JS applications are simple to build, easy to use and user-friendly, as well as easy to maintain. Since the amount of code that needs to be written is small, the developer can concentrate on building better and more interactive applications, thereby improving the user experience.

Note: Angular (2 and above) is the version upgrade of AngularJS; in this article, Angular and Angular JS refer to AngularJS.

Types of Classification in Machine Learning

Takeaways from this article In this post, we understand the concept of classification, regression, classification predictive modelling, and the different types of classification and regression.  We understand why and how classification is important. We also see a few classification algorithms and their implementations in Python.  We understand logistic regression, decision trees, random forests, support vector machines, k nearest neighbour and neural networks. We understand their inner workings and their prominence.Classification refers to the process of classifying the given data set into different classes or groups. The classification algorithm is placed under predictive modelling problem, wherein every class of the dataset is given a label, to indicate that it is different from other classes. Some examples include email classification as spam or not, recognition of a handwritten character as a specific character only, and not another character and so on.   Classification algorithms need data to be trained with many inputs and their respective output, with the help of which the model learns. It is important to understand that the training data must encompass all kinds of data (options) which could be encountered in the test data set or real world. ClassificationThe 4 different prominent types of classification include the following:Binary classification Multi-class classification Multi-label classification Imbalanced classification  Binary classificationAs the name suggests, it deals with the tasks in classification that only have two class labels. Some examples include: email classification as spam or not, whether the price of a stock will go up or go down (ignoring the fact that it could also remain as is), and so on. The value obtained after classifying the data would be either 0 or 1, yes or no, normal or abnormal.  The Bernoulli probability distribution is used as prediction to classify the data as 0 or 1. Bernoulli distribution is a discrete (discontinuous) distribution that gives a binary outcome -- a 0 or a 1. Algorithms that are used to perform binary classification include the following:Logistic regression Decision trees Support vector machine Naïve Bayes ‘k’nn (k nearest neighbors) Code to demonstrate a binary classification task:  from numpy import where  from collections import Counter  from sklearn.datasets import make_blobs  from matplotlib import pyplot  X, y = make_blobs(n_samples=560, centers=2, random_state=1)  print("Data has been generated ")  print("The number of rows and columns are ")  print(X.shape, y.shape)  my_counter = Counter(y)  print(my_counter)  for i in range(10):  print(X[i], y[i])  for my_label, _ in my_counter.items():  row_ix = where(y == my_label)[0]  pyplot.scatter(X[row_ix, 0], X[row_ix, 1], label=str(my_label))  pyplot.legend()  pyplot.show()Output: Data has been generated   The number of rows and columns are   (560, 2) (560,)  Counter({1: 280, 0: 280})  [-9.64384208 -4.14030356] 1  [-0.8821407  4.2877187] 0  … Code explanation The required packages are imported using the ‘import’ function.  The dataset is generated using the ‘make_blobs’ function and by specifying the number of rows and columns that need to be generated.  In addition, the number of classes into which the data points need to be labelled into is also defined. Here, it is 2. The number of rows and columns are displayed along with the summarization of class labelling.  A ‘for’ loop is used to print the first few classified values.  
The entire dataset is then plotted on a graph in the form of a scatterplot using the ‘pyplot’ function and displayed on the screen.  Multi-class classificationIt is a type of classification wherein the input data set is classified/labelled into more than 2 classes. Some examples of multi-class classification include:Animal species classification Facial recognition/classification Text translation (special type of multi-class classification task) This is different from binary classification in that it doesn’t have just two classes like 0 or 1, but more, and they need not be 0 or 1. They could be names or other continuous or discontinuous numbers. The data points are classified into one among many different classes given.  The number of class labels may be too high, when trying to classify a given photo into that of a specific person. Text translation also deals with a similar issue, wherein the word placement may vary widely and there maybe thousands of combinations of the same number of words. Multinoulli probability distribution is a discrete/discontinuous probability distribution, where the output could be any value within a given range. Algorithms that are used for binary classification can also be used for multi-class classification.  Code to demonstrate the multi-class classification: from numpy import where  from collections import Counter  from sklearn.datasets import make_blobs  from matplotlib import pyplot    X, y = make_blobs(n_samples=670, centers=5, random_state=1)  print("The dataset has been generated")  print("The rows and columns are ")  print(X.shape, y.shape)  my_counter = Counter(y)  print(my_counter)  for i in range(10):  print(X[i], y[i])  for my_label, _ in my_counter.items():  row_ix = where(y == my_label)[0]  pyplot.scatter(X[row_ix, 0], X[row_ix, 1], label=str(my_label))  pyplot.legend()  pyplot.show() Output:  The dataset has been generated  The rows and columns are   (670, 2) (670,)  Counter({3: 134, 0: 134, 2: 134, 4: 134, 1: 134})  [-6.45785776 -3.30981436] 3  [-6.44623696 -2.90184841] 3  [-5.60217602 -0.65990849] 3 Code explanation: The required packages are imported using the ‘import’ function.  The dataset is generated using the ‘make_blobs’ function and by specifying the number of rows and columns that need to be generated.  In addition, the number of classes into which the data points need to be labelled into is also defined. Here, it is 5.  The number of rows and columns are displayed along with the summarization of class labelling.  A ‘for’ loop is used to print the first few classified values.  The entire dataset is then plotted on a graph in the form of a scatterplot using the ‘pyplot’ function and displayed on the screen. Multi-label classification   Multi-label classification refers to those classification problems that deal with more than one class being assigned to a single data point, i.e. every data point would belong or be labelled into more than one class/label. A simple example would be a photo that contains multiple people, not just one. This means one photo might be classified or labelled as more than one (in fact thousands) of persons. 
This is different from binary and multi-class classification, since the number of labels into which one data point is classified remains same, i.e one.Some multi-label classification algorithms include: Multi-label random forests Multi-label gradient boosting Code to demonstrate multi-label classification: from sklearn.datasets import make_multilabel_classification  X, y = make_multilabel_classification(n_samples=800, n_features=2, n_classes=5, n_labels=3, random_state=1)  print("The number of rows and columns are ")  print(X.shape, y.shape)  for i in range(8):  print(X[i], y[i]) Output: The number of rows and columns are   (800, 2) (800, 5)  [22. 24.] [1 0 0 1 1]  [12. 35.] [0 1 0 1 0]  [27. 30.] [1 1 0 0 1]  ..  Code explanation The required packages are imported using the ‘import’ function.  The dataset is generated using the ‘make_multilabel_classification’ function present in the scikit-learn package is used.  It is done by specifying the number of rows and columns that need to be generated.  The number of rows and columns are displayed along with the summarization of class labelling.  A ‘for’ loop is used to print the first few classified values.  The entire dataset is then plotted on a graph in the form of a scatterplot using the ‘pyplot’ function and displayed on the screen.  Imbalanced classification This is a type of classification wherein the number of data points of the dataset in every class is not distributed equally. This means imbalanced classification is basically a binary classification problem, which doesn’t have a uniform distribution of points, one class could contains an extremely large amount of data points, and the other class might contains a very small number of data points.  Examples of imbalanced classification problem include: Fraud detection in credit cards Anomaly detection in the given dataset There are specialized algorithms that are used to classify this data into the large data point group or small data point group. Some algorithms have been listed below: Cost sensitive decision trees Cost sensitive logistic regression Cost sensitive support vector machines Code to demonstrate imbalanced binary classification #An example of imbalanced binary classification task  from numpy import where  from collections import Counter  from sklearn.datasets import make_classification  from matplotlib import pyplot  #The dataset is defined  X, y = make_classification(n_samples=800, n_features=2, n_informative=2, n_redundant=0, n_classes=2, n_clusters_per_class=1, weights=[0.99,0.01], random_state=1)  #The shape of the dataset is summarized  print("The number of rows and columns ")  print(X.shape, y.shape)  #The labelled data is summarized  my_counter = Counter(y)  print(my_counter)  #A few data points are summarized  for i in range(10):  print(X[i], y[i])  #The dataset is plotted on a graph and displayed  for my_label, _ in my_counter.items():  row_ix = where(y == my_label)[0]  pyplot.scatter(X[row_ix, 0], X[row_ix, 1], label=str(my_label))  pyplot.legend()  pyplot.show() Output: The number of rows and columns   (800, 2) (800,)  Counter({0: 785, 1: 15})  [0.28622882 0.38305399] 0  [1.17971415 0.48003249] 0  [1.32658794 0.71712275] 0  Code explanation The required packages are imported using the ‘import’ function.  The dataset is generated using the ‘make_classification’ function present in the scikit-learn package is used.  It is done by specifying the number of rows and columns that need to be generated.  
The number of rows and columns is displayed along with a summary of the class labelling.
A 'for' loop is used to print the first few classified values.
The entire dataset is then plotted as a scatterplot using the 'pyplot' module and displayed on the screen.

Logistic regression
In this classification technique, instead of predicting continuous values as in linear regression, we are concerned with predicting discrete values. It is simply a classification technique that classifies the given data points into one of the labelled classes. Usually we are looking at a Boolean output, wherein the result is either 0 or 1, yes or no, and so on. Some examples include:
Classifying an email as spam or not
Finding whether it would rain today or not

Naïve Bayes classification
Bayes theorem is a way of calculating the probability of a hypothesis (a situation which might not have occurred in reality) based on our previous experiences and the knowledge we have gained from them. Bayes theorem is stated as follows:

P(hypo | data) = (P(data | hypo) * P(hypo)) / P(data)

In the above equation,
P(hypo | data) is the probability of a hypothesis 'hypo' when data 'data' is given, which is also known as the posterior probability.
P(data | hypo) is the probability of the data 'data' when the specific hypothesis 'hypo' is known to be true.
P(hypo) is the probability of the hypothesis 'hypo' being true (irrespective of the data in hand), which is also known as the prior probability of 'hypo'.
P(data) is the probability of the data (irrespective of the hypothesis).
The idea here is to compute the posterior probability, given the other quantities. The posterior probability for a variety of different hypotheses is found, and the hypothesis with the highest value is selected. This is known as the maximum probable hypothesis, also known as the maximum a posteriori (MAP) hypothesis.

MAP(hypo) = max(P(hypo | data))

If the value of P(hypo | data) is replaced with the expression we saw before, the equation becomes:

MAP(hypo) = max((P(data | hypo) * P(hypo)) / P(data))

P(data) is a normalizing term that helps in determining the probability. Since it is a constant for a given dataset, it can be ignored when only the most probable hypothesis is required. The Naïve Bayes classifier is an algorithm that can be used with binary or multi-class classification problems. Once a Naïve Bayes classifier has learnt from the data, it stores a list of probabilities, such as the class probabilities and the conditional probabilities. Training such a model is quick, since only the probability of every class and of every value given a class needs to be determined, and this doesn't involve any optimization process or coefficient tuning. A short sketch of both techniques in scikit-learn is shown below.
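Neither of these two techniques is accompanied by code in this post, so here is a minimal sketch of how both could be tried out, assuming scikit-learn is installed; the synthetic dataset, variable names and parameter values are illustrative only.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# A small synthetic binary classification dataset (illustrative values)
X, y = make_classification(n_samples=500, n_features=4, n_classes=2, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

# Logistic regression: predicts a discrete class label (0 or 1)
log_reg = LogisticRegression(max_iter=1000)
log_reg.fit(X_train, y_train)
print("Logistic regression accuracy:", accuracy_score(y_test, log_reg.predict(X_test)))

# Gaussian Naive Bayes: stores class priors and per-feature conditional probabilities
nb = GaussianNB()
nb.fit(X_train, y_train)
print("Naive Bayes accuracy:", accuracy_score(y_test, nb.predict(X_test)))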
K-nearest neighbour (KNN)
The simplest way to understand k-nearest neighbour is that the training data for the algorithm is all the data, in its entirety. KNN doesn't build a separate model other than the one that stores the entire dataset, which means there is no explicit training step actually happening. KNN makes predictions and extracts patterns directly from the training dataset itself. When a new data point is encountered, its corresponding value can be found by navigating through the entire training dataset and looking at the 'k' most similar neighbours. Once the 'k' neighbours have been identified, they are summarized and the output for the new instance is found. In the case of regression, the mean of these outputs is the result, and in the case of classification, the mode of these outputs is the result.
How are the 'k' neighbours determined? To find the 'k' instances from the training dataset that are most similar to the new data point, we use a distance measure, and the most popular metric is the Euclidean distance. The Euclidean distance between a new point 'a' and an existing point 'b' is the square root of the sum of the squared differences between their corresponding attribute values:

Euclidean distance(a, b) = sqrt( sum( (a - b) ^ 2 ) )

Other distances that can be used include:
Hamming distance
Manhattan distance
Minkowski distance
When the number of data points in the training set increases, the computational cost of KNN also increases.

Support vector machines (SVM)
The hyperplane in a linear SVM is learnt by performing simple transformations using linear algebra, and the predictions rely on the inner product of input vectors. The inner product of two vectors is the sum of the products of their corresponding components: to find the inner product of two input vectors [a, b] and [c, d], we compute a*c + b*d.
In order to predict a new value, the dot product is used, and the prediction can be written using the equation below:

f(x) = coeff-0 + sum( coeff-i * (x, x-i) )

Here, x is the new input vector, the x-i are the support vectors taken from the training data, (x, x-i) denotes their inner product, and coeff-0 and the coeff-i are coefficients that are determined from the training dataset by the learning algorithm. Stochastic gradient descent or the sequential minimal optimization technique can be used for this. These optimization techniques break the main problem down into sub-problems, and every sub-problem is solved by calculating the required value.

Decision trees
This is a part of predictive modelling in machine learning that is considered one of the most powerful algorithms. It is also known as CART, i.e. classification and regression trees, since it can be used for classification as well as regression tasks. A decision tree can be visualized as a binary tree that has a root, many branches and leaves, just like the tree data structure. The root node represents a single input variable and a split point on it, and the branches that lead to the leaves are used in predicting the value for a given input. The tree structure can be stored in the form of a graph or a set of rules. Once the tree is available, it is simple to make predictions with the help of the leaf nodes: data is filtered from the root of the tree down the branch that is relevant to it until it reaches a leaf. No special data preparation or pre-processing is required while working with CART or decision trees. A short sketch of these three classifiers in scikit-learn follows below.
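The three classifiers described above can all be tried with a few lines of scikit-learn. The following is a minimal sketch under that assumption; the synthetic dataset and the parameter values (such as k = 5) are illustrative only.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Illustrative synthetic dataset
X, y = make_classification(n_samples=500, n_features=4, n_classes=2, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

models = {
    "KNN (k=5)": KNeighborsClassifier(n_neighbors=5),          # stores the training data, votes among neighbours
    "Linear SVM": SVC(kernel="linear"),                        # learns a separating hyperplane
    "Decision tree": DecisionTreeClassifier(random_state=1),   # CART-style binary tree of splits
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))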
Gradient boosting
It is a method to build predictive models in machine learning. The idea behind boosting is to see whether a weak learning algorithm can be made to learn better. This involves three components:
A weak learning algorithm that makes predictions: The decision tree is the weak learner used in gradient boosting. Trees are constructed greedily, choosing the best split points to minimize the loss, and they are kept deliberately constrained (weak) so that the ensemble can keep improving on them.
A loss function that needs to be optimized: This depends on the problem in hand. Many different loss functions can be used, such as squared error, logarithmic loss and so on, and a new boosting algorithm doesn't have to be figured out for every loss function.
An additive model that adds weak learners to minimize the loss function: Trees are added to the gradient boosting model one at a time, so that the existing trees in the model are not changed. This way, the loss is reduced each time a new tree is added. Usually, the gradient descent optimization technique is used to minimize the loss.

Random forest
Random forest is an ensemble machine learning algorithm that uses bootstrap aggregation, or bagging. Bootstrapping is a statistical method that helps in estimating a quantity from a given data sample, and bagging uses it to reduce the variance of algorithms that have a high variance. Examples of algorithms with high variance include CART, i.e. decision trees. Decision trees are extremely sensitive to the data on which they are trained: if the training data changes, the resulting tree can be completely different, so a small change in the input makes a large difference to the overall training and output.
An ensemble method is one that combines the predictions coming from many different machine learning models, thereby making the predictions more accurate than relying on a single model's prediction. It is like combining the best models to get the best of the best values.
Random forest makes sure that every sub-tree that trains on the data and makes predictions is less correlated with the other sub-trees doing the same. At each split, the learning algorithm is limited to a random sample of the columns, so that it doesn't have the opportunity to look through all the variables and select the single optimal split point (which is what plain CART does). For classification trees, a good value for the number of randomly selected columns is sqrt(p), where p refers to the number of input variables. For regression trees, a good value is p/3.

Neural networks
This is the part of deep learning that deals with artificial neural networks. In general, the word 'neural' refers to the neurons of the human brain and their role in decision making. The idea behind an artificial neural network, abbreviated as ANN, is that it takes decisions in a way similar to how the neurons in the brain work while performing a task or taking a decision.
It is called deep learning because these networks have many layers, and every layer has a number of nodes. Every layer processes some part of the data and passes the computed data on to the next layer; the input to one layer is the output of the previous layer. Usually, the input layer has a large number of nodes, while the output layer has just one or a few nodes that represent the processed result.

Conclusion
In this post, we understood how classification works, the different types of classification, several classification and regression algorithms and how they work, and their implementations by generating simple datasets and working through them using Python and the relevant machine learning packages.

The K-Fold Cross Validation in Machine Learning

Takeaways from the article This article will cover one of the most important concepts: 'k' fold cross validation in Machine Learning. It discusses how cross validation works, why it is important, and how 'underfitting', 'overfitting' or 'just the right fit' affects the output. Along with overfitting, we will discuss what a hyperparameter is, how a hyperparameter for a given model is decided, and how its value can be checked. The article also covers applications of 'k' fold cross validation, how to choose the value of 'k', and the variations of cross-validation. We will also walk through the implementation in Python along with a code explanation.

Machine learning algorithms, apart from their many other uses, are used to extract patterns from data or predict certain continuous or discrete values. It is also important to make sure that the model built on the data fits it just right: it neither overfits nor underfits. 'Overfitting' and 'underfitting' are two concepts in Machine Learning that deal with how well the data has been trained on and how accurately new data can be predicted. Closely related to overfitting is the hyperparameter, a value set by the user to control how the algorithm behaves.

Underfitting in Machine Learning
Given a dataset and an appropriate algorithm to train on it, if the model fits the dataset rightly, it will be able to give accurate predictions on never-seen-before data. On the other hand, if the Machine Learning model hasn't been trained properly on the given data, for whatever reason, it will not be able to make accurate or even reasonably good predictions, because it would have failed to capture the essential patterns from the data. If the training of a model is stopped prematurely, it can lead to underfitting: the data won't be trained on for the right amount of time, the model won't perform well on new data, and its results cannot be relied upon. (Figure: the dashed blue line is a model that underfits the data, while the black parabola fits the data points well.)

Overfitting in Machine Learning
This is just the opposite of underfitting. Instead of extracting the patterns or learning the data just right, the model learns too much. All the data is captured, including noise (irrelevant data that wouldn't contribute to predicting the output when new data is encountered), and as a result the model cannot generalize to new data. During training the model performs well, literally memorizing the data it has been given, but in the testing phase, or when a new data point is introduced, it fails miserably: the new data point will not be handled well by an overfit machine learning model.

Note: In general, the more the data, the better the training and the better the prediction results. But it should also be ensured that the model is not simply capturing all points; it should be learning, thereby ignoring the noise present in the data.

Before exposing the model to the real world, the training data is divided into two parts. One is called the 'training set' and the other is known as the 'test set'. Once the training is completed on the training dataset, the test set is exposed to the model to see how it behaves with newly encountered data. This gives a sufficient idea of how accurately the model can work with new data.
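Here is a minimal sketch of that train/test split, assuming scikit-learn is installed; the synthetic dataset and the 80/20 split ratio are illustrative only.

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Illustrative synthetic regression data
X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=42)

# Hold out 20% of the data as the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)                                        # train only on the training set
print("Score on unseen test data:", model.score(X_test, y_test))   # evaluate on the held-out test set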
Hyperparameter
Hyperparameters are values used in Machine Learning that are set by the user after performing a few trials, to see how the algorithm behaves. Examples include the regularization strength in ridge regression and the learning rate in gradient descent.
How is a hyperparameter for a given model decided? Hyperparameters depend on various factors like the algorithm in hand, the data provided, and so on. The optimal value of a hyperparameter can be found only through trial and error; this process is known as hyperparameter tuning.
If the test set (the dataset used to see how the model works on new data) is constantly used to check and tune the hyperparameter, the model develops an affinity to the test set. When this happens, the test set almost becomes a training set, and it can no longer be used to see how well the model generalizes to new data. To overcome this situation, the original dataset is split into 3 different sets: the 'training dataset', the 'validation dataset', and the 'test dataset'.
Training dataset: This is passed to the given machine learning algorithm to be trained upon.
Validation dataset: This dataset is used to evaluate the model, i.e. for hyperparameter tuning. The result is checked, and if it is not appropriate, the hyperparameter value can be changed and tested on the validation set again. This way, the model is not exposed to the test set, thereby preserving the test set's ability to measure the model.
Testing dataset: This dataset is used to see how the model performs on new data.
Disadvantages of randomly dividing the dataset into three different parts:
Some parts of the dataset may contain a large number of a specific type of data, so during training, essential patterns may be missed.
The number of samples in the training set is reduced, since the data is divided into three parts.
The solution to the above issues is to use cross-validation.

Cross-validation
It is a process in which the original dataset is divided into two parts only: the 'training dataset' and the 'testing dataset'. The need for a separate 'validation dataset' is eliminated when cross-validation comes into the picture. There are many variations of the cross-validation method, and the most commonly used one is known as 'k' fold cross-validation.

Steps in 'k' fold cross-validation
In this method, the training dataset is split into 'k' smaller parts/sets, hence the name 'k'-fold.
The training dataset is divided into 'k' parts, out of which one part is left out and the remaining 'k-1' parts are used to train the model.
This is repeated multiple times; the number of repetitions is specified by the user in the code.
The part that was kept out of the training is used as a 'validation dataset'. It can be used to tune hyperparameters, see how the model performs, and change the values accordingly to yield better results.
The amount of data available for training is reduced only slightly, rather than considerably as in the three-way split. This method also makes sure that the model remains robust and generalizes well to the data.
(Figure: an illustration of the 'k' fold cross-validation splits.) A minimal sketch of the splitting mechanics is shown below, before the full worked example.
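This is a minimal sketch, assuming scikit-learn's KFold class, that only shows how the folds are produced; the toy array of ten samples is illustrative, and a full worked example with a real dataset follows.

import numpy as np
from sklearn.model_selection import KFold

# Ten toy samples, purely for illustration
X = np.arange(10).reshape(10, 1)

kf = KFold(n_splits=5, shuffle=True, random_state=42)

# Each iteration leaves one fold out as the validation set
for fold, (train_idx, val_idx) in enumerate(kf.split(X), start=1):
    print("Fold", fold, "train indices:", train_idx, "validation indices:", val_idx)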
Once part of the training set has been used to find the best hyperparameter, and the best hyperparameter/s are found, the data is again sent to the model to be retrained. The model will also have the knowledge of the old training data, and along with it, it may give better results, which can be tested by seeing how the model performs on the testing set.

How is the value of 'k' decided?
This depends on the data in hand, and the value is chosen by trial and error. It is commonly taken as 10, although this choice is somewhat arbitrary. A large value for 'k' means less bias and higher variance; it also means more data samples can be used for training, which tends to give better, more precise outcomes.

Code for 'k' fold cross-validation
Data required to understand 'k' fold cross validation can be taken/copied from the below location:
This data can be pasted into a CSV file and the code below can be executed. Make sure to give a heading to every column.

from sklearn.model_selection import KFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import cross_val_score

import numpy as np
import pandas as pd

dataset = pd.read_csv("path-to.csvfile in your workstation")
X = dataset.iloc[:, [0, 12]]
y = dataset.iloc[:, 13]
scaler = MinMaxScaler(feature_range=(0, 1))
X = scaler.fit_transform(X)
my_scores = []
best_svr = SVR(kernel='rbf')
# shuffle=False keeps the folds in their original order; a random_state is only needed when shuffle=True
cv = KFold(n_splits=10, shuffle=False)

for train_index, test_index in cv.split(X):
    print("Training data index: ", train_index, "\n")
    print("Test data index: ", test_index)
    X_train, X_test, y_train, y_test = X[train_index], X[test_index], y[train_index], y[test_index]
    # Fit on the k-1 training folds and score (r-squared) on the held-out fold
    best_svr.fit(X_train, y_train)
    my_scores.append(best_svr.score(X_test, y_test))

print("The mean value is")
print(np.mean(my_scores))
# or
cross_val_score(best_svr, X, y, cv=10)
# (or)
cross_val_predict(best_svr, X, y, cv=10)

Output:

Training data index:  [  0   1   2   3   4   5   6   7   8   9  10  11  12 ...  82  83  84
 170 171 172 173 174 175 176 177 178 179 180 ... 502 503 504 505]

Test data index:  [ 85  86  87  88  89  90  91  92  93  94  95  96  97 ... 167 168 169]

The mean value is
0.28180923811255787

Note: This is just one split, and this output is repeated as many times as mentioned in 'n_splits'. The 'KFold' function returns the indices of the data points; hence, if a user wishes to see the values placed at those indices, they have to be accessed appropriately. Instead of using 'mean' to find the r-squared value, the 'cross_val_predict' or 'cross_val_score' functions present in the 'scikit-learn' package (model_selection) can also be used. 'cross_val_predict' gives the predictions made on the dataset as every split is made during training, while 'cross_val_score' gives the r-squared value using cross-validation.

Code explanation
The packages that are necessary to work with 'k' fold cross validation are imported using the 'import' keyword.
The data in the form of a CSV file needs to be brought into the Python environment. Hence, the 'read_csv' function present in the 'pandas' package is used to read the CSV file and convert it into a dataframe (a pandas data structure).
Then, certain columns are assigned to the variables 'X' and 'y' respectively, and the MinMaxScaler is used to scale the feature values into the range 0 to 1.
An empty list is created, and the training data is cross-validated 10 times, as specified by the value of 'n_splits' in the 'KFold' function.
Using this method, training and testing is done by splitting up the training dataset 10 times. For each split, the indices of the training and test data are printed on the screen.
The 'score' method returns the r-squared value for each split; this value helps understand how closely the data has been fit to the line. The mean of these r-squared values is printed at the end.
Instead of using the 'mean' function, other functions like 'cross_val_score' or 'cross_val_predict' can also be used.

Using 'cross_val_predict'
Here, we are importing cross_val_predict, present in the scikit-learn package:

from sklearn.model_selection import cross_val_predict
print("The cross validation prediction is ")
cross_val_predict(best_svr, X, y, cv=10)

Output:

The cross validation prediction is
array([25.36718928, 23.06977613, 25.868393  , 26.4326278 , 25.17432617,
       25.24206729, 21.18313164, 17.3573978 , 12.07022251, 18.5012095 ,
       16.63900232, 20.69694063, 19.29837052, 23.51331985, 22.37909146,
       23.39547762, 24.4107798 , 19.83066293, 21.54450501, 21.78701492,
       16.23568134, 20.3075875 , 17.49385015, 16.87740936, 18.90126376, ...])

The above output displays the cross validation predictions in an array.
Code explanation
The 'cross_val_predict' function present in the scikit-learn package is imported, the previously generated data is considered, and this function is called on that same data to see the cross-validation predictions.
Using 'cross_val_score'
Here, we are importing cross_val_score, present in the scikit-learn package:

from sklearn.model_selection import cross_val_score
print("The cross validation score is ")
cross_val_score(best_svr, X, y, cv=10)

The call returns an array of 10 r-squared scores, one for each of the 10 folds.
Code explanation
The 'cross_val_score' function present in the scikit-learn package is imported, the previously generated data is considered, and this function is called on that same data to see the cross-validation score for every fold.

Variations of cross-validation
There are variations of cross-validation, and they are used in the relevant situations. The most commonly used one is 'k'-fold cross-validation. Others are listed below:
Leave-one-out cross validation
Leave-'p'-out cross validation
'k' fold cross validation
Holdout method

Conclusion
Hence, in this post, we saw how 'k' fold cross validation eliminates the need to procure a separate validation dataset, and how a part of the training dataset itself can be used as a validation set, thereby not affecting the separate testing dataset. We also saw the concepts of underfitting and overfitting, how important it is for the model to fit just right, and the concept of a hyperparameter. We saw how 'k' fold cross-validation is implemented in Python using scikit-learn and how it affects the performance of the model.

Getting Started With Machine Learning With Python: Step by Step Guide

Takeaways from the article This article helps you understand the cases wherein Machine Learning can be used and where it is relevant (and where it is not). It discusses the basic steps involved in a machine learning problem, along with code in Python, and it shows how the data involved in a Machine Learning problem can be visualized using certain Python packages.

Machine Learning has remained a hot topic for many years. Many want to know how to make sense of it, and where it can actually be used. It is not a universal solution to all the challenging problems out there in the universe. It can only be used when certain conditions are satisfied; only then does a problem qualify to be solved using a Machine Learning algorithm. In general, Python is the most preferred language to work with algorithms that involve Machine Learning.

Introduction to Machine Learning
Machine Learning, also known as ML for short, is a sub-field of Artificial Intelligence (AI) used to achieve specific goals. ML is the art of understanding or designing an algorithm that can be used to process large or small amounts of data. This algorithm doesn't explicitly define or set the rules for the machine to learn from the data; the machine learns from the data on its own. There are no 'if' or 'else' statements to guide the machine. This is very much like how humans learn from their experiences in day-to-day life: how a child learns to ride a bike, or learns to read letters, then words, then sentences and conversations.

Getting started with Machine Learning in Python
Python is widely used to implement machine learning algorithms, since it is open-source, extremely popular and has gained immense support from the community. In addition to this, there are loads of packages in Python that support the use of machine learning algorithms across a variety of Python versions and applications. These algorithms can be implemented in Python by calling simple functions; the functions are placed inside classes, and the classes in turn are packaged into modules. The 'scikit-learn' package for Python is one of the most popular ones and has most of the machine learning algorithms pre-implemented and housed inside sub-packages. To implement an algorithm, the package (or a specific class from it) can be imported, bound to a variable or class object, and accessed using the dot operator; a minimal sketch of this pattern is shown below the note at the end of this section.
In general, to begin implementing any machine learning algorithm, the following steps can serve as a blueprint:
Define your problem, and confirm that it can be solved using machine learning (so that it is not a trivial "set of rules" related problem).
Prepare the data: In this step, the data needed for the model is collected from various resources. Another way is to generate data using the innumerable functions that are present in Python. In either case, the data has to be cleaned, structured and analysed, and the outliers have to be identified. The data also has to be pre-processed so that it is easy for the algorithm to build a model based on it: certain irrelevant columns may be removed, and missing data should be handled.
Train the model on the data and tune hyperparameters so as to get better prediction accuracy.
Note: It is understood that users have Python 3.5 or a higher stable version installed on their workstations before beginning to execute the code in the upcoming sections. Other packages can be installed as and when required.
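As a minimal sketch of that import-and-call pattern, assuming scikit-learn is installed; the class used here (KNeighborsClassifier) and the toy numbers are illustrative only.

# Import a specific class from the scikit-learn package
from sklearn.neighbors import KNeighborsClassifier

# Bind the class to an object, then access its methods with the dot operator
model = KNeighborsClassifier(n_neighbors=3)

# Tiny illustrative dataset: two features per sample, two classes
X = [[1, 2], [2, 1], [8, 9], [9, 8], [1, 1], [9, 9]]
y = [0, 0, 1, 1, 0, 1]

model.fit(X, y)                          # learn from the data
print(model.predict([[2, 2], [8, 8]]))   # predict classes for new points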
Where can Machine Learning be used?
The simplest rule is this: when no prediction or complex data insight is needed, Machine Learning need not be used. Machine Learning algorithms are built by humans to help understand data better, make predictions, and so on. When we try to solve a problem, there are certain principles that we hold as a foundation (when dealing with physics: gravity, Newton's laws), but algorithms don't; they are stochastic (random) in nature. Not all problems that have a large amount of data are suited to Machine Learning algorithms. It is important to recognize the deterministic nature of some problems, and to avoid solving such problems using Machine Learning.

Machine Learning in Python
Let us jump into a simple problem of linear regression using Machine Learning. Linear regression is a simple algorithm that predicts the value of a variable based on certain other values. There are many variations of Linear Regression, including multi-variate regression. Before jumping into the algorithm, let us understand what linear regression means: 'linear' basically means a straight line, and 'regression' refers to predicting the value of one (dependent) variable from other (independent) variables. There are various categories of machine learning algorithms (supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning), and Linear Regression is just the beginning.

Why should Machine Learning be used?
Certain tasks need intricate detailing, and patterns might not be fully unveiled if manual or simple methods are used to extract them. Machine learning, on the other hand, is able to extract important, hidden patterns, and works well even when the amount of data increases exponentially. It also becomes easy to improve pattern recognition, deliver results in a timely manner, and get deeper and better insights into the data in hand. The results computed using a Machine Learning algorithm can be more accurate than those from traditional methods, and the models built can serve as a foundation for other data as well.

There are different classifications of machine learning algorithms. The 4 basic classifications are:
Supervised learning algorithms
Semi-supervised learning algorithms
Unsupervised learning algorithms
Reinforcement learning algorithms
Machine learning algorithms can also be classified based on how they learn (on the fly or incrementally) into 2 types:
Online learning
Batch learning
Machine learning algorithms can also be classified based on how they detect patterns, i.e. whether they build a model of the data or compare new data values with previously seen data values:
Model-based learning
Instance-based learning

Supervised Learning
Most popular
Easy to understand
Easier to implement
Gives decent results
Expensive, since human intervention is required
Supervised learning involves human supervision. In practice, supervision is present in the form of labelled features, a feedback loop on the data (insights on whether the machine predicted correctly, and if not, what the correct prediction should have been), and so on. Once the algorithm is trained on such data, it can predict good outputs with high accuracy for never-before-seen inputs. Applications of supervised learning:
Spam classification: Classifying emails as spam or important (a small sketch of this follows below).
Face recognition: Detecting faces and mapping them to a specific face in a database of faces.
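Here is a minimal sketch of the spam classification example, assuming scikit-learn is installed; the tiny handful of made-up messages and the choice of CountVectorizer with Multinomial Naive Bayes are illustrative only.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# A tiny, made-up set of labelled emails (1 = spam, 0 = important)
emails = [
    "win a free prize now",
    "lowest price guaranteed click here",
    "meeting agenda for tomorrow",
    "please review the attached report",
]
labels = [1, 1, 0, 0]

# Turn the raw text into word-count features
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# Train a simple supervised classifier on the labelled examples
classifier = MultinomialNB()
classifier.fit(X, labels)

# Predict the label of a never-before-seen email
new_email = vectorizer.transform(["free prize waiting for you"])
print(classifier.predict(new_email))   # 1 would indicate spam here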
Supervised algorithms can further be classified into two types:
Classification algorithms: These classify the given data into one of the given classes or groups of data. This basically deals with grouping/mapping data into specific classes.
Regression algorithms: These deal with fitting the data to a given model and predicting continuous or discrete values.

Semi-supervised Learning
Sits in between the supervised and unsupervised learning algorithms.
Created to bridge the gap between dealing with fully labelled and fully unlabelled data.
Input is a combination of unlabelled (more) and labelled (less) data.
Applications of semi-supervised learning algorithms:
Speech analysis, sentiment analysis
Content classification

Unsupervised Learning
No data labelling
No human intervention
May not be very accurate
Can't be applied to a broad variety of situations
The algorithm has to figure out how and what to learn from the data
Similar to real-world unstructured data
Applications of unsupervised learning:
Clustering
Anomaly detection
Unsupervised learning algorithms can be classified into two categories:
Clustering algorithms
Association algorithms

Reinforcement Learning
It is a 'punish and reward' mechanism that learns from its surroundings and from experience. An agent decides the next relevant step to arrive at the desired result. If the algorithm learns correctly, it is rewarded, indicating that it is on the right path; if the algorithm makes a mistake, it is punished, to indicate the mistake so it can learn from it. Supervised learning is different from reinforcement learning, since the former has a known correct value to compare against, whereas the latter has to decide the next action, take it, bear the result and learn from it.
Applications of reinforcement learning:
Robotics in automation
Machine learning and data processing

Other types of learning algorithms
Online learning
Batch learning
Based on how they generalize, there are also two further categories: model-based learning and instance-based learning.

Online Learning
Also known as incremental or out-of-core learning, the assumption here is that the learning environment changes constantly. Machine learning models are trained consistently and constantly on new data while they are being used to predict outputs: the model is trained on new data in real time. Whenever the model sees a new example, it quickly has to learn from it and adapt to it. This way, even the newly learnt example becomes part of the trained model and contributes to the predictions/output. A small sketch of this incremental style appears at the end of this section.

Batch Learning
This is also known as learning from data in groups. Data is grouped/classified into different batches, and these batches are used to extract different patterns, since every batch can be considerably different from the others. These patterns are learned by the model over time.

Model-based learning
The specifications associated with a problem in a domain are converted into a model. When this model sees new data, it detects patterns in it, and these patterns are used to make predictions on the newly seen data.
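Before moving on to instance-based learning, here is a minimal sketch of the online (incremental) learning idea described above, assuming scikit-learn's SGDRegressor, whose partial_fit method updates an existing model with new data instead of retraining from scratch; the streamed batches here are synthetic and purely illustrative.

import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.RandomState(0)
model = SGDRegressor()

# Simulate data arriving in small batches over time
for step in range(5):
    X_batch = rng.rand(20, 2)
    y_batch = 3 * X_batch[:, 0] + 2 * X_batch[:, 1] + 0.1 * rng.randn(20)
    # partial_fit updates the model incrementally with just this batch
    model.partial_fit(X_batch, y_batch)

# The model has learnt from every batch seen so far
print(model.predict(np.array([[0.5, 0.5]])))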
Instance-based learning
It is the simplest form of classification and regression algorithms. These algorithms either group the data into different classes (classification) or give continuous or discrete values as output (linear or logistic regression). The classification or regression of a new query is based on how similar or different it is with respect to the values already present in the data.

Linear Regression
In this algorithm, we deal with problems that have two different variables in hand: one is an independent variable, and the other is a dependent variable. We will take a basic problem of finding the price of a house when its area is given. Assume that we have the below dataset:

Area of the house (independent value)    Price of the house (dependent value)
500 sq m                                 356
1000 sq m                                578
1500 sq m                                890
2000 sq m                                1300
2500 sq m                                1800
3000 sq m                                ?

When the above data is given and the price of a house is asked to be found (see the last row), given the area of the house, simple linear regression (which gives a decent amount of accuracy) can be used. When the data is plotted on a graph, it yields an almost straight line, which means the dependent value depends on the independent value, i.e. the area of the house matters when the price of the house is being fixed. A short sketch that fits a line to this exact table appears after the steps below.

The basic steps involved in a machine learning problem:
Identify the problem: See if it qualifies to be solved using a Machine Learning algorithm.
Gather the data: The data required can either be collected from a single source or various sources, or it could be generated randomly (if it is for a specific purpose) using certain formulas and methods.
Data cleaning: The data gathered may not be clean or structured; make sure it is cleaned and in a structured, or at least semi-structured, format.
Package installation: Install the packages that are required to work with the data.
Data loading: Load the data into the Python environment using any IDE (usually, Spyder is preferred). This is done so that the machine learning algorithm can access the data and perform the operations.
Data cleaning: Data can be cleaned after it has been placed in the Python environment using certain packages and methods, or it can be cleaned beforehand (manually or by applying some logic).
Summarize the data: Understand the values we are looking at and perform some operations on them: get the type of the values, the mean, median, variance and standard deviation, which are insights into the data. This can be done easily by importing packages that have these functions.
Data training: In this step, the input dataset is trained on by passing it as a parameter to the respective algorithm. This is done so that the algorithm can predict the output for never-seen data, also known as the testing dataset.
Linear Regression application: Apply the Linear Regression algorithm to this data.
Data visualization: The data that has interacted with the linear regression algorithm is visualized using Python packages.
Prediction: The predictions are made with the help of the trained data, and are then displayed on the console.
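Before the full example in the next section, here is a minimal sketch that fits a line to the small house-price table above, assuming scikit-learn is installed; the use of LinearRegression here is illustrative only.

import numpy as np
from sklearn.linear_model import LinearRegression

# The table from above: area (independent) and price (dependent)
area = np.array([[500], [1000], [1500], [2000], [2500]])
price = np.array([356, 578, 890, 1300, 1800])

model = LinearRegression()
model.fit(area, price)   # learn the straight line: price = slope * area + intercept

# Predict the missing price for a house of 3000 sq m
print(model.predict(np.array([[3000]])))

With these five points, ordinary least squares works out to a slope of about 0.72 and an intercept of about -98, so the printed prediction for 3000 sq m should come out at approximately 2068.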
Code for Linear Regression using Python

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.linear_model import LinearRegression

# A random dataset is generated
np.random.seed(0)
x_indep = np.random.rand(100, 1)                              # independent variable
y_dep = 5.89 + (2.45) * x_indep + np.random.rand(100, 1)      # dependent variable

# The model is initialized using LinearRegression, present in the scikit-learn package
model_of_regression = LinearRegression()

# The data is fit on the model, with the help of training
model_of_regression.fit(x_indep, y_dep)

# The output is predicted
predicted_y_val = model_of_regression.predict(x_indep)

# The model built is evaluated using the root mean squared error and r-squared metrics
rmse = np.sqrt(mean_squared_error(y_dep, predicted_y_val))
r2 = r2_score(y_dep, predicted_y_val)

print("The value of slope is: ", model_of_regression.coef_)
print("The intercept value is: ", model_of_regression.intercept_)
print("The Root Mean Squared Error value (RMSE) is: ", rmse)
print("The R squared value is: ", r2)

# The data is visualized using the matplotlib library
plt.scatter(x_indep, y_dep, s=8)
plt.xlabel('X-axis')
plt.ylabel('Y-axis')

# The predicted values are plotted on the graph and displayed on the screen
plt.plot(x_indep, predicted_y_val, color='r')
plt.show()

Output: The slope, intercept and error values are printed, and a scatter plot of the generated data together with the fitted regression line is displayed on the screen.

Code review - explanation of every step
The required packages are imported using the 'import' keyword. Make sure that the 'scikit-learn' package is installed before working on this code.
Instead of using pre-cooked data, we are generating data here using NumPy's 'random' functions. A seed is defined, and a formula that adds random noise to a straight line is used to generate the data.
The 'LinearRegression' class, present in the 'scikit-learn' package, is instantiated to create a model, and one of its methods, namely 'fit', is called by passing the independent and the dependent values.
The 'predict' method of LinearRegression is used to predict the output value for a given independent value.
After the model is built with the data, it is important to see how it has fared. Hence, a metric named RMSE (Root Mean Squared Error) is used to measure the difference between the values that had to be predicted and the values that were actually predicted.
Next, the data is visualized on the screen using a package named 'matplotlib'.

Conclusion
In all, Machine Learning is a game changer when it comes to identifying its use cases and applying the right kind of algorithm in the right place, with the right amount of data and the right computational resources and power. Linear Regression is just one simple algorithm, and only the beginning of what Machine Learning can do. Usually, the Python language is used to implement Machine Learning algorithms, but other newer languages can also be used.