
A Peek Into the World of Data Science


Touted by Harvard Business Review back in 2012 as the sexiest job of the 21st century, data science has since received a great deal of attention around the world, cutting across industries and fields. Many people wonder what all the fuss is about. At the same time, others have ventured into the field and found their calling.

Eight years later, data science and data scientists continue to garner headlines and conversations. With the current pandemic especially, data science is suddenly on everyone’s mind. But what does data science encompass?

With the advent of modern technology, organizations collect terabytes upon terabytes of data daily. From tracking the websites we visit - how long, how often - to what we purchase and where we go, our digital footprint is an immense source of data for many businesses. Between our laptops, smartphones and tablets, almost everything we do translates into some form of data.

On its own, this raw data is of no use to anyone. Data science is the process that repackages the data to generate insights and answer business questions for the organization. Using domain understanding, programming and analytical skills coupled with business sense and know-how, existing data is converted into actionable insights that drive business growth. The processed data is what is worth its weight in gold. By using data science, we can uncover existing insights and behavioural patterns, or even predict future trends.

Here is where our highly-sought-after data scientists come in.  

A data scientist plays a multifaceted role in an organization. They need a wide range of knowledge, marrying a plethora of methods, processes and algorithms from computer science, statistics and mathematics to process data into a format that answers the critical business questions meaningfully, with actionable insights for the organization. With these actionable insights, the company can make the plans that are most profitable for driving its business goals.

To churn out the insights and knowledge that everyone needs these days, data science has become more of a craft than a science, despite its name. Data scientists need to be trained in mathematics, yet have the creative and business sense to find the answers they are looking for in the giant haystack of raw data. They are the ones responsible for helping to shape future business plans and goals.

It sounds like a mighty hefty job, doesn’t it? It is also why it is one of the most sought-after jobs these days. The field is rapidly evolving, and keeping up with the latest developments takes a lot of dedication and time in order to produce actionable insights that organizations can use.

The only constant through this realm of change is the data science project lifecycle. We briefly discuss its critical stages below. The natural tendency is to envision it as a neatly circular process, but in practice there is a lot of working back and forth within some phases to ensure that the project runs smoothly.

Stage One: Business Understanding  

As a child, were you one of those children who always asked why? Even when the adults gave you an answer, did you follow up with another “why”? Those children have probably grown up to be data scientists, because it seems their favourite question is: why?

By asking why, they get to know the problem that needs to be solved, and the critical question emerges. Once there is a clear understanding of the business problem and question, the work can begin. Data scientists want to ensure that the insights that come from this question are supported by data and will allow the business to achieve the desired results. The foundation stone of any data science project is therefore understanding the business.

Stage Two: Data Understanding  

Once the problem and question have been confirmed, you need to lay out the objectives of the project by determining the variables that need to be predicted. You must know what you need from the data and what the data should address. You must then collate all the information and data, which can be surprisingly difficult. An agreement over the sources and the required data characteristics needs to be reached before moving forward.

Through this process, you need an efficient and insightful understanding of how the data can and will be used for the project. This operational management of the data is vital, as the data sourced at this stage will define the project and how effective the solutions will be in the end.
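In practice, this understanding often starts with a quick exploratory pass over the agreed data source. Below is a minimal sketch of what that first pass might look like in Python with pandas; the file name, column names and target variable are hypothetical, made up purely for illustration.

```python
import pandas as pd

# Load the agreed-upon data source (hypothetical file and columns).
df = pd.read_csv("customer_transactions.csv")

# Get a first feel for the data: size, column types and missing values.
print(df.shape)          # number of rows and columns
print(df.dtypes)         # data type of each column
print(df.isna().sum())   # count of missing values per column

# Summary statistics help confirm whether the data can support the
# business question, e.g. predicting monthly spend per customer.
print(df.describe(include="all"))

# Check that the variable we intend to predict actually exists and
# has sensible values before committing to the project objectives.
print(df["monthly_spend"].describe())
```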

Stage Three: Data Preparation  

It is often said that the bulk of a data scientist’s time is spent preparing data for use. A 2016 report from CrowdFlower pegged the share of time spent on cleaning and organizing data at 60%. That is more than half their day!

Since data comes in various forms and from a multitude of sources, there is no standardization or consistency across it. Raw data needs to be managed and prepared - incomplete values and attributes fixed, and conflicting values in the data resolved. This process requires human intervention, as you must be able to discern which data values are required to reach your end goal. If the data is not prepared in line with the business understanding, the final result might not be suitable to address the issue.
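To make the idea concrete, here is a minimal, hypothetical pandas sketch of this kind of preparation. The column names and cleaning rules are assumptions for the example; in a real project they would follow from the business and data understanding built in the earlier stages.

```python
import pandas as pd

df = pd.read_csv("customer_transactions.csv")  # hypothetical raw extract

# Remove exact duplicate records that can arrive from multiple sources.
df = df.drop_duplicates()

# Standardize inconsistent text values (e.g. "SG", "sg ", "Singapore").
df["country"] = df["country"].str.strip().str.upper()

# Fix incomplete values: fill missing ages with the median,
# and drop rows where the target variable itself is missing.
df["age"] = df["age"].fillna(df["age"].median())
df = df.dropna(subset=["monthly_spend"])

# Resolve conflicting values: in this illustration, negative
# transaction amounts are treated as data-entry errors and removed.
df = df[df["amount"] >= 0]

df.to_csv("customer_transactions_clean.csv", index=False)
```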

Stage Four: Modeling 

Once the tedious process of preparation is over, it is time to produce the results required for this project lifecycle. Various techniques can be used, ranging from decision-tree building to neural network generation. You must decide which technique is best suited to the question that needs to be answered. If required, multiple modeling techniques can be used, with each task performed individually. Generally, modeling techniques are applied more than once per process, and more than one technique will be used per project.

With each technique, parameters must be set based on specific criteria. You, as the data scientist, must apply your knowledge to judge the success of the modeling and rank the models based on the results, according to pre-set criteria.
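As a rough sketch of that workflow, the example below fits two of the techniques mentioned above - a decision tree and a small neural network - on the same synthetic data with scikit-learn, scores each with cross-validation, and ranks them. It illustrates the mechanics only; in a real project the candidate techniques, parameters and ranking criteria come from the business question.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the prepared project data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Candidate techniques, each with parameters set per our chosen criteria.
candidates = {
    "decision_tree": DecisionTreeClassifier(max_depth=5, random_state=42),
    "neural_network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                    random_state=42),
}

# Apply each technique, score it, and rank the models by mean accuracy.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.3f}")
```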

Stage Five: Evaluation 

Once the results are churned out, we need to refer back to the business question raised in Stage One and decide whether the results answer it, and whether the model and data meet the objectives the data science project set out to address.

The evaluation can also unveil other findings that are not related to the business question but are good pointers for future direction, or to challenges that the organization might face. These findings should be tabled for discussion and used for new data science projects.
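One simple, hypothetical way to make the Stage One check concrete is to hold back part of the data, score the chosen model on it, and compare the result against a success threshold agreed with the business up front - say, at least 85% accuracy. The sketch below assumes scikit-learn and synthetic data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the project data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

model = DecisionTreeClassifier(max_depth=5, random_state=42)
model.fit(X_train, y_train)

# Compare performance on unseen data against the pre-agreed target.
BUSINESS_TARGET = 0.85  # assumed threshold agreed in Stage One
accuracy = accuracy_score(y_test, model.predict(X_test))

print(f"Held-out accuracy: {accuracy:.3f}")
print("Meets the business objective" if accuracy >= BUSINESS_TARGET
      else "Revisit the data or the model before deployment")
```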

Final Stage: Deployment  

This is almost the finishing line!  

Now, with the evaluated results, the team needs to sit down and have an in-depth discussion on what the data shows and what the business needs to do based on it. The project team should come up with a suitable deployment plan to address the issue. The deployment will still need to be monitored and assessed along the way to ensure that the project is a successful one, backed by data.

The assessment will normally restart the project lifecycle, bringing you full circle.

Data is everywhere  

In this day and age, we are surrounded by a multitude of data science applications, cutting across all industries. We will focus on five industries where data science is making waves.

Banking & Finance  

Financial institutions were among the earliest adopters of data analytics, and they are all about data! From fraud and anomaly detection in banking transactions to risk analytics and algorithmic trading, data plays a key role at every level of a financial institution.

Risk analytics is one of the key areas where data science is used, as financial institutions depend on it to make strategic decisions about the financial health of the business. They need to assess each risk in order to manage and optimize their costs.
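As a toy illustration of the anomaly-detection side of this work, the sketch below flags unusual transactions with scikit-learn's IsolationForest. The transaction data and the assumed share of anomalies are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: [amount, hour_of_day].
rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(50, 15, 500), rng.integers(8, 22, 500)])
odd = np.array([[5000, 3], [4200, 4]])          # large late-night transfers
transactions = np.vstack([normal, odd])

# Fit an isolation forest; roughly 0.5% of transactions assumed anomalous.
detector = IsolationForest(contamination=0.005, random_state=0)
labels = detector.fit_predict(transactions)      # -1 marks an anomaly

flagged = transactions[labels == -1]
print("Transactions flagged for review:")
print(flagged)
```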

Logistics & Transportation  

The world of logistics is a complex one. On a production line, raw materials sometimes come from all over the world to create a single product. A delay in any of the parts will affect the production line, and the output of stock will suffer drastically. If logistical delays can be predicted, the company can switch quickly to an alternative to ensure there is no gap in the supply chain and the production line keeps functioning at optimum efficiency.

Healthcare  

2020 has been an interesting year. It has been the battle of a lifetime for many of us. Months have passed, and yet the virus still rages on, wreaking havoc on lives and economies. Many countries have turned to data science applications to help in their fight against COVID-19.

With so much data generated daily, people and governments need to know various things such as:  

  • epidemiological clusters, so that people can be quarantined to stop the spread of the virus; 
  • symptoms tracked across thousands of patients, to understand how the virus transmits and mutates; and 
  • vaccines and solutions to mitigate transmission.

Manufacturing  

In this field, millions can be on the line each day, as there are so many moving parts that can cause delays, production issues and more. Data science is primarily used to boost production rates, reduce costs (workforce or energy), predict maintenance needs and reduce risks on the production floor.

This allows the manufacturer to make plans to ensure that the production line is always operating at the optimum level, providing the best output at any given time.  

Retail (Brick & Mortar, Online)  

Have you ever wondered why some products in a shop are placed next to each other, or how discounts on items work? All of that is based on data science.

Retailers track people’s shopping routes, purchases and basket combinations to work out details like where products should be placed, or what should go on sale and when, to drive up sales for an item. And that is just for in-store purchases.

Online data tracks what you are buying and suggests what you might want to buy next based on your past purchase history, or even tells you what you might want to add to your cart. That’s how your online supermarket suggests you buy bread if you already have a jar of peanut butter in your cart.
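Under the hood, one simple version of this is basket analysis: counting which items are bought together and recommending the strongest pairings. The toy sketch below does this with plain Python; the baskets are made up for the example.

```python
from collections import Counter
from itertools import combinations

# Hypothetical shopping baskets.
baskets = [
    {"peanut butter", "bread", "milk"},
    {"peanut butter", "bread"},
    {"bread", "eggs"},
    {"peanut butter", "jam", "bread"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def recommend(item, counts, top_n=1):
    """Suggest the items most frequently bought together with `item`."""
    related = Counter()
    for (a, b), n in counts.items():
        if item == a:
            related[b] += n
        elif item == b:
            related[a] += n
    return [other for other, _ in related.most_common(top_n)]

print(recommend("peanut butter", pair_counts))  # likely ['bread']
```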

As a data scientist, you must always remember that the power is in the data. You need to understand how the data can be used to find the desired results for your organization. The right questions must be asked - and that has become more of an art than a science.

Image Source: Data Science Life Cycle


Felicia Norfor

Author

With over 15 years of experience in marketing (digital and physical), communications and content creation, she has lately been dipping her toes into the data world. Working alongside analytics people has been an eye-opening experience, making her realise how essential data teams are and wonder how she lived without them for so long.

She currently provides consultancy services for marketing, communications and engagement events, while weaving data extensively into her projects. Otherwise, she can be found knee-deep in learning from the data peeps at Algoritmo Lab, a data science consultancy start-up with offices in Singapore and India.


Suggested Blogs

Activation Functions for Deep Neural Networks


Top Job Roles With Their Salary Data in the World of Data Science for 2020–2021


Role of Statistics in Data Science

Takeaways from this article In this article, we understand why data is important, and talk about the importance of statistics in data analysis and data science. We also understand some basic statistics concepts and terminologies. We see how statistics and machine learning work in sync to give deep insights into data.  We understand the fundamentals behind Bayesian thinking and how Bayesian theorem works. Introduction Data plays a huge role in today’s tech world. All technologies are data-driven, and humongous amounts of data are produced on a daily basis. A data scientist is a professional who is able to analyse data sources, clean and process the data, understand why and how such data has been generated, take insights from it, and make changes such that they profit the organization. These days, everything revolves around data.  Data Cleaning: It deals with gathering the data and structuring it so that it becomes easy to pass this data as input to any machine learning algorithm. This way, redundant, irrelevant data and noise can also be eliminated.  Data Analysis: This deals with understanding more about the data, why the data has yielded certain results, and what can be done to improve it. It also helps calculate certain numerical values like mean, variance, the distributions, and the probability of a certain prediction.  How the basics of statistics will serve as a foundation to manipulate data in data scienceThe basics of statistics include terminologies, and methods of applying statistics in data science. In order to analyze the data, the important tool is statistics. The concepts involved in statistics help provide insights into the data to perform quantitative analysis on it. In addition to this, as a foundation, the basics and working of linear regression and classification algorithms must also be known to a data science aspirant.  Terminologies associated with statistics Population: It is an entire pool of data from where a statistical sample is extracted. It can be visualized as a complete data set of items that are similar in nature.  Sample: It is a subset of the population, i.e. it is an integral part of the population that has been collected for analysis.  Variable: A value whose characteristics such as quantity can be measured, it can also be addressed as a data point, or a data item.  Distribution: The sample data that is spread over a specific range of values.  Parameter: It is a value that is used to describe the attributes of a complete data set (also known as ‘population’). Example: Average, Percentage  Quantitative analysis: It deals with specific characteristics of data- summarizing some part of data, such as its mean, variance, and so on.  Qualitative analysis: This deals with generic information about the type of data, and how clean or structured it is.  How does analyzing data using statistics help gain deep insights into data? Statistics serve as a foundation while dealing with data and its analysis in data science. There are certain core concepts and basics which need to be thoroughly understood before jumping into advanced algorithms.  Not everyone understand the performance metrics of machine learning algorithms like f-score, recall, precision, accuracy, root mean squared error, and so on. Instead, visual representation of the data and the performance of the algorithm on the data serves as a good metric for the layperson to understand the same.  
How does analyzing data using statistics help gain deep insights into data?

Statistics serves as a foundation for dealing with data and its analysis in data science. There are certain core concepts and basics which need to be thoroughly understood before jumping into advanced algorithms. Not everyone understands the performance metrics of machine learning algorithms, such as F-score, recall, precision, accuracy, or root mean squared error; for a layperson, a visual representation of the data and of how an algorithm performs on it is often a far more accessible summary. Visual representation also helps identify outliers and broad patterns, and summary metrics such as the mean, median and variance help in understanding the middle of the data and how outliers affect the rest of it.

Statistical Data Analysis

Statistical data analysis involves the use of statistical tools, which requires some knowledge of statistics. Software can help with this, but without understanding why something is happening it is impossible to get considerable work done in statistics and data science.

Statistics deals with data variables that are either univariate or multivariate. Univariate data, as the name suggests, consists of a single value per observation, whereas multivariate data consists of multiple values. Discriminant analysis and factor analysis can be performed on multivariate data, while univariate analyses such as the Z-test and F-test can be performed on univariate data.

Data associated with statistics comes in several types. Some of them are discussed below.

Categorical data represents characteristics of people, such as marital status, gender, or the food they like. It is also known as qualitative or 'yes/no' data. It may be coded with numerical values like '1' or '2', where each number stands for one category or another; these numbers carry no mathematical meaning and cannot meaningfully be added or compared as quantities.

Continuous data can take any value in a range: it can be measured but not counted. Predictions from linear regression are continuous in nature, and a continuous distribution is described by a probability density function. Discrete values, on the other hand, can be counted and are discontinuous. Predictions from logistic regression are discrete in nature; the density concept does not apply here, and the distribution is described by a probability mass function.

The Best Way to Learn Statistics for Data Science

The best way to learn anything is by implementing it, working with it, making mistakes and learning from them. It is important to understand the concepts first, by going through standard books or well-known websites, before implementing them. Before jumping into data science, core statistics concepts such as regression, maximum likelihood, distributions, priors, posteriors, conditional probability, Bayes' theorem and the basics of machine learning have to be understood clearly.

Core statistics concepts

Descriptive statistics: As the name suggests, descriptive statistics uses graphs, plots and summary numbers to give more information about every aspect of the data. It organizes the data into a structure and helps highlight its important attributes.

Inferential statistics: Deals with drawing inferences or conclusions about the population (the entire data set) from a sample drawn from it, based on relationships identified between data points in the sample. It helps generalize those relationships to the entire dataset, so it is important that the sample is relevant and represents the population accurately.

Regression: The term 'regression', used in both statistics and machine learning, describes how data can be fit to a line and what each point's relation to that line tells us. In machine learning terms, regression is one of the tasks a model learns to solve without being explicitly programmed: a line is fit to a given set of data points and then extrapolated to make predictions for new inputs, as in the sketch below.
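The following is a minimal line-fitting sketch of that idea. It assumes NumPy, and the x and y values are made up for demonstration: a straight line is fit by least squares and then extrapolated to a new input.

# A minimal line-fitting sketch; assumes NumPy, and the data points are made up.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])

# Least-squares fit of a straight line y = slope * x + intercept
slope, intercept = np.polyfit(x, y, deg=1)

# Extrapolate the fitted line to a new, unseen input
x_new = 8.0
y_pred = slope * x_new + intercept
print(f"y = {slope:.2f}x + {intercept:.2f}; prediction at x={x_new}: {y_pred:.2f}")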
Maximum likelihood: A method for finding the values of the parameters of a specific model. The parameter values are chosen so that the likelihood of the observed data is as high as possible, i.e. so that the difference between the actual and predicted values is small, reducing the error and increasing the accuracy of the predictions.

Note: This concept is often used with logistic regression, where the output is 0 or 1 (yes or no); the maximum likelihood estimate tells us how likely each data point is to belong to class 0 or class 1.

Bayesian thinking

Bayesian thinking uses probability to model the sampling process and to quantify the uncertainty associated with the data. The uncertainty associated with the data before it is collected and analysed is called the prior probability; the uncertainty that remains after the data has been collected is called the posterior probability. Machine learning algorithms are usually focused on giving the best predictions as output with minimal error, along with exact probabilities of specific events occurring.

Bayes' theorem is a way of calculating the probability of a hypothesis (a situation which might not have occurred in reality) based on our previous experience and the knowledge we have gained from it. It is a basic concept that needs to be known, and it can be stated as follows:

P(hypo | data) = (P(data | hypo) * P(hypo)) / P(data)

In the above equation:

P(hypo | data) is the probability of the hypothesis 'hypo' given the data 'data', also known as the posterior probability.

P(data | hypo) is the probability of the data 'data' when the hypothesis 'hypo' is known to be true.

P(hypo) is the probability of the hypothesis 'hypo' being true irrespective of the data, also known as the prior probability of 'hypo'.

P(data) is the probability of the data, irrespective of the hypothesis.

The idea is to compute the posterior probability from the other quantities. The posterior probability is computed for a variety of different hypotheses, and the hypothesis with the highest value is selected. This is the maximum probable hypothesis, also known as the maximum a posteriori (MAP) hypothesis:

MAP(hypo) = max(P(hypo | data))

If P(hypo | data) is replaced with the expression above, the equation becomes:

MAP(hypo) = max((P(data | hypo) * P(hypo)) / P(data))

P(data) is a normalizing term that turns the result into a proper probability. Since it is the same constant for every hypothesis, it can safely be ignored when we only need to compare hypotheses, as the small worked example below shows.
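Here is a small worked example of Bayes' theorem; the probabilities below are invented purely for illustration.

# A worked Bayes' theorem example with invented probabilities.
p_hypo = 0.3             # prior probability P(hypo)
p_data_given_hypo = 0.8  # likelihood P(data | hypo)
p_data_given_not = 0.2   # likelihood P(data | not hypo)

# P(data) via the law of total probability (the normalizing term)
p_data = p_data_given_hypo * p_hypo + p_data_given_not * (1 - p_hypo)

# Posterior probability P(hypo | data)
p_hypo_given_data = p_data_given_hypo * p_hypo / p_data
print(f"P(hypo | data) = {p_hypo_given_data:.3f}")  # roughly 0.632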
Naïve Bayes classifier

The Naïve Bayes classifier is an algorithm that can be used for binary or multi-class classification problems. It is a simple algorithm in which the probability calculation for every hypothesis is simplified in order to make it more tractable: instead of computing the joint probability of every attribute, P(data1, data2, ..., datan | hypo), we assume that every attribute is independent of every other attribute once the output class is given. Under this assumption, the calculation becomes:

P(data1 | hypo) * P(data2 | hypo) * ... * P(datan | hypo)

so the attributes are treated as independent of one another. This classifier performs quite well on real-world data even when the assumption of independence does not strictly hold.

Once a Naïve Bayes classifier has learnt from the data, it stores a list of probabilities in a data structure: the class probabilities and the conditional probabilities. Training such a model is quick, since only the probability of every class and of every attribute value within each class needs to be determined; there is no optimization process and no coefficients are adjusted to give better predictions.

Class probability: The probability of each class present in the training dataset. It is calculated as the number of instances belonging to that class divided by the total number of instances:

Class probability P(class) = (number of instances in that class) / (total number of instances)

Conditional probability: The conditional probability of each input value given a class. It is calculated as the frequency of each attribute value among the instances of a given class, divided by the number of instances with that class label:

Conditional probability P(value | class) = (number of instances with that value and that class) / (number of instances with that class)

Beyond the concepts themselves, once readers understand the way a data scientist needs to think, they can focus on obtaining cleaner data and better insights, which leads to better analysis and, in turn, better results. A counting-based sketch of these two probabilities is shown below.
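Here is a toy counting sketch of those two quantities; the tiny weather-to-play dataset and all names are invented for illustration, and it only computes the stored probabilities rather than a full classifier.

# A toy sketch of the class and conditional probabilities a Naive Bayes
# classifier stores, computed by simple counting. The tiny (weather -> play)
# dataset is invented for illustration.
from collections import Counter, defaultdict

data = [("sunny", "yes"), ("sunny", "no"), ("rainy", "no"),
        ("rainy", "no"), ("overcast", "yes"), ("sunny", "yes")]

# Class probability: instances in each class / total instances
class_counts = Counter(label for _, label in data)
total = len(data)
class_prob = {label: count / total for label, count in class_counts.items()}

# Conditional probability: P(attribute value | class)
value_counts = defaultdict(Counter)
for value, label in data:
    value_counts[label][value] += 1
cond_prob = {label: {value: count / class_counts[label]
                     for value, count in counts.items()}
             for label, counts in value_counts.items()}

print(class_prob)                  # {'yes': 0.5, 'no': 0.5}
print(cond_prob["yes"]["sunny"])   # P(sunny | yes) = 2/3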
Introduction to Statistical Machine Learning

The methods used in statistics are important for preparing the data that is used as input to a machine learning model: outlier and anomaly detection, sampling, data scaling, variable encoding, dealing with missing values, and so on. Statistics is also essential for evaluating the model, i.e. seeing how well it performs on a test dataset or on data it has never seen before, for selecting the final, most appropriate model in a predictive modelling situation, and for reporting how well the chosen model has fared on various metrics.

Metrics used in Statistics

Much data can be fit to a common pattern known as the Gaussian, or normal, distribution. It is a bell-shaped curve, and the data it describes can be summarized with a few simple quantities, listed below.

Mean: The arithmetic average of the data, calculated by summing all the values and dividing by their count.

Mode: The data point that occurs most often, i.e. the value with the highest frequency in the dataset.

Median: A measure of central tendency of the dataset. It is the middle number, found by sorting all the data points and picking the middle-most element: if the number of data points is odd, the single middle value is taken; if it is even, the mean of the two middle values is taken.

Range: The difference between the largest and the smallest value in the dataset.

Quartile: As the name suggests, quartiles are the values that divide the sorted data points into four equal parts. Three quartiles are identified: the first quartile is the 25th percentile, the second quartile the 50th percentile and the third quartile the 75th percentile. Each quartile tells us what percentage of the data lies below it; for example, the 25th percentile means that 25 percent of the values are smaller than the remaining 75 percent. Quartiles help show how the data is distributed around the median (the 50th percentile, or second quartile). There are other distributions as well, and the right one depends on the type of data we have and the insights we need from it, but the Gaussian is considered one of the most fundamental.

Variance: The average of the squared differences between every value and the mean of the distribution.

Standard deviation: The square root of the variance; it measures how widely the data points are dispersed around the mean.

Conclusion

In this post, we looked at why and how statistics is important for understanding and working with data science. We saw a few statistical terms that are essential for making sense of the insights statistics gives a data scientist, as well as a few basic concepts that every data scientist needs to know before moving on to more advanced algorithms.