
How to Install Python on Mac

This article will walk you through the installation of Python 3 on macOS. You will learn the basics of configuring the environment to get started with Python.

Brief introduction to Python

Python is an interpreted programming language that is very popular these days due to its easy learning curve and simple syntax. Python is used in many applications, including programming the backend code of websites. It is also very popular for data analysis across industries ranging from medical and scientific research to retail, finance, entertainment, and media.

When writing a program in Python or any other language, people usually use an IDE, or Integrated Development Environment, which includes everything you need to write a program: a built-in text editor for writing code and a debugger for debugging it. PyCharm is a well-known IDE for writing Python programs.

Latest version of Python

The latest major version of Python is Python 3; at the time of writing, the latest release is Python 3.9.0.

Installation links

To download Python and its documentation for macOS, visit the official website and go to the Downloads section, where you can download the latest Python version for macOS.


Key terms (pip, virtual environment, PATH, etc.)

pip:

pip is a package manager that simplifies the installation of Python packages. To download the pip installation script, run the command below in the terminal:
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
If you install Python using brew, a package manager that simplifies the installation of software on macOS, it installs dependent packages such as pip along with Python 3.
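
Note that the curl command above only downloads the installer script; you then run it with Python 3 and can verify the result. A minimal sketch (the version numbers printed will depend on your installation):

python3 get-pip.py
pip3 --version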

virtual environment:

The purpose of virtual environments is to have a separate space where you can install packages specific to a certain project. For example, if you have several Flask or Django-based applications and not all of them use the same framework version, you can use a virtual environment so that each project has its own version.

To use a virtual environment, you need to be on a Python 3.x version. Let's look at how to create one. You do not need any extra library, as the venv module comes with the standard Python installation.

To create a new virtual environment, run the command below:

python3 -m venv demo

The -m flag expects a module name, venv in this case; Python searches sys.path for that module and executes it as the main module.

venv expects the name of the environment to create.

Now you should have a new environment called demo. Activate this virtual environment by running the command below:

source demo/bin/activate

After running this, the environment is activated and you can see its name in the terminal prompt. Another way to check whether the environment is active is to run which python. You will see the Python interpreter that this project environment uses, and its version is the same one you used to create the environment.
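
Putting the steps together, a minimal session looks like the sketch below; deactivate (not mentioned above) returns you to the system Python:

python3 -m venv demo
source demo/bin/activate
which python    # points inside demo/bin while the env is active
deactivate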

Getting and Installing MacPython:

On macOS, Python usually comes pre-installed. To check whether it is installed, open the terminal and run `python --version`; this also shows the default Python version, which is usually Python 2.x. However, Python 2.x has reached end of life, and with everyone moving to Python 3.x, we will go with the latest Python 3 installation.

Installation steps

To download Python, visit the official website and go to the Downloads section, where you can download the latest Python version for macOS.


This downloads a .pkg file. Click on that file to start the installation wizard. You can continue with the default settings; if you want to change the install location, you can do so, then continue and finish the installation with the rest of the defaults.

Once the installation is finished, it creates a Python 3.x directory in the Applications folder. Open the Applications folder and verify this.

Now you have Python 3.x installed.

To verify it from the terminal, check the version of Python using the `python --version` command. You will see that it still shows the old default Python version. If you instead use python3 explicitly, as in `python3 --version`, you will see the version of Python 3 that you just installed.


You can also install Python 3 on a Mac using brew, a package manager that simplifies the installation of software on macOS:

brew install python3

brew installs dependent packages such as pip along with Python 3.
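
To confirm the terminal is picking up the brew-installed interpreter, you can check where python3 resolves from; the exact path varies by machine and brew version (for example, /usr/local/bin/python3 on many Intel Macs):

which python3
python3 --version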

Setting the PATH

Suppose you have installed a new Python 3 version, but when you type python it still shows the default Python 2 version that ships with macOS. To solve this, add an alias by running:

alias python=python3

Add this line to the file called .bash_profile in your home directory. If this file is not present, create it, save the changes, and restart the terminal by closing it. Then open the terminal, run python, and hit Enter. You should see the latest Python 3 that you installed.
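
As a quick sketch, you can append the alias and reload the profile in one step. This assumes your shell reads ~/.bash_profile; newer macOS versions default to zsh, which reads ~/.zshrc instead:

echo 'alias python=python3' >> ~/.bash_profile
source ~/.bash_profile
python --version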

Sometimes when you type python, or python3 explicitly, it does not work even though you have installed Python: you get the message "command not found". This means the command is not present in the directories the machine uses for lookup. Let's check the directories where the machine looks for commands by running:

echo $PATH

It lists all the directories where the machine looks for commands, and this list varies from machine to machine. If the command you are trying to run is not in any of the directories listed by echo, it will not work; you will keep getting a command-not-found error until you provide the full path to the directory where it is installed.
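
Until the PATH is fixed, you can still run the interpreter by its full path; for example, with the framework install location used in the next step:

/Library/Frameworks/Python.framework/Versions/3.7/bin/python3 --version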

Now let's open the .bash_profile file and add the directory where Python is installed to the current PATH environment variable.

For example, add the following lines to .bash_profile, which prepend the directory below to the current PATH. The exact directory can vary from machine to machine based on the install location.

PATH="/Library/Frameworks/Python.framework/Versions/3.7/bin:${PATH}"
export PATH

Save the changes and restart the terminal. Open the terminal, run echo $PATH again, and you should see the path you added for Python 3. When you now type the python3 command, it should work.

Also, if you are trying to import a package that you have installed and Python says it cannot find it, this means pip is installing packages into a different Python version's directory. Make sure the package's location is in the site-packages directory of the Python version you are using. You can see the location of the package you are trying to import by running:

pip show <packagename>

The output of the above command includes a Location field, which you can use to cross-verify the path.
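
For example, if you have installed the requests library, the output looks roughly like this (the version and path will differ on your machine):

pip3 show requests
# Name: requests
# Version: ...
# Location: /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages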

How to run Python code

To run Python code, just run the command:
python <pythonfile.py>
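
For example, create a minimal one-line script (hello.py is just an example filename):

print("Hello from Python 3!")

Save it as hello.py and run python3 hello.py; the greeting is printed to the terminal.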

Installing Additional Python Packages:

If you want to see which packages are installed in the environment, run the command pip3 list, which lists the currently installed packages. Say you want to install the requests library: you can install it by running pip3 install requests. Now run pip3 list again to see the requests library installed in this environment.
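
As a quick check that the newly installed library is importable, you can run a one-liner; python.org here is just an example URL:

python3 -c "import requests; print(requests.get('https://www.python.org').status_code)"

It should print 200 if the request succeeds.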

Directory as package for distribution:

Inside the Python project directory you should have a file called __init__.py. You can create this file with a simple touch command, and it does not need to contain any data; all it has to do is exist inside the directory for that directory to work as a package.
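
A minimal sketch, using a hypothetical package name mypackage:

mkdir mypackage
touch mypackage/__init__.py

From the parent directory, import mypackage now works, because the empty __init__.py marks the directory as a package.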

Documentation links for Python

https://www.python.org/doc/

Conclusion

This article provided stepwise instructions for installing Python on a Mac, along with the basics of pip, virtual environments, and the PATH.
