A Guide to Using AI Responsibly

"The artificial intelligence (AI) that we develop is impacting people directly or indirectly. And often, they don’t deserve the consequences of those impacts, nor have they asked for it. It is incumbent upon us to make sure that we are doing the right thing”.

- Dr. Anthony Franklin, Senior Data Scientist and AI Engineer, Microsoft


Digitally addressing a live global audience in a recent webinar on ‘Responsible AI’, Dr. Anthony Franklin, a senior data science expert and AI evangelist at Microsoft, spoke about the challenges that ever-evolving AI poses to society and how the inherent biases of humans are reflected in technology.

Drawing from his experience in machine learning, risk analytics, analytics model management in government, and data warehousing, Dr. Franklin shed light on the critical need to build ethics into AI development. Citing incidents from around the world, he emphasized why an uncompromising approach to using AI responsibly is essential, and he also spoke about our (over)indulgence in technology.

The purpose of the talk and this article is to help frame the debate on responsible AI with a set of principles we can anchor on, and a set of actions we can all take to advance the promise of AI in ways that don’t cause harm to people. In this article, we present key insights from the webinar along with the video for you to follow along.

KnowledgeHut webinar on Responsible AI by Dr. Anthony Franklin, Microsoft

What is the debate about?

These are times when we can expect to see police officers on the streets wearing AI glasses, viewing and profiling the public. Military organizations today can keep an eye on the public. And a simple exercise of googling the word “CEO” returns pages and pages of white men.

Police using AI glasses for public surveillance in China

These are just some examples of the unparalleled success we have achieved in technology, coupled with the fact that the same technology has overlooked basic ethical, moral, and social considerations.

Responsible AI is a critical global need

In a recent study spanning the top ten technologically advanced nations, nearly nine in ten organizations reported encountering ethical issues resulting from the use of AI.

Artificial intelligence has captured our imagination and made many things we would have thought impossible only a few years ago seem commonplace today. But AI has also raised some challenging issues for society writ large. We are in a race to advance AI capabilities, and everything is about collecting data. But what is being done with that data?

Advancements in AI are different from other technologies because of the pace of innovation and its proximity to human intelligence – impacting us at a personal and societal level.

While there is no end to this road of development, the need for an equally robust framework to govern it has only grown. Responsible AI is a critical global need.

What developers are saying about ethics in AI

Stack Overflow carried out a pair of anonymous, developer-focused surveys in 2018. Some of the responses are a clear indication of where developers stand on ethics. While we might wish the answers were all “No”, the actual answers are not too surprising.

1. What would developers do if asked to write code for an unethical purpose?

The majority (58.5 percent) stated they would clearly decline if approached to write code for an unethical purpose. Over a third (37 percent), however, said their answer would depend on what the code was for.

2. Who is ultimately responsible for the code which accomplishes something unethical?

When asked where the ultimate responsibility lies if their code were used to accomplish something unethical, nearly one fifth of developers acknowledged that it should lie with the developer who wrote the code. 23 percent said accountability should rest with the person who came up with the idea, while a majority (nearly 60 percent) felt that senior management should be responsible.

3. Do the developers have an obligation to consider the ethical implications?

A significant majority (80 percent) acknowledged that developers have the obligation to consider ethical implications. 

Though the proportions are small, these responses show that some developers are willing to get involved in unethical work and that many tend to push accountability elsewhere. There is therefore a great and growing need, not just for developers but for the rest of us, to work collectively to change these numbers.

The six basic principles of AI

Though they can sound abstract, the principles attached to the ethics of AI are very much tangible. Following are the six basic principles of AI:

1. Fairness

Fairness (noun)
the state, condition, or quality of being fair, or free from bias or injustice; evenhandedness

Discrimination

Among the many services Amazon provides today is same-day shipping. The map below shows the reach of this service in six major US metropolitan areas.

Source: Bloomberg   

In the city of Boston, one can see the gaps, the places where the service is not provided. Coincidentally, these turned out to be areas inhabited by people from the lower economic strata. In its defence, Amazon stated that the policy was aimed primarily at regions with denser concentrations of Amazon users.

Whichever way this is seen, the approach still ends up being discriminatory.

We see examples of bias in search as well. A search for “CEO” in Bing returns pictures that are overwhelmingly of white men, creating the impression that there are no women CEOs.

Racism

We see examples of bias across different applications of AI. An Asian American man’s photo was submitted to renew his passport, and after analysing the image the application reported, “Subject’s eyes are closed”.

This highlights how a data-driven system can cause real harm even when the harm is unintentional. It also shows how biases held by humans carry over into the technology we build.

An algorithm widely used in US hospitals to allocate healthcare to patients has been systematically discriminating against black people, a sweeping analysis has found.

Health-care algorithms

The study, published in Science in October 2019, concluded that the algorithm was less likely to refer black patients than equally sick white patients to programmes that aim to improve care for people with complex medical needs. Hospitals and insurers use the algorithm, and others like it, to help manage care for about 200 million people in the United States each year.

As a result, millions of black people have not received equal medical treatment. To make things worse, the data suggests that, in one way or another, the algorithms have been set up around cost and revenue.
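To make the kind of disparity the Science study describes easier to picture, here is a minimal, hypothetical audit sketch in Python. The column names, toy data, and grouping rule are illustrative assumptions, not the actual algorithm or dataset from the study; the idea is simply to compare referral rates across groups of patients who are equally sick.

```python
# Hypothetical audit sketch: compare programme-referral rates across groups
# for patients at the same level of need. The column names and toy data are
# illustrative assumptions, not the algorithm studied in Science.
import pandas as pd

patients = pd.DataFrame({
    "race": ["black", "white", "black", "white", "black", "white", "black", "white"],
    "num_chronic_conditions": [4, 4, 4, 4, 2, 2, 2, 2],
    "referred": [0, 1, 1, 1, 0, 0, 0, 1],
})

# Group patients by how sick they are, then compare referral rates by race.
# Large gaps at the same level of sickness are a red flag worth investigating.
audit = (patients
         .groupby(["num_chronic_conditions", "race"])["referred"]
         .mean()
         .unstack("race"))
print(audit)
print("Referral-rate gap (white minus black):")
print(audit["white"] - audit["black"])
```

In a real audit one would use the full dataset and clinically validated measures of need, but the shape of the check, comparing outcomes for equally sick patients across groups, stays the same.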

In 2015, Google became one of the first companies to release a facial recognition programme. The system recognized Caucasian faces well, but it labelled a black person as an ape.

These examples of bias in technology are not isolated from the society we live in. Society holds many forms of bias that may not be consistent with a corporation’s values, yet these biases may already be present in its data sets.

With the widespread use of AI and statistical learning, such enterprises are at serious risk not only of spreading but also amplifying these biases in ways that they do not understand.

These examples demonstrate gross unfairness on multiple fronts, making it necessary for organizations to work with more diverse data in general.

2. Reliability and Safety

Reliability (noun)

the ability to be relied on or depended on, as for accuracy, honesty, or achievement.

Safety (noun)

the state of being safe; freedom from the occurrence or risk of injury, danger, or loss.

the quality of averting or not causing injury, danger, or loss.

In the case of autonomous vehicles, how can we as consumers be 100 percent sure of our safety? Or can we ever be? How many miles must a car cover, or how many lives must be lost, before the rest of us can be assured of our safety? These are just a few of the questions a company must answer before establishing itself as a reliable organization.

A project from scientists in the UK and India shows one possible use for automated surveillance technology to identify violent behavior in crowds with the help of camera-equipped drones.

In a paper titled “Eye in the Sky,” the researchers used a simple Parrot AR quadcopter (which costs around $200) to transmit video footage over a mobile internet connection for real-time analysis. A figure in the paper shows how the software analyzes individuals’ poses and matches them to “violent” postures. The question is: how will this technology be used, and who will use it?
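To give a sense of what “matching poses to violent postures” can look like in practice, here is a small, purely illustrative Python sketch. The keypoints, the angle rule, and the threshold are assumptions made for the example and are not the classifier from the “Eye in the Sky” paper; in a real system the keypoints would come from a pose-estimation model rather than being hard-coded.

```python
# Illustrative sketch only: flagging a "raised, extended arm" posture from 2D keypoints.
# In practice the keypoints would come from a pose estimator; here they are hard-coded.
# The rule and threshold below are assumptions, not the paper's actual classifier.
import math

def angle(a, b, c):
    """Angle at point b (in degrees) formed by points a-b-c, each an (x, y) tuple."""
    ab = (a[0] - b[0], a[1] - b[1])
    cb = (c[0] - b[0], c[1] - b[1])
    dot = ab[0] * cb[0] + ab[1] * cb[1]
    norm = math.hypot(*ab) * math.hypot(*cb)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# (x, y) image coordinates for one detected person (y grows downward in images).
keypoints = {"shoulder": (100, 200), "elbow": (130, 150), "wrist": (160, 100)}

# A nearly straight arm raised above the shoulder counts as a candidate
# "striking" posture under this toy rule.
arm_angle = angle(keypoints["shoulder"], keypoints["elbow"], keypoints["wrist"])
arm_raised = keypoints["wrist"][1] < keypoints["shoulder"][1]

if arm_angle > 150 and arm_raised:
    print("Candidate violent posture: raised, extended arm")
else:
    print("No flag for this person")
```

Even this toy rule hints at the problem: thresholds chosen by the system’s designers decide who gets flagged, and those choices are invisible to the people being watched.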

Researchers working in this field often note there is a huge difference between staged tests and real-world use-cases. Though this system is yet to prove itself, it is a clear illustration of the direction contemporary research is going.

Using AI to identify body poses is a common problem, with big tech companies like Facebook publishing significant research on the topic. Many experts agree that automated surveillance technologies are ripe for abuse by law enforcement and authoritarian governments.

3. Privacy and security

Privacy (noun)

the state of being apart from other people or concealed from their view; solitude; seclusion.

the state of being free from unwanted or undue intrusion or disturbance in one's private life or affairs; freedom to be let alone.

Security (noun)

freedom from danger, risk, etc.; safety.

freedom from care, anxiety, or doubt; well-founded confidence.

something that secures or makes safe; protection; defense.

Strava’s heat map revealed military bases around the world and exposed soldiers to real danger – this is not AI per se, but useful for a data discussion. A similar instance took place in Russia, too.

iRobot’s latest Roomba, the i7+ robovac, maps users’ homes to let them customize the cleaning schedule. An integration with Google Assistant lets customers give verbal commands like, “OK Google, tell Roomba to clean the kitchen.” This is a voluntary feature and requires the user’s consent.

In October 2018, Google admitted it had exposed the personal data of around 500,000 Google+ users, leading to the closure of the platform. It also announced it was reviewing third-party access to Gmail after it was revealed that many developers were reading and analyzing users’ personal mail for marketing and data mining.

A 2012 New York Times article told the story of a father who found himself in the uncomfortable position of having to apologize to a Target employee. Earlier, he had stormed into a store near Minneapolis and complained to the manager that his teenage daughter was receiving coupons for cribs and baby clothes in the mail. It turned out that Target knew his daughter better than he did: she was pregnant, and Target knew it before her dad did.

By crawling the teen’s data, statisticians at Target were able to identify about 25 products that, when analysed together, allowed them to assign each shopper a “pregnancy prediction” score. More importantly, they could also estimate her due date to within a small window, so they could send coupons timed to very specific stages of her pregnancy.
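A score like this is, at heart, ordinary supervised learning over purchase histories. The sketch below is a hypothetical illustration using scikit-learn on made-up data; the product features, labels, and choice of model are assumptions for the example, not Target’s actual system.

```python
# Hypothetical illustration of a "prediction score" built from purchase features.
# The features, labels, and data are invented; this is not Target's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: bought unscented lotion, bought a large handbag, bought calcium supplements
X = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [0, 0, 0],
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = shopper later confirmed pregnant (e.g. baby registry)

model = LogisticRegression().fit(X, y)

# The predicted probability plays the role of the "pregnancy prediction" score.
new_shopper = np.array([[1, 0, 1]])
score = model.predict_proba(new_shopper)[0, 1]
print(f"Prediction score for the new shopper: {score:.2f}")
```

The privacy problem is not the arithmetic, which is unremarkable, but the fact that shoppers never agreed to be scored in this way.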

There was another instance reported in Canada of a mall using facial recognition software in its directories to track shoppers’ ages and genders without telling them.

4. Inclusiveness

Inclusive (adjective)

including or encompassing the stated limit or extremes in consideration or account (usually used postpositively)

including a great deal, or encompassing everything concerned; comprehensive

In the K.W. v. Armstrong case, the plaintiffs were vulnerable adults living in Idaho with various psychological and developmental disabilities. They went to court when the Idaho Department of Health and Welfare cut their medical assistance budgets by a whopping 42%.

The Idaho Department of Health and Welfare claimed that the reasons for the cuts were “trade secrets” and refused to disclose the algorithm it used to calculate the reductions.

K.W. v. Armstrong plaintiff, Christie Mathwig

Once a system is found to be discriminatory or otherwise inaccurate, there is an additional challenge in redesigning the system. Ideally, government agencies should develop an inclusive redesign process that allows communities affected by algorithmic decision systems to meaningfully participate. But this approach is frequently met with resistance.

5. Transparency

Transparent (adjective)

having the property of transmitting rays of light through its substance so that bodies situated beyond or behind can be distinctly seen. 

admitting the passage of light through interstices. 

so sheer as to permit light to pass through; diaphanous. 

easily seen through, recognized, or detected

A company assisted New Orleans police officials in predicting which individuals were likely to commit crimes. This is an example of predictive analytics being used for policing strategies, carried out secretively.

In the Rich Caruana case study, data from 10 million patients, covering thousands of features, was used to train a model to predict pneumonia risk and decide whether patients should be admitted to hospital. But was this model safe to deploy and use on real patients? Was the data sufficient to make accurate predictions?

Unfortunately, a number of different machine learning models had been used to train an accurate black box without anyone knowing what was inside it. A multitask neural net was thought to be the most accurate, but was the approach safe?

The patterns in the data were, strictly speaking, accurate. The good news was that treatment for asthmatics was so effective that it lowered their risk of dying below that of the general population. The bad news was that if this model were used to decide whether to admit patients to hospital, it would be dangerous to asthmatics and hence not at all safe to use.

Not only is this an issue of safety, it is also a violation of transparency. The key problem is that there are bad patterns in the data that we don’t know about. While a neural net is more accurate and can learn quickly, we don’t know everything the neural net is relying on. We really need to understand a model before we deploy it.

Then, through a technique called Generalized Additive Models, in which the influence of each individual attribute in the training data can be measured independently, a new model was trained whose outputs are completely transparent and which actually improved on the old model’s performance.

Asthmatics had better outcomes because they were rushed to the front of the line as soon as they arrived at the hospital. Faster and more targeted care led to better results, and those results were all the model learned from.
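As a rough illustration of what a transparent, GAM-style model looks like in code, here is a minimal sketch using the open-source InterpretML library’s Explainable Boosting Machine, a modern generalized additive model. The synthetic data and the choice of package are assumptions made for the example; this is not the model from the pneumonia study.

```python
# Minimal sketch: a glass-box, GAM-style model whose per-feature contributions
# can be inspected before deployment. Requires `pip install interpret scikit-learn`.
# The synthetic data stands in for real patient records and is purely illustrative.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier

# Synthetic stand-in for patient features (age, asthma flag, vital signs, ...).
X, y = make_classification(n_samples=5000, n_features=10, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An Explainable Boosting Machine fits one shape function per feature, so the
# influence of each attribute can be read off directly, as in a GAM.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, ebm.predict(X_test)))

# Global explanation: per-feature contribution curves that a clinician could
# review before deployment, e.g. to spot a suspicious "asthma lowers risk" pattern.
global_exp = ebm.explain_global()
# In a notebook, `from interpret import show; show(global_exp)` renders the plots.
print(global_exp.data(0))  # contribution summary for the first feature
```

The point is the workflow: because every feature’s contribution is visible, a dangerous pattern can be caught and reviewed by domain experts before the model ever reaches patients.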

In another instance, one of the tools used by the New Orleans Police Department to identify members of gangs like 3NG and the 39ers came from the Silicon Valley company Palantir. The company provided software to a secretive NOPD program that traced people’s ties to other gang members, outlined criminal histories, analyzed social media, and predicted the likelihood that individuals would commit violence or become a victim.

As part of the discovery process in one trial, the government turned over more than 60,000 pages of documents detailing evidence gathered against the defendant from confidential informants, ballistics, and other sources, but it made no mention of the NOPD’s partnership with Palantir.

6. Accountability

Accountable (adjective)

subject to the obligation to report, explain, or justify something; responsible; answerable.

capable of being explained; explicable; explainable.

As in the example of autonomous vehicles: in the case of any mishap, where does the accountability lie? Who is to blame for the loss of life or any other destruction caused by a driverless car?

It appears that the more advanced the technology, the more diffuse accountability becomes. Be it a driverless car crashing or a robot killing a person, the question remains: who is to blame?

Whom does one sue after being hit by a driverless car? What if a medical robot gives a patient the wrong drug? What if a vacuum robot sucks up someone’s hair while they are napping on the floor? And can a robot commit a war crime? Who gets to decide whether a person deserves certain treatment under an algorithm-based healthcare policy: the organization that developed it or the developer who built it? In such situations there is a clear lack of accountability.

Liability of automated systems: the debate continues.

The key word in all of the principles above is impact. The consequence of any AI programming, intentional or unintentional, leaves a strong impact.

The responsible AI lifecycle

Both the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) published ethics guidelines for computer scientists in the early 1990s. More recently, we have seen countless social scientists and STS researchers sounding the alarm about technology’s potential to harm people and society.

To turn talk about responsible AI into action, organizations need to make sure that their use of AI fulfils several criteria. After defining its basic AI principles, an organization can develop a prototype, but it must remain open to change even after launching what it assumes to be the most fool-proof AI service.

Microsoft’s Responsible AI Lifecycle is built around six key stages:

  1. Define: Define the objectives, data requirements, and responsible metrics.
  2. Envision: Consider the consequences and potential risks.
  3. Prototype: Build prototypes based on data, models, and experience, and test frequently.
  4. Build: Build and integrate AI according to responsible metrics and trade-offs.
  5. Launch: Launch only after diverse ring-testing, with escalation and recovery plans in place.
  6. Evolve: Continuously analyse and improve.

Responsible AI Lifecycle. Source: Microsoft

Microsoft is leading the way with detailed guidelines to help teams put responsible AI into practice. Their Guidelines for Human-AI Interaction recommend best practices for how AI systems should behave upon initial interaction, during regular interaction, when they’re inevitably wrong, and over time. They are meant to be used throughout the design process as existing ideas are evaluated, new ideas are brainstormed, and teams collaborate across multiple disciplines in creating AI.

In addition, several types of guidelines are provided to engineering teams, including conversational AI guidelines, inclusive design guidelines, an AI fairness checklist, and AI security engineering guidance.

All of these guidelines are designed to help teams anticipate and address potential issues throughout the software development lifecycle and mitigate security, risk, and ethics concerns.

Principles to practices

AI is already having an enormous and positive impact on healthcare, the environment, and a host of other societal needs. These rapid advances have given rise to an industry debate about how the world should (or shouldn’t) use these new capabilities. 

As these systems become increasingly important to our lives, it is critical that when they fail, we understand how and why, whether the failure stems from the inherent design of the system or from the actions of an adversary.

In conclusion, Dr. Franklin emphasized the need for enterprises to understand how bias can be introduced and affect recommendations. Attracting a diverse pool of AI talent across the organization is critical to develop analytical techniques to detect and eliminate bias, he stressed.

We hope Dr. Franklin's webinar and this article have helped frame the debate on responsible AI and provided us with a set of principles we can anchor on, and a set of actions we can take to advance the promise of AI in ways that don’t cause harm to people.

KnowledgeHut

KnowledgeHut

Author

KnowledgeHut is an outcome-focused global ed-tech company. We help organizations and professionals unlock excellence through skills development. We offer training solutions under the people and process, data science, full-stack development, cybersecurity, future technologies and digital transformation verticals.
Website : https://www.knowledgehut.com

Join the Discussion

Your email address will not be published. Required fields are marked *

Suggested Blogs

Regression Analysis And Its Techniques in Data Science

As a Data Science enthusiast, you might already know that a majority of business decisions these days are data-driven. However, it is essential to understand how to parse through all the data. One of the most important types of data analysis in this field is Regression Analysis. Regression Analysis is a form of predictive modeling technique mainly used in statistics. The term “regression” in this context, was first coined by Sir Francis Galton, a cousin of Sir Charles Darwin. The earliest form of regression was developed by Adrien-Marie Legendre and Carl Gauss - a method of least squares. Before getting into the what and how of regression analysis, let us first understand why regression analysis is essential. Why is regression analysis important? The evaluation of relationship between two or more variables is called Regression Analysis. It is a statistical technique.  Regression Analysis helps enterprises to understand what their data points represent, and use them wisely in coordination with different business analytical techniques in order to make better decisions. Regression Analysis helps an individual to understand how the typical value of the dependent variable changes when one of the independent variables is varied, while the other independent variables remain unchanged.  Therefore, this powerful statistical tool is used by Business Analysts and other data professionals for removing the unwanted variables and choosing only the important ones. The benefit of regression analysis is that it allows data crunching to help businesses make better decisions. A greater understanding of the variables can impact the success of a business in the coming weeks, months, and years in the future.  Data Science The regression method of forecasting, as the name implies, is used for forecasting and for finding the casual relationship between variables. From a business point of view, the regression method of forecasting can be helpful for an individual working with data in the following ways: Predicting sales in the near and long term. Understanding demand and supply. Understanding inventory levels. Review and understand how variables impact all these factors. However, businesses can use regression methods to understand the following: Why did the customer service calls drop in the past months? How the sales will look like in the next six months? Which ‘marketing promotion’ method to choose? Whether to expand the business or to create and market a new product. The ultimate benefit of regression analysis is to determine which independent variables have the most effect on a dependent variable. It also helps to determine which factors can be ignored and those that should be emphasized. Let us now understand what regression analysis is and its associated variables. What is regression analysis?According to the renowned American mathematician John Tukey, “An approximate answer to the right problem is worth a good deal more than an exact answer to an approximate problem". This is precisely what regression analysis strives to achieve.  Regression analysis is basically a set of statistical processes which investigates the relationship between a dependent (or target) variable and an independent (or predictor) variable. It helps assess the strength of the relationship between the variables and can also model the future relationship between the variables. Regression analysis is widely used for prediction and forecasting, which overlaps with Machine Learning. 
On the other hand, it is also used for time series modeling and finding causal effect relationships between variables. For example, the relationship between rash driving and the number of road accidents by a driver can be best analyzed using regression.  Let us now understand regression with an example. Meaning of RegressionLet us understand the concept of regression with an example. Consider a situation where you conduct a case study on several college students. We will understand if students with high CGPA also get a high GRE score. Our first job is to collect the details of the GRE scores and CGPAs of all the students of a college in a tabular form. The GRE scores and the CGPAs are listed in the 1st and 2nd columns, respectively. To understand the relationship between CGPA and GRE score, we need to draw a scatter plot.  Here, we can see a linear relationship between CGPA and GRE score in the scatter plot. This indicates that if the CGPA increases, the GRE scores also increase. Thus, it would also mean that a student with a high CGPA is likely to have a greater chance of getting a high GRE score. However, if a question arises like “If the CGPA of a student is 8.51, what will be the GRE score of the student?”. We need to find the relationship between these two variables to answer this question. This is the place where Regression plays its role. In a regression algorithm, we usually have one dependent variable and one or more than one independent variable where we try to regress the dependent variable "Y" (in this case, GRE score) using the independent variable "X" (in this case, CGPA). In layman's terms, we are trying to understand how the value of "Y" changes concerning the change in "X". Let us now understand the concept of dependent and independent variables. Dependent and Independent variables In data science, variables refer to the properties or characteristics of certain events or objects. There are mainly two types of variables while performing regression analysis which is as follows: Independent variables – These variables are manipulated or are altered by researchers whose effects are later measured and compared. They are also referred to as predictor variables. They are called predictor variables because they predict or forecast the values of dependent variables in a regression model. Dependent variables – These variables are the type of variable that measures the effect of the independent variables on the testing units. It is safer to say that dependent variables are completely dependent on them. They are also referred to as predicted variables. They are called because these are the predicted or assumed values by the independent or predictor variables. When an individual is looking for a relationship between two variables, he is trying to determine what factors make the dependent variable change. For example, consider a scenario where a student's score is a dependent variable. It could depend on many independent factors like the amount of study he did, how much sleep he had the night before the test, or even how hungry he was during the test.  In data models, independent variables can have different names such as “regressors”, “explanatory variable”, “input variable”, “controlled variable”, etc. 
On the other hand, dependent variables are called “regressand,” “response variable”, “measured variable,” “observed variable,” “responding variable,” “explained variable,” “outcome variable,” “experimental variable,” or “output variable.” Below are a few examples to understand the usage and significance of dependent and independent variables in a wider sense: Suppose you want to estimate the cost of living of a person using a regression model. In that case, you need to take independent variables as factors such as salary, age, marital status, etc. The cost of living of a person is highly dependent on these factors. Thus, it is designated as the dependent variable. Another scenario is in the case of a student's poor performance in an examination. The independent variable could be factors, for example, poor memory, inattentiveness in class, irregular attendance, etc. Since these factors will affect the student's score, the dependent variable, in this case, is the student's score.  Suppose you want to measure the effect of different quantities of nutrient intake on the growth of a newborn child. In that case, you need to consider the amount of nutrient intake as the independent variable. In contrast, the dependent variable will be the growth of the child, which can be calculated by factors such as height, weight, etc. Let us now understand the concept of a regression line. What is the difference between Regression and Classification?Regression and Classification both come under supervised learning methods, which indicate that they use labelled training datasets to train their models and make future predictions. Thus, these two methods are often classified under the same column in machine learning.However, the key difference between them is the output variable. In regression, the output tends to be numerical or continuous, whereas, in classification, the output is categorical or discrete in nature.  Regression and Classification have certain different ways to evaluate the predictions, which are as follows: Regression predictions can be interpreted using root mean squared error, whereas classification predictions cannot.  Classification predictions can be evaluated using accuracy, whereas, on the other hand, regression predictions cannot be evaluated using the same. Conclusively, we can use algorithms like decision trees and neural networks for regression and classification with small alterations. However, some other algorithms are more difficult to implement for both problem types, for example, linear regression for regressive predictive modeling and logistic regression for classification predictive modeling. What is a Regression Line?In the field of statistics, a regression line is a line that best describes the behaviour of a dataset, such that the overall distance from the line to the points (variable values) plotted on a graph is the smallest. In layman's words, it is a line that best fits the trend of a given set of data.  Regression lines are mainly used for forecasting procedures. The significance of the line is that it describes the interrelation of a dependent variable “Y” with one or more independent variables “X”. It is used to minimize the squared deviations of predictions.  If we take two variables, X and Y, there will be two regression lines: Regression line of Y on X: This gives the most probable Y values from the given values of X. Regression line of X on Y: This gives the most probable values of X from the given values of Y. 
The correlation between the variables X and Y depend on the distance between the two regression lines. The degree of correlation is higher if the regression lines are nearer to each other. In contrast, the degree of correlation will be lesser if the regression lines are farther from each other.  If the two regression lines coincide, i.e. only a single line exists, correlation tends to be either perfect positive or perfect negative. However, if the variables are independent, then the correlation is zero, and the lines of regression will be at right angles.  Regression lines are widely used in the financial sector and business procedures. Financial Analysts use linear regression techniques to predict prices of stocks, commodities and perform valuations, whereas businesses employ regressions for forecasting sales, inventories, and many other variables essential for business strategy and planning. What is the Regression Equation? In statistics, the Regression Equation is the algebraic expression of the regression lines. In simple terms, it is used to predict the values of the dependent variables from the given values of independent variables.  Let us consider one regression line, say Y on X and another line, say X on Y, then there will be one regression equation for each regression line: Regression Equation of Y on X: This equation depicts the variations in the dependent variable Y from the given changes in the independent variable X. The expression is as follows: Ye = a + bX Where,  Ye is the dependent variable, X is the independent variable, a and b are the two unknown constants that determine the position of the line. The parameter “a” indicates the distance of a line above or below the origin, i.e. the level of the fitted line, whereas parameter "b" indicates the change in the value of the independent variable Y for one unit of change in the dependent variable X. The parameters "a" and "b" can be calculated using the least square method. According to this method, the line needs to be drawn to connect all the plotted points. In mathematical terms, the sum of the squares of the vertical deviations of observed Y from the calculated values of Y is the least. In other words, the best-fitted line is obtained when ∑ (Y-Ye)2 is the minimum. To calculate the values of parameters “a” and “b”, we need to simultaneously solve the following algebraic equations: ∑ Y = Na + b ∑ X ∑ XY = a ∑ X + b ∑ X2 Regression Equation of X on Y: This equation depicts the variations in the independent variable Y from the given changes in the dependent variable X. The expression is as follows: Xe = a + bY  Where,  Xe is the dependent variable, Y is the independent variable, a and b are the two unknown constants that determine the position of the line. Again, in this equation, the parameter “a” indicates the distance of a line above or below the origin, i.e. the level of the fitted line, whereas parameter "b" indicates the slope, i.e. change in the value of the dependent variable X for a unit of change in the independent variable Y. To calculate the values of parameters “a” and “b” in this equation, we need to simultaneously solve the following two normal equations: ∑ X = Na + b ∑ Y ∑ XY = a ∑ Y + b ∑ Y2 Please note that the regression lines can be completely determined only if we obtain the constant values “a” and “b”. How does Linear Regression work?Linear Regression is a Machine Learning algorithm that allows an individual to map numeric inputs to numeric outputs, fitting a line into the data points. 
It is an approach to modeling the relationship between one or more variables. This allows the model to able to predict outputs. Let us understand the working of a Linear Regression model using an example. Consider a scenario where a group of tech enthusiasts has created a start-up named Orange Inc. Now, Orange has been booming since 2016. On the other hand, you are a wealthy investor, and you want to know whether you should invest your money in Orange in the next year or not. Let us assume that you do not want to risk a lot of money, so you buy a few shares. Firstly, you study the stock prices of Orange since 2016, and you see the following figure: It is indicative that Orange is growing at an amazing rate where their stock price has gone from 100 dollars to 500 dollars in only three years. Since you want your investment to boom along with the company's growth, you want to invest in Orange in the year 2021. You assume that the stock price will fall somewhere around $500 since the trend will likely not go through a sudden change. Based on the information available on the stock prices of the last couple of years, you were able to predict what the stock price is going to be like in 2021.  You just inferred your model in your head to predict the value of Y for a value of X that is not even in your knowledge. This mental method you undertook is not accurate anyway because you were not able to specify what exactly will be the stock price in the year 2021. You just have an idea that it will probably be above 500 dollars. This is where Regression plays its role. The task of Regression is to find the line that best fits the data points on the plot so that we can calculate where the stock price is likely to be in the year 2021.  Let us examine the Regression line (in red) by understanding its significance. By making some alterations, we obtained that the stock price of Orange is likely to be a little higher than 600 dollars by the year 2021. This example is quite oversimplified, so let us examine the process and how we got the red line on the next plot. Training the Regressor The example mentioned above is an example of Univariate Linear Regression since we are trying to understand the change in an independent variable X to one dependent variable, Y. Any regression line on a plot is based on the formula: f(X) = MX + B  Where, M is the slope of the line, B is the y-intercept that allows the vertical movement of the line, And X is the function’s input variable. In the field of Machine Learning, the formula is as follows: h(X) = W0 + W1X  Where, W0 and W1 are the weights, X is the input variable, h(X) is the label or the output variable. Regression works by finding the weights W0 and W1 that lead to the best-fitting line for the input variable X. The best-fitted line is obtained in terms of the lowest cost. Now, let us understand what does cost means here. The cost functionDepending upon the Machine Learning application, the cost could take different forms. However, in a generalized view, cost mainly refers to the loss or error that the regression model yields in its distance from the original training dataset. In a Regression model, the cost function is the Squared Error Cost: J(W0,W1) = (1/2n) Σ { (h(Xi) - Ti)2} for all i =1 until i = n    Where, J(W0, W1) is the total cost of the model with weights W0 and W1, h(Xi) is the model’s prediction of the independent variable Y at feature X with index  i, Ti is the actual y-value at index i, and n refers to the total number of data points in the dataset. 
Goodness-of-Fit in a Regression Model
Linear regression finds the equation of the line that minimizes the distance between the fitted line and all of the data points. Determining how well the model fits the data is crucial in a linear model. The general idea is that if the deviations between the observed values and the values predicted by the linear model are small and unbiased, the model fits the data well.

In technical terms, "goodness-of-fit" is a mathematical summary of the differences between the observed and expected values, or of how well the model fits a set of observations. This measure can be used in statistical hypothesis testing.

How do businesses use Regression Analysis?
Regression analysis is a statistical technique used to evaluate the relationship between a dependent variable and one or more independent variables. Organizations use regression analysis to understand the significance of their data points and apply analytical techniques to make better decisions. Business analysts and data professionals use this statistical tool to discard insignificant variables and select the significant ones. There are numerous ways in which businesses use regression analysis. Let us discuss some of them below.

1. Decision-making
Businesses need to make better decisions to run smoothly and efficiently, and it is also necessary to understand the effects of the decisions taken. They collect data on various factors such as sales, investments and expenditures, and analyze it for further improvements. Organizations use regression analysis to make sense of this data and gather meaningful insights, and business analysts and data professionals rely on it to support strategic business decisions.

2. Optimization of business
The main role of regression analysis is to convert collected data into actionable insights. Old-school techniques like guesswork and untested hypotheses have been phased out by organizations, which now focus on data-driven decision-making that improves work performance. This analysis helps the management within an organization to take practical and smart decisions, because a huge volume of data can be interpreted and understood to gain efficient insights.

3.
Predictive Analysis Businesses make use of regression analysis to find patterns and trends. Business Analysts build predictions about future trends using historical data. Regression methods can also go beyond predicting the impact on immediate revenue. Using this method, you can forecast the number of customers willing to buy a service and use that data to estimate the workforce needed to run that service. Most insurance companies use regression analysis to calculate the credit health of their policyholders and the probable number of claims in a certain period. Predictive Analysis helps businesses to: Minimize costs Minimize the number of required tools Provide fast and efficient results Detect fraud Risk Management Optimize marketing campaigns 4. Correcting errors Regression Analysis is not only used for predicting trends, but it is also useful to identify errors in judgements. Let us consider a situation where the executive of an organization wants to increase the working hours of its employees and make them work extra time to increase the profits. In such a case, regression analysis analyses all the variables and it may conclude that an increase in the working hours beyond their existing time of work will also lead to an increase in the operation expense like utilities, accounting expenditures, etc., thus leading to an overall decrease in the profit.   Regression Analysis provides quantitative support for better decision-making and helps organizations minimize mistakes. 5. New Insights Organizations generate a large amount of cluttered data that can provide valuable insights. However, this vast data is useless without proper analysis. Regression analysis is responsible for finding a relationship between variables by discovering patterns not considered in the model. For example, analyzing data from sales systems and purchase accounts will result in market patterns such as increased demand on certain days of the week or at certain times of the year. You can maintain optimal stock and personnel using the information before a demand spike arises. The guesswork gets eliminated by data-driven decisions. It allows companies to improve their business performance by concentrating on the significant areas with the highest impact on operations and revenue. Use cases of Regression AnalysisPharmaceutical companies Pharmaceutical organizations use regression analysis to analyze the quantitative stability data for the retest period or estimate shelf life. In this method, we find the nature of the relationship between an attribute and time. We determine whether the data should be transformed for linear regression analysis or non-linear regression analysis using the analyzed data. FinanceThe simple linear regression technique is also called the Ordinary Least Squares or OLS method. This method provides a general explanation for placing the line of the best fit among the data points.  This particular tool is used for forecasting and financial analysis. You can also use it with the Capital Asset Pricing Model (CAPM), which depicts the relationship between the risk of investing and the expected return. Credit Card Credit card companies use regression analysis to analyze various factors such as customer's risk of credit default, prediction of credit balance, expected consumer behaviour, and so on. With the help of the analyzed information, the companies apply specific EMI options and minimize the default among risky customers. When Should I Use Regression Analysis? 
Regression analysis is mainly used to describe the relationships between a set of independent variables and the dependent variable. It generates a regression equation whose coefficients correspond to the relationship between each independent variable and the dependent variable.

Analyze a wide variety of relationships
You can use regression analysis to do many things, for example:
To model multiple independent variables.
Include continuous and categorical variables.
Use polynomial terms for curve fitting.
Evaluate interaction terms to examine whether the effect of one independent variable depends on the value of another variable.

Regression analysis can untangle very tricky problems where the variables are entwined. Consider yourself to be a researcher studying any of the following:
What impact do socio-economic status and race have on educational achievement?
Do education and IQ affect earnings?
Do exercise habits and diet affect weight?
Do drinking coffee and smoking cigarettes reduce the mortality rate?
Does a particular exercise have an impact on bone density?

These research questions create a huge amount of data that entwines numerous independent and dependent variables, and raise questions about their influence on each other. It is an important task to untangle this web of related variables and find out which variables are statistically significant and what role each of them plays. To answer these questions, we need regression analysis.

Control the independent variables
Regression analysis describes how the changes in each independent variable are related to changes in the dependent variable, and it allows us to statistically control every variable included in the model. In regression analysis, it is crucial to isolate the role of each variable.

Consider a scenario where you participated in an exercise intervention study. You aimed to determine whether the intervention was responsible for increasing the subjects' bone mineral density. To achieve that, you need to isolate the role of the exercise intervention from other factors that can impact bone density, such as diet or other physical activity. To perform this task, you need to reduce the effect of these confounding variables. Regression analysis estimates the effect that a change in one independent variable has on the dependent variable while all of the other independent variables are held constant. This allows you to understand each independent variable's role without considering the other variables in the regression model.

Now, let us understand how regression can help control the other variables in the process. According to a recent study on the effect of coffee consumption on mortality, the initial results indicated that the higher the intake of coffee, the higher the risk of death. However, the researchers did not include the fact that most coffee drinkers smoke in their first model. After smoking was included in the model, the regression results were quite different from the initial ones: coffee intake lowers the risk of mortality while smoking increases it. This model isolates the role of each variable while holding the other variables constant. You can examine the effect of coffee intake while controlling for smoking; equally, you can look at smoking while controlling for coffee intake.
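This kind of confounding is easy to reproduce with a small simulation. The sketch below uses entirely made-up numbers in which smoking raises both coffee consumption and risk, while coffee itself is slightly protective; omitting smoking from the model flips the apparent sign of the coffee coefficient.

```r
# Hypothetical simulation of the coffee/smoking confounding described above
set.seed(42)
n <- 1000
smoking <- rbinom(n, 1, 0.5)                            # 0 = non-smoker, 1 = smoker
coffee  <- 2 + 1.5 * smoking + rnorm(n)                 # smokers drink more coffee
risk    <- 1 + 2.0 * smoking - 0.3 * coffee + rnorm(n)  # coffee itself lowers risk

coef(lm(risk ~ coffee))            # smoking omitted: coffee appears to raise risk
coef(lm(risk ~ coffee + smoking))  # smoking controlled for: coffee's coefficient is negative
```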
This particular example shows how omitting a significant variable can produce misleading results and causes it to be uncontrolled. This warning is mainly applicable for observational studies where the effects of omitted significant variables can be unbalanced. This omitted variable bias can be minimized in a randomization process where true experiments tend to shell out the effects of these variables in an equal manner. What are Residuals in Regression Analysis? Residuals identify the deviation of observed values from the expected values. They are also referred to as error or noise terms. It gives an insight into how good our model is against the actual value, but there are no real-life representations of residual values. Calculating the real values of intercept, slope, and residual terms can be a complicated task. However, the Ordinary Least Square (OLS) regression technique can help us speculate on an efficient model.  The technique minimizes the sum of the squared residuals. With the help of the residual plots, you can check whether the observed error is consistent with stochastic error (differences between the expected and observed values must be random and unpredictable). What are the Linear model assumptions in Regression Analysis? Regression Analysis is the first step in the process of predictive modeling. It is quite easy to implement, and its syntax and parameters do not create any kind of confusion. However, the purpose of regression analysis is not just solved by running a single line of code. It is much more than that. The function plot(model_name) returns four plots in the R programming language. Each of these plots provides essential information about the dataset. Most beginners in the field are unable to trace the information. But once you understand these plots, you can bring important improvements to your regression model. For significant improvements in your regression model, it is also crucial to understand the assumptions you need to take in your model and how you can fix them if any assumption gets violated. The four assumptions that should be met before conducting linear regression are as follows:  Linear Relationship: A linear relationship exists between the independent variable, x, and the dependent variable, y.   Independence: The residuals in linear regression are independent. In other words, there is no correlation between consecutive residuals in time series data.  Homoscedasticity: Residuals have constant variance at every level of X.  Normality: The residuals of the model are normally distributed. Assumption 1: Linear Relationships Explanation The first assumption in Linear regression is that there is a linear relationship between the independent variable X and the dependent variable Y. How to determine if this assumption is met The quickest and easiest way to detect this assumption is by creating a scatter plot of X vs Y. By looking at the scatter plot, you can have a visual representation of the linear relationship between the two variables. If the points in the plot could fall along a straight line, then there exists some type of linear relationship between the variables, and this assumption is met. For example, consider this first plot below. 
The points in the first plot look like they fall roughly on a straight line, which indicates that there is a linear relationship between X and Y. However, there doesn't appear to be a linear relationship between X and Y in the second plot below. And in the third plot, there appears to be a clear relationship between X and Y, but not a linear one.

What to do if this assumption is violated
If you create a scatter plot of X versus Y and do not find a linear relationship between the two variables, you can do two things:
You can apply a non-linear transformation to the dependent or independent variable. Common examples include taking the log, the square root, or the reciprocal of the independent and/or dependent variable.
You can add another independent variable to the regression model. For example, if the plot of X vs Y has a parabolic shape, it might make sense to add X² as an additional independent variable.

Assumption 2: Independence
Explanation
The second assumption of linear regression is that the residuals should be independent. This is particularly relevant when working with time-series data: ideally, we do not want a pattern among consecutive residuals. For example, in a time series model, the residuals should not grow steadily over time.

How to determine if this assumption is met
To determine whether this assumption is met, plot the residuals against time and look at the residual time series plot. Ideally, most of the residual autocorrelations should fall within the 95% confidence bands around zero, located at about ±2/√n, where n denotes the sample size. You can also perform the Durbin-Watson test to formally examine whether this assumption is met.

What to do if this assumption is violated
If this assumption is violated, you can do three things:
If there is positive serial correlation, consider adding lags of the dependent and/or independent variable to the model.
If there is negative serial correlation, check that none of your variables is over-differenced.
If there is seasonal correlation, consider adding seasonal dummy variables to the model.

Assumption 3: Homoscedasticity
Explanation
The third assumption of linear regression is that the residuals should have constant variance at every level of X. This property is called homoscedasticity. When it is not present, the residuals suffer from heteroscedasticity. The outcome of the regression analysis becomes hard to trust when heteroscedasticity is present: it inflates the variance of the regression coefficient estimates, but the model does not account for this, which makes it more likely to declare that a term in the model is statistically significant when it is not.

How to determine if this assumption is met
To determine whether this assumption is met, fit the regression and then make a scatter plot of fitted values versus residuals. Below is a scatterplot showing a typical fitted-value versus residual plot in which heteroscedasticity is present. You can observe how the residuals become much more spread out as the fitted values get larger.
The "cone" shape is a classic sign of heteroscedasticity.

What to do if this assumption is violated
If this assumption is violated, you can do three things:
Transform the dependent variable: The most common transformation is simply taking the log of the dependent variable. For example, if you are using population size to predict the number of flower shops in a city, you can instead use population size to predict the log of the number of flower shops. This often makes the heteroscedasticity go away.
Redefine the dependent variable: One common way is to use a rate rather than the raw value. In the previous example, you could use population size to predict the number of flower shops per capita instead. This reduces the variability that naturally occurs among larger populations.
Use weighted regression: The third way to fix heteroscedasticity is to use weighted regression. In this method, a weight is assigned to each data point based on the variance of its fitted value; giving small weights to data points with higher variances shrinks their squared residuals. When the proper weights are used, the problem of heteroscedasticity disappears.

Assumption 4: Normality
Explanation
The last assumption is that the residuals of the model should be normally distributed.

How to determine if this assumption is met
There are two common ways to check this assumption:
1. Use Q-Q plots to examine the assumption visually. Also known as the quantile-quantile plot, it is used to determine whether or not the residuals of the regression model follow a normal distribution. The normality assumption holds if the points on the plot roughly form a straight diagonal line. However, when the residuals clearly deviate from a straight diagonal line, as in the second Q-Q plot below, they do not follow a normal distribution.
2. Use formal statistical tests such as Shapiro-Wilk, Kolmogorov-Smirnov, Jarque-Bera, or D'Agostino-Pearson. These tests have a limitation, however: with large sample sizes they become very sensitive and will often conclude that the residuals are not normal even when the departure is minor. Graphical techniques like Q-Q plots are therefore easier to use for checking the normality assumption, and are generally preferable.

What to do if this assumption is violated
If this assumption is violated, you can do two things:
Firstly, check whether outliers are present; if they exist, make sure they are real values and not data errors, and verify that they are not having an outsized impact on the distribution.
Secondly, you can apply a non-linear transformation to the independent and/or dependent variables. Common examples include taking the log, the square root, or the reciprocal of the independent and/or dependent variable.
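In R, most of these checks are one-liners. The sketch below is a minimal example of the diagnostic workflow described above; it fits a model on R's built-in cars dataset purely as a stand-in, and the Durbin-Watson test assumes the lmtest package is installed.

```r
# Stand-in model on a built-in dataset; replace with your own lm() fit
model <- lm(dist ~ speed, data = cars)

plot(model)                                # the four diagnostic plots R produces for a linear model

plot(fitted(model), resid(model))          # fitted values vs residuals: look for a "cone" shape
abline(h = 0, lty = 2)

qqnorm(resid(model)); qqline(resid(model)) # Q-Q plot for the normality assumption
shapiro.test(resid(model))                 # formal normality test (very sensitive for large samples)

# Durbin-Watson test for independence of the residuals (requires the lmtest package)
# install.packages("lmtest")
lmtest::dwtest(model)
```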
How to perform a simple linear regression?
The formula for a simple linear regression is:

Y = B0 + B1X + e

Where Y refers to the predicted value of the dependent variable for any given value of the independent variable X; B0 denotes the intercept, i.e. the predicted value of Y when X is 0; B1 denotes the regression coefficient, i.e. how much we expect Y to change as X increases; X refers to the independent variable, the variable we expect is influencing Y; and e denotes the error of the estimate, i.e. how much variation exists in our estimate of the regression coefficient.

The linear regression model's task is to find the best-fitted line through the data by finding the regression coefficient B1 that minimizes the total error e of the model.

Simple linear regression in R
R is a free statistical programming language that is very powerful and widely used by data professionals. Let us consider a dataset of income and happiness to perform regression analysis. The first task is to load the income.data dataset into the R environment and then generate a linear model describing the relationship between income and happiness, with a command along the lines of income.happiness.lm <- lm(happiness ~ income, data = income.data). Calling summary(income.happiness.lm) then prints the model output, whose Pr(>|t|) column displays the p-value, which tells us how likely we would be to see the estimated effect of income on happiness if the null hypothesis of no effect were true. We can reject the null hypothesis since the p-value is very low (p < 0.001), and we can conclude that income has a statistically significant effect on happiness.

The most important thing in the linear regression output is the p-value. In this example, it is highly significant (p < 0.001), which indicates that this model is a good fit for the observed data.

Presenting the results
When presenting your results, you should include the regression coefficient, the standard error of the estimate, and the p-value. You should also interpret your numbers so that readers can clearly understand the regression coefficient: a significant relationship (p < 0.001) has been found between income and happiness (R2 = 0.71 ± 0.018), with a 0.71-unit increase in reported happiness for every $10,000 increase in income. For a simple linear regression, you can simply plot the observations on the x and y axes of a scatter plot and then include the regression line and regression function.
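Since the income.data file itself is not included here, the following self-contained sketch simulates a dataset with the same column names so that the whole workflow — fitting, summarising, and plotting the regression line — can be run as-is; the simulated numbers are purely illustrative.

```r
# Simulated stand-in for the income.data dataset (columns: income, happiness)
set.seed(1)
income    <- runif(200, 1.5, 7.5)                       # income, e.g. in units of $10,000
happiness <- 0.2 + 0.7 * income + rnorm(200, sd = 0.7)  # hypothetical relationship
income.data <- data.frame(income, happiness)

# Fit the simple linear regression and inspect the coefficients and p-values
income.happiness.lm <- lm(happiness ~ income, data = income.data)
summary(income.happiness.lm)          # look at the Estimate, Std. Error and Pr(>|t|) columns

# Scatter plot of the observations with the fitted regression line
plot(happiness ~ income, data = income.data)
abline(income.happiness.lm, col = "red")
```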
What is multiple regression analysis?
Multiple regression is an extension of simple linear regression and is used to estimate the relationship between two or more independent variables and one dependent variable. You can perform multiple regression analysis to know:
The strength of the relationship between one or more independent variables and one dependent variable. For example, you can use it to understand whether exam performance can be predicted based on revision time, test anxiety, lecture attendance, and gender.
The overall fit (variance explained) of the model and the relative contribution of each of the predictors to the total variance explained. For example, you might want to know how much of the variation in students' exam performance can be explained by revision time, test anxiety, lecture attendance and gender, and the relative impact of each independent variable in explaining that variance.

How to perform multiple linear regression?
The formula for multiple linear regression is:

Y = B0 + B1X1 + … + BnXn + e

Where Y refers to the predicted value of the dependent variable for any given values of the independent variables; B0 denotes the intercept, i.e. the predicted value of Y when all the independent variables are 0; B1X1 denotes the regression coefficient (B1) of the first independent variable (X1), i.e. how much we expect Y to change as X1 increases; "…" does the same for all of the independent variables we want to test; BnXn refers to the regression coefficient of the last independent variable; and e denotes the error of the estimate, i.e. how much variation exists in our estimates of the regression coefficients.

It is the task of the multiple linear regression model to find the best-fitted line through the data by calculating the following three things:
The regression coefficients that lead to the smallest overall error in the model.
The t-statistic of the overall model.
The associated p-value.
The multiple regression model also calculates the t-statistic and p-value for each individual regression coefficient.

Multiple linear regression in R
Let us consider a dataset of heart disease and other factors that affect the functioning of the heart to perform multiple regression analysis. The first task is to load the heart.data dataset into the R environment and then generate a linear model describing the relationship between heart disease and the rates of biking to work and smoking, with a command along the lines of heart.disease.lm <- lm(heart.disease ~ biking + smoking, data = heart.data). Calling summary(heart.disease.lm) prints the model output, whose Pr(>|t|) column displays the p-values, which tell us how likely we would be to see the estimated effects of biking and smoking on heart disease if the null hypothesis of no effect were true. We can reject the null hypothesis since the p-values are very low (p < 0.001), and we can conclude that both biking to work and smoking influence rates of heart disease.

The most important thing in the linear regression output is the p-value. In this example, it is highly significant (p < 0.001), which indicates that this model is a good fit for the observed data.

Presenting the results
When presenting your results, you should include the regression coefficients, the standard errors of the estimates, and the p-values. You should also interpret your numbers in the proper context so that readers can clearly understand the regression coefficients: in our survey of 500 towns, we found significant relationships between the frequency of biking to work and the frequency of heart disease, and between the frequency of smoking and heart disease (p < 0.001 for each). Specifically, we found a 0.2% decrease (± 0.0014) in the frequency of heart disease for every 1% increase in biking, and a 0.178% increase (± 0.0035) in the frequency of heart disease for every 1% increase in smoking.

For multiple linear regression, you can again plot the observations on the X and Y axes of a scatter plot and then include the regression line and regression function. In this example, we have calculated the predicted values of the dependent variable, heart disease, across the observed values for the percentage of people biking to work. However, to include the effect of smoking on heart disease, we had to calculate the predicted values while holding the variable smoking constant at the minimum, mean, and maximum observed smoking rates.
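As with the simple regression, the heart.data file is not included here, so the sketch below simulates a stand-in dataset with the same column names purely to make the multiple-regression commands runnable; the coefficients are hypothetical.

```r
# Simulated stand-in for the heart.data dataset (columns: biking, smoking, heart.disease)
set.seed(2)
biking  <- runif(500, 1, 75)           # % of people in a town biking to work
smoking <- runif(500, 0.5, 30)         # % of people in a town who smoke
heart.disease <- 15 - 0.2 * biking + 0.18 * smoking + rnorm(500, sd = 0.6)
heart.data <- data.frame(biking, smoking, heart.disease)

# Multiple linear regression with two independent variables
heart.disease.lm <- lm(heart.disease ~ biking + smoking, data = heart.data)
summary(heart.disease.lm)   # one estimate, t-statistic and Pr(>|t|) per predictor
```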
What is R-squared in Regression Analysis?
In data science, R-squared (R2) is the coefficient of determination, or the coefficient of multiple determination in the case of multiple regression. In a linear regression model, R-squared acts as an evaluation metric that captures the scatter of the data points around the fitted regression line: it is the percentage of the variation in the dependent variable that the model explains.

R-squared and the Goodness-of-fit
R-squared is the proportion of variance in the dependent variable that the independent variables can explain. The value of R-squared lies between 0 and 100%:
0% corresponds to a model that explains none of the variability of the response data around its mean; in this case the mean of the dependent variable predicts the dependent variable just as well as the regression model does.
On the other hand, 100% corresponds to a model that explains all of the variability of the response variable around its mean.
If your value of R2 is large, you have a better chance of your regression model fitting the observations.

Although this statistical measure provides essential insights about the regression model, you should not depend on it for a complete assessment of the model. It carries no information about the nature of the relationship between the dependent and independent variables, and it does not, by itself, establish the quality of the regression model. Hence, as a user, you should always analyze R2 together with other measures and only then draw conclusions about the regression model.

Visual Representation of R-squared
You can demonstrate R-squared visually by plotting fitted values against observed values; such plots illustrate how R-squared values represent the scatter around the regression line. In the pictures above, the value of R-squared for the regression model on the left side is 17%, and for the model on the right it is 83%. When the proportion of explained variance is high, the data points fall closer to the fitted regression line. However, a regression model with an R2 of 100% is an ideal scenario that does not occur in practice; in such a case, the predicted values would equal the observed values and all the data points would fall exactly on the regression line.

Interpretation of R-squared
The simplest interpretation of R-squared is how well the regression model fits the observed data values. Let us look at an example to understand this. Consider a model where the R2 value is 70%. This means that the model explains 70% of the variation in the observed data. Usually, a high R2 value suggests a better fit for the model. However, the quality of the model does not depend only on R2; it can also depend on several other factors, such as the nature of the variables, the units in which the variables are measured, and so on. So a high R-squared value is not always good for a regression model, and can even indicate problems. Similarly, a low R-squared value is generally a negative indicator for a model, but, considering the other factors, a model with a low R2 value can still be a good predictive model.

Calculation of R-squared
R-squared can be evaluated using the following formula:

R2 = SSregression / SStotal

Where SSregression is the explained sum of squares due to the regression model and SStotal is the total sum of squares. The sum of squares due to regression assesses how well the model represents the fitted data, and the total sum of squares measures the variability in the data used in the regression model.

Now let us come back to the earlier situation where we have two factors, the number of hours of study per day and the score in a particular exam, to understand the calculation of R-squared more effectively. Here, the target variable is the score and the independent variable is the number of study hours per day. In this case, we need a simple linear regression model, and the equation of the model will be as follows:

ŷ = w1x1 + b

The parameters w1 and b can be calculated by minimizing the squared error over all the data points. The following expression is called the least squares function:

minimize ∑ (yi − w1x1i − b)²

Now, R-squared calculates the amount of variance of the target variable explained by the model, i.e. by the function of the independent variable. To achieve that, we need to calculate two things:
The variance of the target variable around its mean: var(avg) = ∑ (yi − Ӯ)²
The variance of the target variable around the best-fit line: var(model) = ∑ (yi − ŷ)²
Finally, we can calculate R-squared as follows:

R2 = 1 − [var(model) / var(avg)] = 1 − [∑ (yi − ŷ)² / ∑ (yi − Ӯ)²]
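The calculation above is easy to verify in R. The following sketch computes R-squared by hand on R's built-in cars dataset (used only as a stand-in) and compares it with the value reported by lm().

```r
model <- lm(dist ~ speed, data = cars)   # stand-in simple linear regression

y     <- cars$dist
y.hat <- fitted(model)

var.model <- sum((y - y.hat)^2)     # variance of the target around the best-fit line
var.avg   <- sum((y - mean(y))^2)   # variance of the target around its mean

r.squared <- 1 - var.model / var.avg
r.squared
summary(model)$r.squared            # should match the hand computation
```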
What are the different types of regression analysis?
Other than simple linear regression and multiple linear regression, there are mainly five types of regression techniques. Let us discuss them one by one.

Polynomial Regression
In polynomial regression, the power of the independent variable is greater than 1, as in the polynomial equation below:

y = a + bx²

In this regression technique, the best-fitted line is a curve, rather than a straight line, that fits into the data points. An important point to keep in mind while performing polynomial regression is that if you try to fit a polynomial of higher and higher degree to get a lower error, it might result in overfitting. You should always plot the relationship to see the fit, and make sure that the curve matches the nature of the problem.

Logistic Regression
The logistic regression technique is used when the dependent variable is discrete in nature, for example 0 or 1, true or false, and so on. The target variable in this regression can have only two values, and the relation between the target variable and the independent variables is denoted by a sigmoid curve. The logit function is used to measure the relationship between the target variable and the independent variables. The expression below shows a logistic equation:

logit(p) = ln(p/(1−p)) = b0 + b1X1 + b2X2 + b3X3 + … + bkXk

Where p denotes the probability of occurrence of the feature.

Ridge Regression
The ridge regression technique is usually used when there is a high correlation between the independent variables. With multicollinear data, the least squares estimates remain unbiased, but their variances become very large; ridge regression therefore deliberately introduces a small amount of bias, through a bias (penalty) matrix added to the equation, to make the estimates more stable. This regression method is quite powerful, and the resulting model is less susceptible to overfitting. The expression below shows the ridge regression estimate:

β = (XᵀX + λI)⁻¹Xᵀy

The lambda (λ) in the equation addresses the issue of multicollinearity.

Lasso Regression
Lasso regression is one of the types of regression in machine learning that performs both regularization and feature selection. It restricts the absolute size of the regression coefficients, which pushes the coefficient values towards zero. The feature selection behaviour of lasso regression means that only a subset of the features in the dataset is used to build the model: only the required features keep non-zero coefficients, while the others are set exactly to zero. This helps to avoid overfitting in the model. If the independent variables are highly collinear, this technique keeps only one of them and shrinks the others to zero. The lasso objective can be written as:

minimize (1/N) ∑ (yi − β0 − xi·β)² + λ ∑ |βj|

Bayesian Regression
In the Bayesian regression method, Bayes' theorem is used to determine the values of the regression coefficients. In this linear regression technique, the posterior distribution of the coefficients is evaluated rather than finding point estimates by least squares. Bayesian linear regression combines ideas from linear regression and ridge regression, and is more stable than simple linear regression.
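For concreteness, here are minimal R sketches of two of the variants above on simulated data: a polynomial fit and a logistic fit. Ridge and lasso are most commonly fitted with the add-on glmnet package, which is indicated in a comment rather than run here; all data and coefficients are made up for illustration.

```r
set.seed(3)
x <- runif(100, -3, 3)

# Polynomial regression: the fitted line is a curve rather than a straight line
y.poly <- 1 + 2 * x^2 + rnorm(100)
poly.fit <- lm(y.poly ~ poly(x, 2, raw = TRUE))
summary(poly.fit)

# Logistic regression: a binary target modelled through the logit link
p <- 1 / (1 + exp(-(0.5 + 1.2 * x)))     # true sigmoid relationship
y.bin <- rbinom(100, 1, p)
logit.fit <- glm(y.bin ~ x, family = binomial)
summary(logit.fit)

# Ridge and lasso (not run): with the glmnet package, alpha selects the penalty
# glmnet::glmnet(X, y, alpha = 0)   # ridge
# glmnet::glmnet(X, y, alpha = 1)   # lasso
```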
What are the terminologies used in Regression Analysis?
When trying to understand the outcome of a regression analysis, it is important to understand the key terminologies used to present the information. A list of common regression analysis terms is given below:

Estimator: An estimator is an algorithm for generating estimates of parameters from the relevant dataset.
Bias: An estimate is unbiased when its expectation equals the value of the parameter being estimated; if its expectation differs from the true parameter value, the estimate is biased.
Consistency: An estimator is consistent if the estimates it produces converge on the value of the true parameter as the sample size increases without limit. For example, for an estimator that produces estimates θ̂ of a parameter θ, consistency means that for any small number ε, the probability that θ̂ lies within ε of θ can be made as close to 1.0 as we like by drawing a sufficiently large sample.
Efficiency: An estimator "A" is said to be more efficient than an estimator "B" when "A" has the smaller sampling variance, i.e. when the values of "A" are more tightly clustered around their expectation.
Standard error of the regression (SER): An estimate of the standard deviation of the error term in a regression model.
Standard error of a regression coefficient: An estimate of the standard deviation of the sampling distribution of a particular coefficient.
P-value: The probability, when the null hypothesis is true, of drawing sample data that are as adverse to the null as the data actually drawn, or more so. When the p-value is small, there are two possibilities: either a low-probability, unrepresentative sample was drawn, or the null hypothesis is false.
Significance level: For a hypothesis test, the significance level is the smallest p-value for which the null hypothesis is not rejected. If the significance level is 1%, the null is rejected if and only if the p-value for the test is less than 0.01. The significance level can also be defined as the probability of making a type 1 error, i.e. rejecting a true null hypothesis.
Multicollinearity: A situation where there is a high degree of correlation among the independent variables in a regression model; in other words, where some of the X variables are close to being linear combinations of other X variables. Multicollinearity leads to large standard errors and prevents the regression model from producing precise parameter estimates. The problem mainly matters when estimating causal influences.
T-test: A common test of the null hypothesis that a particular regression parameter βi has some specific value.
F-test: A method for jointly testing a set of linear restrictions on a regression model.
Omitted variable bias: A bias in the estimated regression parameters that generally occurs when a relevant independent variable is omitted from the model and the omitted variable is correlated with one or more of the included variables.
Log variables: A transformation that allows a non-linear relationship to be estimated with the OLS method, by exchanging the natural log of a variable for the level of that variable.
This transformation can be applied to the dependent variable and/or one or more of the independent variables.
Quadratic terms: Another common transformation, in which both xi and xi² are included as regressors. The estimated effect of xi on y is then found by taking the derivative of the regression equation with respect to xi.
Interaction terms: Pairwise products of the "original" independent variables. Interaction terms allow for the possibility that the degree to which xi affects y depends on the value of some other variable xj. For example, the effect of experience (xi) on wages might depend on the gender (xj) of the worker.

What are the tips to avoid common problems working with regression analysis?
Regression is a very powerful statistical analysis that offers high flexibility, but it also presents a variety of potential pitfalls. Let us see some tips for overcoming the most common problems while working with regression analysis.

Tip 1: Research Before Starting
Before you start working with regression analysis, review the literature to understand the relevant variables, the relationships between them, and the expected coefficient signs and effect magnitudes. This will help you collect the correct data and allow you to specify the best regression equation.

Tip 2: Always Prefer Simple Models
Start with a simple model and make it more complicated only when needed. When you have several models with similar predictive ability, prefer the simplest one, as it is more likely to be the best model. Another significant benefit of simpler models is that they are easier to understand and explain to others.

Tip 3: Correlation Does Not Imply Causation
Always remember that correlation does not imply causation; causation is a completely different thing from correlation. In general, to establish causation you need to perform a designed experiment with randomization. If you are using regression analysis on data that did not come from such an experiment (i.e. observational data), causal conclusions remain uncertain.

Tip 4: Include Graphs, Confidence, and Prediction Intervals in the Results
The presentation of your results can influence the way people interpret them. For instance, confidence intervals and statistical significance provide consistent information. According to one study, statistical reports that refer only to statistical significance bring about correct interpretations just 40% of the time; when the results also include confidence intervals, the percentage rises to 95%.

Tip 5: Check the Residual Plots
Residual plots are the quickest and easiest way to spot problems in a regression model and to decide on adjustments. For instance, residual plots display patterns when you have not modelled curvature that is present in your data.

Regression Analysis and The Real World
Let us summarize what we have covered in this article so far:
Regression analysis and its importance.
The difference between regression and classification.
The regression line and regression equation.
How companies use regression analysis.
When to use regression analysis.
Assumptions in regression analysis.
Simple and multiple linear regression.
R-squared: representation, interpretation, calculation.
Types of regression.
Terminologies used in regression.
How to avoid problems in regression.
Regression analysis is an interesting machine learning technique utilized extensively by enterprises to transform data into useful information.
It continues to be a significant asset to many leading sectors, including finance, education, banking, retail, medicine, and media.

Top Job Roles With Their Salary Data in the World of Data Science for 2020–2022

Data Science requires the expertise of professionals who possess the skill of collecting, structuring, storing, handling and analyzing data, allowing individuals and organizations to make decisions based on insights generated from the data. Data science is woven into the fabric of our daily lives in myriad ways that we may not even be aware of; starting from the online purchases we make, our social media feeds, the music we listen to or even the movie recommendations that we are shown online.  For several years in a row, the job of a data scientist has been hailed as the “hottest job of the 21st century”. Data scientists are among the highest paid resources in the IT industry. According to Glassdoor, the average data scientist’s salary is $113,436. With the growth of data, the demand for data science job roles in companies has been rising at an accelerated pace.  How Data Science is a powerful career choice The landscape of a data science job is promising and full of opportunities spanning different industries. The nature of the job allows an individual to take on flexible remote jobs and also to be self-employed.  The field of data science has grown exponentially in a very short time, as companies have come to realize the importance of gathering huge volumes of data from websites, devices, social media platforms and other sources, and using them for business benefits. Once the data is made available, data scientists use their analytical skills, evaluate data and extract valuable information that allows organizations to enhance their innovations. A data scientist is responsible for collecting, cleansing, modifying and analyzing data into meaningful insights. In the first phase of their career, a data scientist generally works as a statistician or data analyst. Over many years of experience, they evolve to be data scientists.  The ambit of data has been increasing rapidly which has urged companies to actively recruit data scientists to harness and leverage insights from the huge quantities of valuable data available, enabling efficiency in processes and operations and driving sales and growth.  In the future, data may also emerge as the turning point of the world economy. So, pursuing a career in data science would be very useful for a computer enthusiast, not only because it pays well but also since it is the new trend in IT. According to the Bureau of Labor Statistics (BLS), jobs for computer and information research scientists, as well as data scientists are expected to grow by 15 percent by the year 2028. Who is a Data Scientist & What Do They Do? Data Scientists are people with integral analytical data expertise together with complex problem-solving skills, besides the curiosity to explore a wide range of emerging issues.  They are considered to be the best of both the sectors – IT and business, which makes them extremely skilled individuals whose job roles straddle the worlds of computer science, statistics, and trend analysis. Because of this surging demand for data identification and analysis in various tech fields like AI, Machine Learning, and Data Science, the salary of a data scientist is one of the highest in the world. Requisite skills for a data scientist Before we see the different types of jobs in the data analytics field, we must be aware of the prerequisite skills that make up the foundation of a data scientist: Understanding of data – As the name suggests, Data Science is all about data. 
You need to understand the language of data and the most important question you must ask yourself is whether you love working with data and crunching numbers. And if your answer is “yes”, then you’re on the right track. Understanding of algorithms or logic – Algorithms are a set of instructions that are given to a computer to perform a particular task. All Machine Learning models are based on algorithms, so it is quite an essential prerequisite for a would-be data scientist to understand the logic behind it.  Understanding of programming – To be an expert in data science, you do not need to be an expert coder. However, you should have the foundational programming knowledge which includes variables, constants, data types, conditional statements, IO functions, client/server, Database, API, hosting, etc. If you feel comfortable working with these and you have your coding skills sorted, then you’re good to go. Understanding of Statistics – Statistics is one of the most significant areas in the field of Data Science. You should be well aware of terminologies such as mean, median, mode, standard deviation, distribution, probability, Bayes’ theorem, and different Statistical tests like hypothesis testing, chi-square, ANOVA, etc. Understanding of Business domain – If you do not have an in-depth working knowledge of the business domain, it will not really prove to be an obstacle in your journey of being a data scientist. However, if you have the primitive understanding of the specific business area you are working for, it will be an added advantage that can take you ahead. Apart from all the above factors, you need to have good communication skills which will help the entire team to get on the same page and work well together.Data Science Job Roles  Data science experts are in demand in almost every job sector, and are not confined to the IT industry alone.  Let us look at some major job roles, associated responsibilities , and the salary range: 1. Data ScientistsA Data Scientist’s job is as exciting as it is rewarding. With the help of Machine Learning, they handle raw data and analyze it with various algorithms such as regression, clustering, classification, and so on. They are able to arrive at insights that are essential for predicting and addressing complex business problems.  Responsibilities of Data Scientists The responsibilities of Data Scientists are outlined below: Collecting huge amounts of organized and unorganized data and converting them into useful insights. Using analytical skills like text analytics, machine learning, and deep learning to identify potential solutions which will help in the growth of organizations. Following a data-driven approach to solve complex problems.  Enhancing data accuracy and efficiency by cleansing and validating data. Using data visualization to communicate significant observations to the organization’s stakeholders. Data Scientists Salary Range According to Glassdoor, the average Data Scientist salary is $113,436 per annum. The median salary of an entry-level professional can be around $95,000 per annum. However, early level data scientists with 1 to 4 years' experience can get around $128,750 per annum while the median salary for those with more experience ranging around 5 to 9 years  can rise to an average of $165,000 per annum. 2. Data Engineers  A Data Engineer is the one who is responsible for building a specific software infrastructure for data scientists to work. 
They need to have in-depth knowledge of technologies like Hadoop and Big Data such as MapReduce, Hive, and SQL. Half of the work of Data Engineers is Data Wrangling and it is advantageous if they have a software engineering background. Responsibilities of Data Engineers  The responsibilities of Data Engineers are described below: Collecting data from different sources and then consolidating and cleansing it. Developing essential software for extracting, transforming, and loading data using SQL, AWS, and Big Data. Building data pipelines using machine learning algorithms and statistical techniques. Developing innovative ways to enhance data efficiency and quality. Developing, testing and maintaining data architecture. Required Skills for Data Engineers  There are certain skill sets that data engineers need to have: Strong skills in analytics to manage and work with massive unorganized datasets. Powerful programming skills in trending languages like Python, Java, C++, Ruby, etc. Strong knowledge of database software like SQL and experience in relational databases. Managerial and organizational skills along with fluency in various databases.  Data Engineers’ Salary Range According to Glassdoor, the average salary of a Data Engineer is $102,864 in the USA. Reputed companies like Amazon, Airbnb, Spotify, Netflix, IBM value and pay high salaries to data engineers. Entry-level data and mid-range data engineers get an average salary between $110,000 and $137,770 per annum. However, with experience, a data engineer can get up to $155,000 in a year. 3. Data Analyst As the name suggests, the job of a Data Analyst is to analyze data. A data analyst collects, processes, and executes statistical data analyses which help business users to develop meaningful insights. This process requires creating systems using programming languages like Python, R or SAS. Companies ranging from IT, healthcare, automobile, finance, insurance employ Data Analysts to run their businesses efficiently.  Responsibilities of Data Analysts  The responsibilities of Data Analysts are described below: Identifying correlations and gathering valuable patterns through data mining and analyzing data. Working with customer-centric algorithms and modifying them to suit individual customer demands. Solving certain business problems by mapping data from numerous sources and tracing them. Creating customized models for customer-centric market strategies, customer tastes, and preferences. Conducting consumer data research and analytics by deploying statistical analysis. Data Analyst Salary Range According to Glassdoor, the national average salary of a Data Analyst is $62,453 in the United States. The salaries of an entry-level data analyst start at  $34,5000 per year or $2875 per month.  Glassdoor states that a junior data analyst earns around $70,000 per year and experienced senior data analysts can expect to be paid around $107,000 per year which is roughly $8916 per month. Key Reasons to Become a Data Scientist Becoming a Data Scientist is a dream for many data enthusiasts. There are some basic reasons for this: 1. Highly in-demand field The job of Data Science is hailed as one of the most sought after jobs for 2020 and according to an estimate, it is predicted that this field would generate around 11.5 million jobs by the year 2026. The demand for expertise in data science is increasing while the supply is too low.  This shortage of qualified data scientists has escalated their demand in the market. 
A survey by the MIT Sloan Management Review indicates that 43 percent of companies report that a major challenge to their growth has been a lack of data analytics skills.

2. Highly Paid & Diverse Roles
Since data analytics forms a central part of decision-making, companies are willing to hire larger numbers of data scientists who can help them make the right decisions to boost business growth. Since it is a less saturated area with a mid-level supply of talent, various opportunities have emerged that require diverse skill sets. According to Glassdoor, in the year 2016 data science was the highest-paid field across industries.

3. Evolving workplace environments
With the arrival of technologies like Artificial Intelligence and Robotics, which fall under the umbrella of data science, a vast majority of manual tasks have been replaced with automation. Machine Learning has made it possible to train machines to perform repetitive tasks, freeing up humans to focus on critical problems that need their attention. Many new and exciting technologies have emerged within this field, such as Blockchain, Edge Computing, Serverless Computing, and others.

4. Improving product standards
The rigorous use of Machine Learning algorithms such as decision trees, random forests, neural networks and naive Bayes for regression, classification and recommendation problems has boosted the customer experience that companies want to deliver. One of the best examples of such development is e-commerce sites that use intelligent recommendation systems to suggest products and provide customer-centric insights based on past purchases. Data scientists serve as trusted advisers to such companies by identifying the preferred target audience and shaping marketing strategies.

5. Helping the world
In today's world, almost everything revolves around data. Data scientists extract hidden information from massive lumps of data, which helps in decision making across industries ranging from finance and healthcare to manufacturing, pharma, and engineering. Organizations are equipped with data-driven insights that boost productivity and enhance growth, even as they optimize resources and mitigate potential risks. Data science catalyzes innovation and research, bringing positive changes across the world we live in.

Factors Affecting a Data Scientist's Salary
The salary of a data scientist can depend upon several factors. Let us study them one by one and understand their significance.

Data Scientist Salary by Location
The number of job opportunities and the national data scientist salary is the highest in Switzerland in the year 2020, followed by the Netherlands and the United Kingdom. However, since Silicon Valley in the United States is the hub of new technological innovations, it is considered to generate the most startup jobs in the world, followed by Bangalore in India. A data scientist's salary in Silicon Valley or Bangalore is likely to be higher than in other regions. Below are the highest paying countries for data scientist roles along with their average annual data science salary:

Switzerland – $115,475
Netherlands – $68,880
Germany – $64,024
United Kingdom – $59,781
Spain – $30,050
Italy – $37,785

Data Scientist Salary by Experience
A career in the field of data science is very appealing to young IT professionals. Starting salaries are very lucrative, and salaries grow incrementally with experience.
The salary of a data scientist depends on expertise as well as years of experience:

Entry-level data scientist salary – The median entry-level salary for a data scientist is around $95,000 per year, which is quite high.
Mid-level data scientist salary – The median salary for a mid-level data scientist with around 1 to 4 years of experience is $128,750 per year. If the data scientist is in a managerial position, the average salary rises up to $185,000 per year.
Experienced data scientist salary – The median salary for an experienced data scientist with around 5 to 9 years of experience is $128,750 per year, whereas the median salary of an experienced manager is much higher, at around $250,000 per year.

Data Scientist Salary by Skills
There are some core competencies that will help you shine in your career as a data scientist, and if you want an edge over your peers you should consider polishing up these skills:
Python is the most crucial and coveted skill that data scientists must be familiar with, followed by R. The average salary in the US for Python programmers is $120,365 per annum.
If you are well versed in both Data Science and Big Data, instead of just one of them, your salary is likely to increase by at least 25 percent.
Users of innovative technology like the Statistical Analytical System get a salary of around $77,842, while users of analysis software like SPSS have a pay scale of around $61,452 per year.
Machine Learning Engineers on average earn around $111,855 per year. However, with more experience in Machine Learning along with knowledge of Python, you can earn around $146,085 per annum.
A data scientist with domain knowledge of Artificial Intelligence can earn an annual salary between $100,000 and $150,000.
Extra skills in programming and innovative technologies have always been a value-add that can enhance your employability. Pick skills that are in demand to see your career graph soar.

Data Scientist Salary by Companies
Some of the highest paying companies in the field of Data Science are tech giants like Facebook, Amazon, and Apple, and companies like McGuireWoods, Netflix and Airbnb. Below is a list of top companies with the highest paying salaries:

McGuireWoods – $165,114
Amazon – $164,114
Airbnb – $154,879
Netflix – $147,617
Apple – $144,490
Twitter – $144,341
Walmart – $144,198
Facebook – $143,189
eBay – $143,005

Salaries of Other Related Roles
Various other job roles associated with Data Science are equally exciting and rewarding. Let us look at some of them and their salaries:

Machine Learning Engineer – $114,826
Machine Learning Scientist – $114,121
Applications Architect – $113,757
Enterprise Architect – $110,663
Data Architect – $108,278
Infrastructure Architect – $107,309
Business Intelligence Developer – $81,514
Statistician – $76,884

Conclusion
Let us look at what we have learned in this article so far:
What is Data Science?
The job of a Data Scientist
Prerequisite skills for a Data Scientist
Different job roles
Key reasons for becoming a Data Scientist
Salary depending upon different factors
Salaries of other related roles
The field of Data Science is ripe with opportunities for Data Scientists, Data Engineers, and Data Analysts. The figures mentioned in this article are not set in stone and may vary depending upon the skills you possess, the experience you have and various other factors. With more experience and skills, your salary is bound to increase by a certain percentage every year.
Data science is a field that will revolutionize the world in the coming years, and with the right educational qualifications, skills, experience and training, you can claim a share of this very lucrative pie.
How To Become A Data Analyst In 2022?

In 2022, Data Analysis has become one of the core functions in any organization. It is a highly sought-after role that has evolved immensely in the past few years. But what is Data Analysis? What do Data Analysts do? How does one become a Data Analyst in 2022? What skills does a Data Analyst need? These are the questions that come to mind when we talk about this profession. Let's walk through the answers so that we have a clear picture.

What is Data Analytics?
Data Analysis is the process of examining information collected from different sources, against specific goals, so that organizations can make informed decisions. It is not only used for research; it also helps organizations learn more about their customers, develop marketing strategies and optimize product development, to name just a few areas where it makes an impact.

To be precise, there are four types of Data Analytics:

Descriptive Analytics: Analysts examine past data such as monthly sales, monthly revenue or website traffic to identify trends, and then summarize the performance of the firm or website. This type of analytics uses arithmetic operations and statistical summaries such as mean, median, maximum and percentages (a minimal sketch follows this list).

Diagnostic Analytics: As the name suggests, here we diagnose the data to find the reasons behind a particular trend, issue or scenario. If a company is faced with negative results, this type of analysis helps identify the main causes of the decline in performance, so that corrective decisions and actions can be taken.

Predictive Analytics: This type of analytics predicts future outcomes by analyzing past data and trends, helping companies take proactive action for better results. Predictive analysis also helps forecast sales, demand, fraud and failures, so that budgets and other resources can be planned accordingly.

Prescriptive Analytics: This type of analytics determines what action the company should take next in response to a situation, to keep the business going and growing.
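As a minimal illustration of the descriptive analytics described above, the following Python sketch summarizes a small, made-up table of monthly sales. The figures and column names are assumptions for the example, not data from any real report.

```python
# A minimal descriptive-analytics sketch: summarizing hypothetical monthly sales.
import pandas as pd

# Hypothetical monthly figures; in practice these would come from a database or CSV.
sales = pd.DataFrame({
    "month":   ["Jan", "Feb", "Mar", "Apr", "May", "Jun"],
    "revenue": [120_000, 135_000, 128_000, 150_000, 162_000, 158_000],
})

# Basic statistical summaries: mean, median and maximum revenue.
print("Mean revenue:  ", sales["revenue"].mean())
print("Median revenue:", sales["revenue"].median())
print("Max revenue:   ", sales["revenue"].max())

# Month-over-month percentage change, a simple way to describe the trend.
sales["pct_change"] = sales["revenue"].pct_change().mul(100).round(1)
print(sales)
```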
Why do we need Data Analysts?
Organizations across different fields and sectors rely on data analysis to take important decisions: to develop a new product, forecast sales for the near future, or identify new markets to enter and new customers to target. Data analysis is also used to evaluate business performance based on current data and to uncover inefficiencies in the organization. It is not only industries and businesses that use data analysis; political parties and other groups also use it to identify opportunities as well as challenges.

What does a data analyst do?
An analyst performs several functions, some of which depend on the type of business and organization. Generally, a data analyst carries out the following responsibilities (see the sketch after this list):

Collecting data from primary and secondary sources and arranging it in a proper sequence.
Cleaning and processing the data as required; a data analyst may need to treat missing values, correct invalid or wrong data and remove unwanted information.
Using statistical tools like R, Python, SPSS or SAS to interpret the collected data.
Adjusting the data for upcoming trends or changes, such as seasonal trends, and then making interpretations.
Preparing a data analysis report.
Identifying opportunities and threats from the analyzed data and reporting them to the organization.
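As a minimal, illustrative sketch of the cleaning work described in the list above, the snippet below removes duplicates, treats invalid values as missing and fills the gaps on a small made-up dataset. The column names, the use of -1 as an invalid marker and the median-fill rule are all assumptions made for the example.

```python
# A minimal data-cleaning sketch: handling missing and invalid values with pandas.
import pandas as pd
import numpy as np

# Hypothetical raw records, as a data analyst might receive them.
raw = pd.DataFrame({
    "customer_id":   [101, 102, 102, 103, 104],
    "age":           [34, -1, -1, np.nan, 29],        # -1 and NaN are invalid/missing
    "monthly_spend": [250.0, 310.5, 310.5, np.nan, 120.0],
})

clean = (
    raw.drop_duplicates()                                        # remove repeated rows
       .assign(age=lambda df: df["age"].where(df["age"] > 0))    # treat -1 as missing
)

# Fill missing values with simple, explainable defaults (the median here).
clean["age"] = clean["age"].fillna(clean["age"].median())
clean["monthly_spend"] = clean["monthly_spend"].fillna(clean["monthly_spend"].median())

print(clean)
```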
Now that you know what areas a Data Analyst works on, let us move to the skills and knowledge you would need to get started in this field.

What are the skills necessary to be a Data Analyst?
Broadly, a data analyst needs two types of skills:

Technical skills: knowledge of languages and tools like R, SQL, Microsoft Excel and Tableau, along with mathematical, statistical and data visualization skills. These skills help an analyst actually work with the data and present the final outcome in a form that is useful to the firm, such as tables, graphs and charts.
Decision making: this is essential for presenting the outcome and walking executives through changes, trends, demand and downturns. Deep analysis is required to reach logical, factual and beneficial decisions for the firm. Data analysts must be able to think strategically and take a 360-degree view of the situation before suggesting the way forward.

After acquiring these skills, it is essential to keep up with the latest trends in the industry, so a mindset of continuous learning is a must.

How to become a data analyst in 2022?
The COVID-19 pandemic changed the definitions of business and its processes, putting companies across the world in a tailspin and forcing them to rethink their strategies to cope with the evolving challenges it threw up. Some companies that were market leaders in their domain were unable to cope, and many even had to close down. The question therefore arises: in such an uncertain scenario, with challenges around every corner, is it even prudent to consider stepping into the role of a Data Analyst at this juncture? The answer is "YES". This is the best time to be a data analyst, because organizations everywhere are looking for expert analysts who can guide them in making the right decisions and help the organization survive through the pandemic and beyond. Data analysts can perform detailed sales forecasting or carry out a complete market analysis to make the right predictions for future growth, and companies need smart sales and marketing strategies to survive and thrive in the long run.

If you want to shape your career in data analytics:

You must have a degree in Mathematics, Economics, Engineering, Statistics or another field that emphasizes statistical and analytical skills.
You must know some of the data analytics tools and skills mentioned above, like R, SQL, Tableau, Data Warehousing, Data Visualization, Data Mining and advanced Microsoft Excel.
You should consider good certifications in the above-mentioned skills.
You may also consider a master's degree in the field of data analytics.

Let us now take you through the scope of Data Analysis in 2022.

What is the scope of data analytics in 2022?
The world is witnessing a surge in demand for data analytics services. According to a report, there are expected to be 250,000 new openings in the Data Analytics field in 2022, almost 60% higher than the demand in 2019-20. To stay ahead of the competition, organizations are employing Data Analysts, and the demand for experts in the field is only set to rise. According to another report published in 2019, there were 150,000 vacant jobs in the Data Analytics sector because of a lack of available talent. This is a lucrative field, and professionals with expertise and experience can climb to the top in a short time. A report by IBM predicts that by 2022, Data Science and Analytics jobs will grow to nearly 350,000.

What are the sectors in which Data Science jobs are expected to grow in India in 2022?
Though the need for data analytics is growing across every sector, a few sectors are more in demand than others. These include:

Aviation: uses data analysis for pricing and route optimization.
Agriculture: analyzes data to forecast output and pricing.
Cyber security: global companies are adopting data engineering and data analysis for anomaly and intrusion detection.
Genomics: data analytics is used to study genome sequences and is heavily used to diagnose abnormalities and identify diseases.

Conclusion
If you would like to enter the field of Data Analytics, there's no time like now! Data is useless without the right professional to analyze it. Leading companies leverage the power of analytics to improve their decision making and fuel business growth, and they are always looking to employ bright and talented professionals with the capabilities they need. Opportunities are plentiful and the rewards are immense, so take the first step and start honing the skills that can help you fulfil your dream!