
A Guide to Using AI Responsibly

Published: 05 Sep 2023 | Read time: 10 mins

    "The artificial intelligence (AI) that we develop is impacting people directly or indirectly. And often, they don’t deserve the consequences of those impacts, nor have they asked for it. It is incumbent upon us to make sure that we are doing the right thing”.

    - Dr. Anthony Franklin, Senior Data Scientist and AI Engineer, Microsoft


    Digitally addressing a live global audience in a recent webinar on the topic of ‘Responsible AI’, Dr. Anthony Franklin, a senior data science expert and AI evangelist from Microsoft, spoke about the responsibilities that come with building intelligent systems that touch people’s lives.

    Drawing on his experience in machine learning, risk analytics, analytics model management in government, and data warehousing, Dr. Franklin shed light on the critical need to incorporate ethics into developing AI. Citing incidents from around the world, he emphasized why an uncompromising approach to using AI responsibly is essential. He talked about human (over)indulgence in technology, the challenges that society faces from ever-evolving AI, and how the inherent biased nature of humans is reflected in the technology we build.

    The purpose of the talk and this article is to help frame the debate on responsible AI with a set of principles we can anchor on, and a set of actions we can all take to advance the promise of AI in ways that don’t cause harm to people. In this article, we present key insights from the webinar along with the video for you to follow along.

    KnowledgeHut webinar on Responsible AI by Dr. Anthony Franklin, Microsoft

    What is the debate about?

    These are times when we can expect to see police officers on the streets wearing AI-powered glasses, viewing and profiling the public. Military organizations today can keep an eye on entire populations. And a simple exercise of googling the word “CEO” returns page after page of white men.

    Police using AI glasses for public surveillance in China
    These are just some examples of the unparalleled success we have achieved in technology, coupled with the fact that the same technology has overlooked basic ethical, moral, and social considerations.

    Responsible AI is a critical global need

    In a recent study spanning the ten most technologically advanced nations, nearly nine out of ten organizations reported encountering ethical issues resulting from the use of AI.


    Artificial intelligence has captured our imagination and made many things we would have thought impossible only a few years ago seem commonplace today. But AI has also raised some challenging issues for society writ large. We are in a race to advance AI capabilities, and everything is about collecting data. But what is being done with that data?

    Advancements in AI are different from other technologies because of the pace of innovation and its proximity to human intelligence – impacting us at a personal and societal level.

    While there is no end in sight to this road of development, the need to put an equally powerful framework of safeguards in place has only grown. Responsible AI is a critical global need.

    What developers are saying about ethics in AI

    Stack Overflow carried out a couple of anonymous, developer-focused surveys in 2018 that included questions about ethics. Some of the responses are a clear indication of where developers stand. While we might wish every developer had flatly refused to write unethical code, the actual answers are not too surprising.

    1. What would the developers do if asked to write a code for an unethical purpose?

    The majority (58.5 percent) stated they would clearly decline if they were approached to write code for an unethical purpose. Over a third (37 percent), however, said they would, if it met certain criteria of theirs.

    2. Who is ultimately responsible for the code which accomplishes something unethical?

    When asked with whom the ultimate responsibility lies if their code were used to accomplish something unethical, nearly one fifth of the developers acknowledged that such responsibility should lie with the developer who wrote the code. 23 percent stated that accountability should lie with the person who came up with the idea. The majority (60 percent), however, felt that senior management should be responsible.

    3. Do the developers have an obligation to consider the ethical implications?

    A significant majority (80 percent) acknowledged that developers have the obligation to consider ethical implications. 

    Though the numbers are small, the survey shows both the willingness of some developers to get involved in unethical work and a tendency to deflect accountability. Thus, there is a great and growing need, not just for developers but for the rest of us, to work collectively to change these numbers.

    The six basic principles of AI

    Though the ethics of AI can seem abstract, the principles attached to it remain very much tangible. The following are the six basic principles:


    1. Fairness

    Fairness (noun)
    the state, condition, or quality of being fair, or free from bias or injustice; evenhandedness

    Discrimination

    One of the many services Amazon provides today is same-day shipping. The map below shows the reach of the service in six major metropolitan areas in the US.


    Source: Bloomberg   

    In the city of Boston, one can see the gaps: places where the service is not provided. These areas turned out to be inhabited largely by individuals from the lower economic strata. In its defence, Amazon stated that the service was rolled out primarily in regions with a denser concentration of Amazon users.

    Whichever way this is seen, the approach still ends up being discriminatory.

    We see examples of bias in search as well. When we search for “CEO” in Bing, the image results are almost entirely pictures of white men, creating the impression that there are no women CEOs.


    Racism

    We see examples of bias across different applications of AI. An image of an Asian American was submitted for a passport renewal. After analysing the photo, the application responded with the message “Subject’s eyes are closed”.


    This highlights how a data-driven system can cause harm without anyone intending it. It also shows how inherent human biases carry over into the technology we make.

    An algorithm widely used in US hospitals to allocate healthcare to patients has been systematically discriminating against black people, a sweeping analysis has found.

    Health-care Algorithms
    The study, published in Science in October 2019, concluded that the algorithm was less likely to refer black people than equally sick white people to programmes that aim to improve care for patients with complex medical needs. Hospitals and insurers use the algorithm and others like it to help manage care for about 200 million people in the United States each year.

    As a result, millions of black people have not received equal medical treatment. To make things worse, the analysis suggests the bias crept in because the algorithm used healthcare costs as a proxy for medical need, effectively optimizing for money rather than care.

    In 2015, Google became one of the first companies to release an automatic image-recognition feature. The system recognized Caucasian faces perfectly well, but the same system labelled photos of a black person as an ape.

    Source: The New York Times

    These examples of bias in technology are not isolated from the society we live in. Society holds many forms of bias that may not be consistent with a corporation’s values, yet those biases may already be prevalent in its data sets.

    With the widespread use of AI and statistical learning, such enterprises are at serious risk not only of spreading but also amplifying these biases in ways that they do not understand.

    These examples demonstrate gross unfairness on multiple fronts, making it necessary for organizations to work with more diverse and representative data in general.
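
    One practical first step is to audit a model’s decisions group by group. Below is a minimal, hypothetical sketch of such a check, comparing selection rates (the share of favourable decisions) across demographic groups; the column names and data are invented, and a real audit would also look at error rates and calibration per group.

```python
# Toy, hypothetical bias audit: compare favourable-decision rates by group.
# Column names and values are invented for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   1,   0,   0],  # 1 = favourable outcome
})

# Selection rate per group: share of members receiving the favourable outcome.
rates = decisions.groupby("group")["prediction"].mean()
print(rates)

# Demographic parity difference: a large gap is a signal to inspect the
# training data and features before the model is trusted in production.
print(f"demographic parity difference: {rates.max() - rates.min():.2f}")
```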

    2. Reliability and Safety

    Reliability (noun)

    the ability to be relied on or depended on, as for accuracy, honesty, or achievement.

    Safety (noun)

    the state of being safe; freedom from the occurrence or risk of injury, danger, or loss. the quality of averting or not causing injury, danger, or loss.

    In the case of autonomous vehicles, how can we as consumers be 100 percent sure of our safety? Or can we ever be? How many miles does a car have to cover or how many people are to lose their lives before the assurance of the rest? These are just a few of the questions a company must answer before establishing themselves as a reliable organization.

    A project from scientists in the UK and India shows one possible use for automated surveillance technology to identify violent behavior in crowds with the help of camera-equipped drones.

    In a paper titled “Eye in the Sky,” the researchers used a simple Parrot AR quadcopter (which costs around $200) to transmit video footage over a mobile internet connection for real-time analysis. A figure from the paper shows how the software analyzes individuals’ poses and matches them to “violent” postures. The question is: how will this technology be used, and who will use it?

    Researchers working in this field often note there is a huge difference between staged tests and real-world use-cases. Though this system is yet to prove itself, it is a clear illustration of the direction contemporary research is going.

    Using AI to identify body poses is a common problem, with big tech companies like Facebook publishing significant research on the topic. Many experts agree that automated surveillance technologies are ripe for abuse by law enforcement and authoritarian governments.
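
    To make the mechanics less abstract, here is a toy sketch of the general idea, not the method from the “Eye in the Sky” paper: pre-extracted pose keypoints are matched against labelled postures with a nearest-neighbour classifier. The keypoints and labels below are random, purely hypothetical data.

```python
# Toy, hypothetical posture matcher. Real systems first run a pose estimator
# on video frames to extract body keypoints; here the keypoints are random.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Each sample: 14 body keypoints as (x, y) pairs, flattened to 28 features.
X_train = rng.random((200, 28))
y_train = rng.integers(0, 2, 200)   # 0 = neutral posture, 1 = "violent" posture

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# Classify the pose in a new frame. In practice, the gap between staged tests
# and real-world footage is exactly where such systems tend to fail.
new_pose = rng.random((1, 28))
print(clf.predict(new_pose))
```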

    3. Privacy and Security

    Privacy (noun)

    the state of being apart from other people or concealed from their view; solitude; seclusion.

    the state of being free from unwanted or undue intrusion or disturbance in one's private life or affairs; freedom to be let alone.

    Security (noun)

    freedom from danger, risk, etc.; safety.

    freedom from care, anxiety, or doubt; well-founded confidence.

    something that secures or makes safe; protection; defense.

    Strava’s heat map revealed military bases around the world and exposed soldiers to real danger – this is not AI per se, but useful for a data discussion. A similar instance took place in Russia, too.

    iRobot’s latest Roomba i7+ robot vacuum maps users’ homes to let them customize the cleaning schedule. An integration with Google Assistant lets customers give verbal commands like, “OK Google, tell Roomba to clean the kitchen.” The mapping is voluntary and requires the user’s consent.


    In October 2018, Google admitted it had exposed the personal data of around 500,000 Google+ users, leading to the closure of the platform. It also announced it was reviewing third-party access to Gmail after it was revealed that many developers were reading and analyzing users’ personal mail for marketing and data mining.

    A 2012 New York Times article spoke about a father who found himself in the uncomfortable position of having to apologize to a Target employee. Earlier, he had stormed into a store near Minneapolis and complained to the manager that his teenage daughter was receiving coupons for cribs and baby clothes in the mail. It turned out that Target knew his daughter better than he did: she was pregnant, and Target knew it before her father did.

    By analysing shoppers’ purchase data, statisticians at Target were able to identify about 25 products that, when analysed together, allowed them to assign each shopper a “pregnancy prediction” score. More importantly, they could also estimate a due date to within a small window, so they could send coupons timed to very specific stages of a pregnancy.

    Source: Forbes
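
    As a rough illustration of how such a score can be computed, the sketch below fits a logistic regression over indicator features for a basket of tracked products and outputs a probability per shopper. The products, labels, and data are entirely invented; the details of Target’s actual model were never published.

```python
# Toy, hypothetical "prediction score" built from purchase indicators.
# Data and labels are synthetic; this is not Target's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Rows = shoppers, columns = whether each of 25 tracked products was bought.
purchases = rng.integers(0, 2, size=(1000, 25))
# Hypothetical historical outcome used to fit the model.
outcome = rng.integers(0, 2, size=1000)

model = LogisticRegression(max_iter=1000).fit(purchases, outcome)

# Score each shopper: the model's estimated probability of the outcome,
# which marketers could then use to time targeted coupons.
scores = model.predict_proba(purchases)[:, 1]
print(scores[:5].round(3))
```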

    Another instance was reported in Canada, where a mall used facial recognition software in its digital directories to track shoppers' ages and genders without telling them.

    4. Inclusiveness

    Inclusive (adjective)

    including or encompassing the stated limit or extremes in consideration or account (usually used postpositively)

    including a great deal, or encompassing everything concerned; comprehensive

    In the K.W. v. Armstrong case, the plaintiffs were vulnerable adults living in Idaho with various psychological and developmental disabilities. They went to court when the Idaho Department of Health and Welfare cut their medical assistance budgets by a whopping 42 percent.

    The Idaho Department of Health and Welfare claimed that the reasons for the cuts were “trade secrets” and refused to disclose the algorithm it used to calculate the reductions.

    K.W. v. Armstrong plaintiff, Christie Mathwig

    Once a system is found to be discriminatory or otherwise inaccurate, there is an additional challenge in redesigning the system. Ideally, government agencies should develop an inclusive redesign process that allows communities affected by algorithmic decision systems to meaningfully participate. But this approach is frequently met with resistance.

    5. Transparency

    Transparent (adjective)

    having the property of transmitting rays of light through its substance so that bodies situated beyond or behind can be distinctly seen. 

    admitting the passage of light through interstices. 

    so sheer as to permit light to pass through; diaphanous. 

    easily seen through, recognized, or detected

    In New Orleans, a technology company helped police officials predict which individuals were most likely to commit crimes. This is an example of predictive analytics being used for policing strategies, and it was carried out in secret.

    In the Rich Caruana case study, data from 10 million patients and thousands of features were used to train a model to predict the risk of death from pneumonia and to decide whether patients should be admitted to hospital. But was this model safe to deploy and use on real patients? Was the data sufficient to make accurate predictions?

    Unfortunately, a number of different machine learning models had been used to train an accurate black box, without anyone knowing what was inside. The multitask neural net was thought to be the most accurate, but was the approach safe?

    The model learned a surprising rule: patients with asthma appeared to be at lower risk of dying from pneumonia. The pattern in the data was, strictly speaking, accurate; asthmatics were treated so promptly and effectively that their risk of dying was lower than that of the general population. The bad news was that if this model were used to decide whether to admit patients to the hospital, it would send asthmatics home, making it dangerous and not at all safe to use.

    Not only is this an issue of safety, but it is also a violation of transparency. The key problem is that there may be other bad patterns we do not know about. While a neural net is more accurate and can learn quickly, we do not know everything the neural net is using. We really need to understand a model before we deploy it.

    Now, through a technique called Generalized Additive Models, in which the influence of individual attributes in the training data can be independently measured, a new model has been trained whose outputs are completely transparent and which actually improved performance over the old model.

    In the historical data, asthmatics recovered and went home sooner because they were rushed to the front of the line as soon as they arrived at the hospital. Faster and more targeted care led to better outcomes, and all the model learned from were those outcomes, not the extra care that produced them.
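
    As a concrete illustration of the idea, the sketch below trains a glass-box, GAM-style model with the open-source InterpretML library, whose Explainable Boosting Machine is a tree-based generalized additive model. The dataset, feature names, and task are synthetic stand-ins, not the pneumonia data from the case study, and this is only one of several GAM implementations one could use.

```python
# Toy, hypothetical example of a glass-box GAM-style model whose per-feature
# shape functions can be inspected before deployment. Synthetic data only.
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

rng = np.random.default_rng(2)
n = 5000
X = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "blood_pressure": rng.normal(120, 20, n),
    "has_asthma": rng.integers(0, 2, n),
})
# Synthetic outcome: risk rises with age and blood pressure (toy rule only).
risk = 0.02 * (X["age"] - 50) + 0.01 * (X["blood_pressure"] - 120)
y = (risk + rng.normal(0, 0.5, n) > 0.5).astype(int)

ebm = ExplainableBoostingClassifier().fit(X, y)

# The global explanation exposes each feature's learned contribution, so a
# surprising pattern (such as "asthma lowers risk") is visible to reviewers
# before the model is ever used on real patients.
show(ebm.explain_global())
```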

    In another instance, one of the tools used by the New Orleans Police Department to identify members of gangs like 3NG and the 39ers came from the Silicon Valley company Palantir. The company provided software to a secretive NOPD program that traced people’s ties to other gang members, outlined criminal histories, analyzed social media, and predicted the likelihood that individuals would commit violence or become a victim.

    As part of the discovery process in one trial, the government turned over more than 60,000 pages of documents detailing the evidence gathered against the defendant from confidential informants, ballistics, and other sources, but made no mention of the NOPD’s partnership with Palantir.

    6. Accountability

    Accountable (adjective)

    subject to the obligation to report, explain, or justify something; responsible; answerable.

    capable of being explained; explicable; explainable.

    Like in the example of autonomous vehicles, in case of any mishap, where does the accountability lie? Who is to be blamed for the loss of lives or any sort of destruction in a driverless car?

    With driverless cars, the question remains: Who is to blame?

    It appears that the more advanced the technology becomes, the harder it is to pin down accountability. Be it a driverless car crashing or a robot killing a person, the question remains: who is to blame?

    Whom does one sue after being hit by a driverless car? What if a medical robot gives a patient the wrong drug? What if a vacuum robot sucks up someone's hair while they are napping on the floor? Can a robot commit a war crime? And who gets to decide whether a person deserves certain treatment under an algorithm-based healthcare policy: the organization that developed it or the developer who built it? There is a clear lack of accountability in such situations.

    Liability of automated systems: the debate continues.
    The key word in all of the above principles is impact. Whatever an AI system does, intentionally or unintentionally, it leaves a strong impact on people.

    The responsible AI lifecycle

    Both the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) published ethics guidelines for computer scientists in the early 1990s. More recently, we have seen countless social scientists and STS researchers sounding the alarm about technology’s potential to harm people and society.

    To turn talk about responsible AI into action, organizations need to make sure that their use of AI fulfils several criteria. After defining the basic AI principles, an organization can develop a prototype. But they must be open to change even after launching what they assume to be the most fool-proof AI service.

    Microsoft’s Responsible AI Lifecycle is built around six key stages, namely:

    1. Define: Define the objectives, data requirements and responsible metrics.
    2. Envision: Consider the consequences and potential risks.
    3. Prototype: Build prototypes based on data, models and experience, and test frequently.
    4. Build: Build and integrate AI according to responsible metrics and trade-offs.
    5. Launch: Launch only after diverse ring-testing, with an escalation and recovery plan in place.
    6. Evolve: Continuously analyze and improve.

    Responsible AI Lifecycle. Source: Microsoft

    Microsoft is leading the way with detailed guidelines to help teams put responsible AI into practice. Their Guidelines for Human-AI Interaction recommend best practices for how AI systems should behave upon initial interaction, during regular interaction, when they’re inevitably wrong, and over time. They are meant to be used throughout the design process: as existing ideas are evaluated, as new ideas are brainstormed, and as teams collaborate across multiple disciplines in creating AI.

    In addition, several types of guidelines are provided to engineering teams, including conversational AI guidelines, inclusive design guidelines, an AI fairness checklist, and AI security engineering guidance.

    All of these guidelines are designed to help teams anticipate and address potential issues throughout the software development lifecycle and to mitigate security, risk, and ethics issues.

    Principles to practices

    AI is already having an enormous and positive impact on healthcare, the environment, and a host of other societal needs. These rapid advances have given rise to an industry debate about how the world should (or shouldn’t) use these new capabilities.

    As these systems become increasingly important to our lives, it is critical that, when they fail, we understand how and why, whether the failure stems from the inherent design of the system or from the actions of an adversary.

    In conclusion, Dr. Franklin emphasized the need for enterprises to understand how bias can be introduced and affect recommendations. Attracting a diverse pool of AI talent across the organization is critical to develop analytical techniques to detect and eliminate bias, he stressed.

    We hope Dr. Franklin's webinar and this article have helped frame the debate on responsible AI and provided us with a set of principles we can anchor on, and a set of actions we can take to advance the promise of AI in ways that don’t cause harm to people.

    Author

    KnowledgeHut

    KnowledgeHut is an outcome-focused global ed-tech company. We help organizations and professionals unlock excellence through skills development. We offer training solutions under the people and process, data science, full-stack development, cybersecurity, future technologies and digital transformation verticals.
