The technological revolution is transforming our lives at breakneck speed, dramatically altering the ways in which we work, learn, and even live together. Alongside the increasingly sophisticated use of big data, AI is undergoing exponential growth and finding new applications in an ever-increasing number of sectors, including security, the environment, research and education, health, culture, and trade.
AI can be a fantastic opportunity to achieve the goals set by the 2030 Agenda, but that means addressing the ethical issues it presents, without further delay. An opportunity, because its applications can help us to advance more rapidly towards the achievement of the Sustainable Development Goals (SDGs) — by allowing better risk assessment; enabling more accurate forecasting and faster knowledge-sharing; by offering innovative solutions in the fields of education, health, ecology, urbanism and the creative industries; and by improving standards of living and our daily well-being. But it is also a threat because automation and digitization create new imbalances. They can decrease diversity in cultural industries, disrupt the labor market, create job insecurity and increase disparities between those who have access to these new technologies and those who are deprived of them.
The computer science and Artificial Intelligence (AI) communities are starting to awaken to the profound ways that their algorithms will impact society and are now attempting to develop guidelines on ethics for our increasingly automated world.
The systems we rely on to sustain our lives increasingly depend on algorithms to function. More and more of daily life is being automated in ways that affect all of us. Yet the people developing the automation, machine learning, and data collection and analysis that currently drive much of this automation do not represent all of us and are not considering all our needs equally.
However, not all ethics guidelines are developed equally, or ethically. Often, these efforts fail to recognize the cultural and social differences that underlie our everyday decision-making and make broad assumptions about what both a “human” and “ethical human behavior” mean.
As part of this approach, the US federal government launched AI.gov to make it easier to access all of the governmental AI initiatives currently underway. The site is the best single resource for gaining an understanding of the US AI strategy. The American AI Initiative is guided by five principles that embrace AI ethics and enhance ethical standards, as outlined in the U.S. Department of Defense (DOD) AI Strategy.
The European Union (EU) has carved out a ‘human-centric’ approach to AI that is respectful of European values and principles. Its guidelines aim to clarify the ethical rules for designing, developing, deploying, implementing, and using AI products and services, while fostering the adoption of ethical standards and of legally binding instruments.
A wide range of initiatives have sought to establish ethical principles for the adoption of socially beneficial AI. For example, the AI4People project established an ethical framework for a Good AI Society built on five principles:
- AI must be beneficial to humanity.
- AI must not infringe on privacy or undermine security.
- AI must protect and enhance our autonomy and ability to make decisions and choose between alternatives.
- AI must promote justice and fairness, including innovation, diversity, and tolerance.
- AI must be understandable in terms of how it works (transparency).
AI is humanity’s new frontier. Once this boundary is crossed, AI will lead to a new form of human civilization. The guiding principle is not that AI should become autonomous or replace human intelligence, but that it be developed through a humanist approach, based on values and human rights. We are faced with a crucial question: what kind of society do we want for tomorrow? The AI revolution opens exciting new prospects, but the anthropological and social upheaval it brings in its wake warrants careful consideration.
Although corporations have revamped their take on diversity and championed initiatives to draw in talent, the outlook in fields such as AI remains troubling. The consequences of this diversity problem extend far, shaping the technologies now emerging and calling into question the ethics of unconsciously biased systems. Therefore, we must find the best solutions to ensure that the development of AI is an opportunity for humanity, as it is our generation’s responsibility to pass down to the next a society that is more just, more peaceful, and more prosperous.
THE IMPACT OF ARTIFICIAL INTELLIGENCE ON DIVERSITY
As organizations across the globe begin their artificial intelligence transformation, the leap into the AI era will be an order of magnitude more challenging than any technology shift businesses have grappled with before. The challenges go well beyond technical matters to touch on issues of fairness, inclusion, and ethics. How can organizations ensure that products and internal systems developed with AI not only avoid bias but contribute to a more equitable employee experience, improved business results, and a better world?
Many people would argue that this debate should go even wider than AI, calling on us to embed ethics into every stage of our technology. That means recognizing that the lack of diversity and inclusion in technology produces software and tools that exclude much of the population and carry deep, damaging bias.
Companies tend to think of “diversity” in terms of gender and nationality, as well as age. Unpacking further, diversity is multi-faceted with identifiers such as heritage, religion, and culture, which many typically overlook. When serving large populations with technology, companies must take all of these characteristics into consideration.
The consequences of not capturing the full scope of diversity are, to say the least, undesirable. To take one strand of this problem, people of color face bias from facial recognition algorithms, as the technology is sometimes unable to detect facial features that differ from those the training data treats as the norm.
Chatbot programs have prompted worries with responses that appear to condone verbal abuse of women. Take, for example, Apple Siri’s response to verbally abusive phrases: “I’d blush if I could.” In a publication titled with that exact phrase, UNESCO denounced the flirty and submissive responses in Apple Siri’s program for reinforcing the image of women as complicit, bringing ethics questions to the forefront.
Currently, women make up only 12% of AI researchers, which points to the importance of encouraging more women to enter STEM fields. As overwhelmingly white, male engineering teams build these AI systems, they inadvertently code their biases into them. The solution researchers seek is to build more diverse teams that represent as many cultural norms and backgrounds as possible.
Strands Labs strives to prioritize diverse practices company-wide, from the executive level down. Our Machine Learning (Nous) team reflects this commitment, comprising five nationalities and 40% women. This is definite progress, but it is simply not enough.
Diversity in AI is improving but has further to go, according to an IBM survey.
Confidence in AI varies by gender and country, but generally speaking, AI pros have more faith in systems than the general population.
Ninety-one percent of artificial intelligence professionals say increased diversity is having a positive impact on AI technology, but opinions vary based on the country as well as gender, according to an IBM study.
The IBM Global Women in AI study was part of a broader look at diversity and honoring women in AI. Diversity of thought is critical to the development of AI, so the technology avoids bias and operates ethically. Confidence in AI systems largely depends on whether the technology is viewed as biased.
ARTIFICIAL INTELLIGENCE AND EQUITY FOR DISABILITY
AI can improve the lives of people with disabilities, such as smart devices supporting people with physical disabilities or sight loss. On the other hand, AI outputs can also reflect discriminatory biases present in the underlying data used to develop the algorithms.
The potential for AI to introduce bias and discrimination against people with disabilities is something many people are now thinking about.
The groups currently developing AI and data-driven systems are not typically representative of the general population. Although this has attracted widespread criticism in terms of gender diversity, there is also little representation of disability or socio-economic diversity in these development teams.
Public failures of AI, such as the tragic incident in which a self-driving car hit and killed a woman pushing a bicycle across a junction at night, shine a spotlight on the risks of machine decision-making in safety-critical situations and on the additional concerns that arise when a diversity of voices is missing from the development process.
Although the theory that the self-driving car failed to detect a person pushing a bicycle because of its wheels was discounted, it served to highlight concerns about the safety of wheelchair users. That concern was quickly addressed, but it was, very rightly, a question that needed to be asked and that required a satisfactorily reassuring answer.
To answer such questions, interdisciplinary committees must be formed, including legal scholars, ethicists, AI developers, medical and service providers, and advocates with disabilities. These committees can articulate clear criteria for developers and medical providers looking to harness the potential of AI to serve people with disabilities, including those whose disabilities result from aging, injury, or disease, and the caregivers who support them.
Disability information is highly sensitive and not always shared, precisely because of the potential for discrimination; AI systems may therefore lack the explicit information about disability needed to apply established fairness tests and corrections. In addition, some disabilities have relatively low rates of occurrence; in current algorithmic processes, these individuals can appear as data outliers rather than as part of a recognizable subgroup.
IBM Accessibility Research has been exploring the topic of fair treatment for people with disabilities in artificial intelligence (AI) systems. It is essential that Machine Learning (ML) models uphold society’s moral and legal obligations to treat all people fairly, especially with respect to protected groups that have historically experienced discrimination. Biased human attitudes and wrong assumptions can lead to unfair treatment for people with disabilities in the world today.
We believe that the introduction of machine learning offers a real opportunity to improve this situation. However, people with disabilities are not a homogeneous group; each individual may have unique characteristics, so even with diverse, unbiased training data, inequalities could still exist.
Machine learning models are optimized for good performance in typical cases, often at the expense of unusual or ‘outlier’ cases. Fairness will only be achieved through conscious attention to it, and may require new or hybrid methods that better accommodate outlier individuals.
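To make this dynamic concrete, here is a minimal sketch in Python of how a model can look accurate in aggregate while failing a small outlier group. All numbers and group names are invented for illustration and do not come from any real system.

```python
# Minimal sketch: aggregate accuracy can hide poor performance on a
# small 'outlier' group. Group labels, predictions, and ground truth
# below are illustrative, not real data.
from collections import defaultdict

def per_group_accuracy(groups, y_true, y_pred):
    """Return overall accuracy and a per-group breakdown."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for g, t, p in zip(groups, y_true, y_pred):
        totals[g] += 1
        hits[g] += int(t == p)
    overall = sum(hits.values()) / sum(totals.values())
    return overall, {g: hits[g] / totals[g] for g in totals}

# A large group the model serves well, and a small group it does not:
# 88/90 correct for the majority, only 4/10 correct for the outliers.
groups = ["majority"] * 90 + ["outlier"] * 10
y_true = [1] * 100
y_pred = [1] * 88 + [0] * 2 + [0] * 6 + [1] * 4

overall, by_group = per_group_accuracy(groups, y_true, y_pred)
print(overall)   # 0.92 overall, which looks acceptable in aggregate
print(by_group)  # yet the outlier group's accuracy is only 0.4
```

A single headline metric would report this model as 92% accurate; only the per-group breakdown reveals that the outlier group is badly served, which is exactly why conscious, disaggregated fairness checks are needed.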
The World Institute on Disability (WID), along with many corporations and public- and private-sector organizations, is focused on the critical need for AI standards on privacy, ethics, and bias, so that persons with disabilities are fully included in the evolution of AI. Many of us foresee compounded risks of AI use unless privacy, ethics, and bias are committed to and prioritized. For example:
- Models learning from biased data may reproduce and continue historical biases.
- Training data may under-represent outlier populations, which often include people with disabilities, and therefore thwart or deny full inclusion.
- Building inclusive data sets will prove essential for developing effective solutions, but also hold challenges such as requiring people to waive privacy rights.
- Data collection, machine learning training protocols, and programming may not include representation from individuals with disabilities or from professionals with the appropriate knowledge to plan for full inclusion.
- Safety, security, bias, and accessibility may be a lower priority than speed.
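As a hedged illustration of the first risk above, the toy model below learns nothing but per-group approval rates from historical records, and therefore carries the historical skew straight into its future predictions. The groups, rates, and threshold are invented for illustration only.

```python
# Toy illustration: a model that learns only from historical outcomes
# reproduces whatever bias those outcomes contain. Invented data.
from collections import defaultdict

def fit_rate_by_group(records):
    """Learn per-group positive-outcome rates from (group, outcome) pairs."""
    pos = defaultdict(int)
    tot = defaultdict(int)
    for group, outcome in records:
        tot[group] += 1
        pos[group] += outcome
    return {g: pos[g] / tot[g] for g in tot}

def predict(rates, group, threshold=0.5):
    """Predict a positive outcome only if the group's historical rate clears the threshold."""
    return int(rates[group] >= threshold)

# Historical records in which group B was approved far less often than group A.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8
rates = fit_rate_by_group(history)
print(rates)                 # A: 0.8, B: 0.2 (the historical skew)
print(predict(rates, "A"))   # 1
print(predict(rates, "B"))   # 0, the bias is carried forward
```

Real models are far more sophisticated, but the mechanism is the same: if the training data encodes past discrimination, a model optimized to match that data will continue it unless the bias is explicitly measured and corrected.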
So, while AI is a great opportunity, it is also a great threat to full inclusion for people with disabilities. Most researchers, accessibility experts, and disability rights organizations agree that building inclusive data sets is one of the greatest challenges for researchers, and that AI accessibility should be a baseline requirement for AI standards.
What is missing at the moment is a rigorous approach to AI ethics that is actionable, measurable and comparable across stakeholders, organizations, and countries.
What our societies all over the world need are shared and applicable ethical frameworks, to develop AI policies, regulations, technical standards, and business best practices that are environmentally sustainable and socially preferable.
With great power, the saying goes, comes great responsibility. As artificial intelligence (AI) technology becomes more powerful, many groups are taking an interest in ensuring its responsible use. The questions that surround AI ethics can be difficult, and the operational aspects of addressing AI ethics are complex. Fortunately, these questions are already driving debate and action in the public and commercial sectors. Organizations using AI-based applications should take note.
We recommend that persons with expertise in disability culture and accessibility be engaged early in AI standards development, along with those who have expertise in recognizing and addressing implicit bias and those who can set guidelines for developing inclusive data sets. Including those with the appropriate expertise will go far toward achieving full inclusion of persons with disabilities in future data sets.