3 questions: Prem Natarajan on issues of AI fairness and bias



A year ago, Amazon and the National Science Foundation (NSF) announced a $20 million collaboration to fund academic research on fairness in AI over a three-year period. Recently, Erwin Gianchandani, deputy assistant director for Computer and Information Science and Engineering at NSF, discussed the work of the first ten recipients of the program’s grants. Here, Prem Natarajan, Alexa AI vice president of natural understanding and the Amazon executive who helped launch the collaboration with NSF, discusses the program’s next proposal cycle for academic researchers, his work with the Partnership on AI, and what can be done to address bias in natural language processing models.

The 2020 award cycle for the Fairness in AI program, run in conjunction with the NSF, recently launched. Full proposals are due by July 13th. What are you hoping to see in the next round of proposals?

We collaborated with the NSF to launch the Fairness in AI program with the goal of promoting academic research in this important aspect of AI. Our primary objective for engaging with academia on issues related to fairness and transparency in AI is to get many different and diverse perspectives focused on the challenge. The teams selected by NSF in the first round are addressing a variety of topics – from principled frameworks for developing and certifying fair AI, to domain-focused applications such as fair recommender systems for foster care services. To that end, I hope that the second round will build upon the success of the first round by bringing an even greater diversity of perspectives on definitions and perceptions of fairness. Without such diversity the entire field of research into fair AI will become a self-defeating exercise.

Another hope I have for the second round, and indeed for all rounds of this program, is that it will drive the creation of a portfolio of open-source artifacts – such as data sets, metrics, tools, and testing methodologies – which all stakeholders in AI can use to promote fair AI. Such readily available artifacts will make it easier for the community to learn from one another, promote the replication of research results, and, ultimately, advance the state of the art more rapidly. Put differently, we hope that open access to the research under this program will form a rising tide that lifts all boats. It also seems natural that methodologies for fairness will benefit from broad and inclusive discussion across relevant academic and scientific communities.

The deadline for this next round of proposal submissions is July 13th. We hope that the response to this round will be even stronger than for the first. NSF selects the recipients, and I am sure NSF’s reviewers are looking forward to a summer of interesting reading!

You are Amazon’s representative on the Partnership on AI (PAI) board of directors. This unique organization has thematic pillars related to safety-critical AI; fair, transparent and accountable AI; AI labor and the economy; collaborations between AI systems and people; social and societal influences of AI; and AI and social good. It’s an ambitious, broad agenda. You’re fairly new in your role with PAI; what most excites you about the work being done there?

The most exciting aspect of the Partnership on AI is that it is a unique multi-sector forum where I get to listen to and learn from an incredible diversity of perspectives – from industry, academia, non-profits, and social justice groups. PAI today counts among its members some 59 non-profits, 24 academic institutions, and 18 industrial organizations. While I joined the board just a few months ago, I have already attended several meetings and participated in discussions with other PAI members as well as PAI staff. While every member has their own unique perspective on AI, it’s been really interesting and encouraging to see that we all share the same values and many of the same concerns. It should come as no surprise that the issue of equity is top of mind, with a concomitant focus on fairness considerations.


From a technical perspective, I am excited by the number and quality of research initiatives underway at PAI. Many of these initiatives are of critical importance to the future development of the field of AI. Let me give you a couple of examples.

One is the area of fairness, accountability and transparency. There are several projects underway in this area, but I will mention one that to me exemplifies the kind of work that an organization like PAI can do. PAI researchers interviewed practitioners at twenty different organizations and performed an in-depth case study of how explainable AI is used today. This kind of research is very important to AI practitioners because it gives them a referential basis to assess their own work and to identify useful areas for future contributions.

Another example is ABOUT ML, which is focused on developing and sharing best practices as well as on advancing public understanding of AI. A couple of years ago, some researchers proposed the development of an AI model scorecard, along the lines of the nutrition label on most food items we buy today. The scorecard would describe the attributes of the data used to train the model, the way in which the model was tested, and so on. The motivation behind the scorecard is to give other developers or model builders a sense of the strengths and limitations of the model, so they can better estimate and address potential weaknesses for their target use cases. ABOUT ML goes well beyond such a scorecard, focusing on documentation, provenance of data and code artifacts, and other critical attributes of the model development process. Ultimately, only multi-sector organizations like PAI can successfully drive this kind of initiative, bringing together people across organizations and sectors.
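To make the scorecard idea concrete, here is a purely illustrative sketch of the kind of information such a document might capture. Every field and value below is hypothetical and is not drawn from any PAI or ABOUT ML specification.

    # A hypothetical model scorecard represented as a simple Python dictionary.
    # All fields and values are invented for illustration only.
    model_card = {
        "model": "intent-classifier-demo",
        "intended_use": "classifying customer support requests in English",
        "training_data": {
            "source": "anonymized support transcripts (hypothetical)",
            "size": 1_200_000,
            "known_gaps": ["few examples of non-US dialects"],
        },
        "evaluation": {
            "metric": "accuracy",
            "overall": 0.91,
            "by_group": {"US English": 0.93, "Indian English": 0.86},
        },
        "limitations": ["performance drops on code-switched text"],
    }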

Lastly, PAI serves an education role that I believe is unique: it acts as a bridge between AI technologists and other stakeholders within society, making sure technologists appropriately factor in the perspectives and concerns of those other stakeholders. Some examples here include PAI’s collaborative work with First Draft, a PAI Partner, to help technologists and journalists at digital platforms address growing issues around manipulated media. PAI also helps those stakeholders understand more about how AI technology works, its strengths, and its limitations.

You oversee Alexa’s natural understanding team. Natural language processing models have drawn criticism for capturing common social biases with respect to gender and race. A large body of work is emerging related to bias in word embeddings and classifiers, and there are many proposals for countermeasures. Can you describe the challenge of bias in NLP models, and give us insight into some of the countermeasures you think are, or could be, effective?

A word embedding is a vector of real numbers representing that word; the core idea is that words with similar meanings map to vectors that are “close” to each other. Word embeddings have become a central feature of modern NLP. While embeddings can be computed using a variety of techniques, deep learning has proven tremendously effective at numerically representing the semantics of words and concepts. Today, deep-learning-based embeddings are used for all kinds of processing, from named entity recognition to question answering and natural language generation. As a result, the semantics that these embeddings encode greatly influence how we interpret text, the accuracy of those interpretations, and the actions we take in response to those interpretations.
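To illustrate the “close vectors” intuition, here is a toy sketch using made-up four-dimensional vectors; real embeddings are learned from data and typically have hundreds of dimensions.

    # Toy illustration of "similar words map to nearby vectors".
    # The vectors below are invented for demonstration, not learned embeddings.
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    doctor    = np.array([0.9, 0.1, 0.4, 0.0])
    physician = np.array([0.8, 0.2, 0.5, 0.1])   # similar meaning -> similar vector
    banana    = np.array([0.0, 0.9, 0.1, 0.8])   # unrelated concept

    print(cosine_similarity(doctor, physician))  # high, close to 1
    print(cosine_similarity(doctor, banana))     # noticeably lower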


As word embeddings became prevalent, researchers naturally started looking into their fragilities and shortcomings. One of those fragilities is that embeddings derive and encode meaning from context, which means that the meaning of a word is largely determined by the different contexts in which that word is observed in the training data. While that seems like a reasonable basis for inferring meaning, it leads to undesirable consequences. My friend Kai-Wei Chang at UCLA is one of the early investigators of bias in NLP, and he uses the following example: take the vector for ‘doctor,’ subtract the vector for ‘man,’ and add the vector for ‘woman’; in principle, you should get the vector for ‘doctor’ again, or perhaps for a female doctor. Instead, the resulting vector is close to the vector for ‘nurse.’ What this example shows is that the latent biases in human-generated text get encoded into the embeddings. One example of a system that is affected by these biases is natural language generation. Many studies have shown that such biases can result in the generation of text that exhibits the same biases and prejudices as humans, sometimes in an amplified form. Left unmitigated, such systems could reinforce human biases and stereotypes.
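The analogy Chang describes can be reproduced with off-the-shelf embeddings. The sketch below uses the gensim library and one of its standard downloadable GloVe vector sets; exactly which words appear near the top of the result depends on the particular embedding set and the corpus it was trained on.

    # Word-vector analogy arithmetic: doctor - man + woman -> nearest neighbors.
    # Requires the gensim package; the model name is one of gensim's standard
    # pretrained downloads.
    import gensim.downloader as api

    kv = api.load("glove-wiki-gigaword-100")

    # most_similar performs the vector arithmetic and returns the closest words.
    for word, score in kv.most_similar(positive=["doctor", "woman"],
                                       negative=["man"], topn=5):
        print(word, round(score, 3))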

Bias can also manifest in other ways, because any system that is based on data can exhibit a majoritarian bias. So, for example, different groups in different parts of the world may speak the same language with different dialects, but the most frequent dialect will likely see the best performance simply because it makes up the largest proportion of the training data. But we don’t want dialect or accent to determine how well the system works for an individual. We want our systems to work equally well for everyone, regardless of geography, dialect, gender, or any other irrelevant factor.
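One simple way to surface this kind of majoritarian bias is to report a model’s accuracy separately for each group rather than as a single overall number. The sketch below uses pandas with hypothetical dialect labels and outcomes.

    # Break accuracy out by dialect group; the data here is invented for illustration.
    import pandas as pd

    results = pd.DataFrame({
        "dialect": ["A", "A", "A", "A", "B", "B"],   # group B is under-represented
        "correct": [1, 1, 1, 0, 0, 1],               # 1 = model prediction was right
    })

    print(f"overall accuracy: {results['correct'].mean():.2f}")
    print(results.groupby("dialect")["correct"].mean())  # reveals the gap the overall number hides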

Methodologically, we counter the impact of bias by using a principled approach to characterize the dimensions of bias and their associated impact, and by developing techniques that are robust to those biasing factors. For example, it stands to reason that speech recognition systems should ignore parts of the signal that are not useful for recognizing the words that were spoken: it shouldn’t really matter whether the voice is male or female; only the actual words should matter. Similarly, for natural language understanding, we want to be able to understand the queries of different groups of people regardless of the stylistic or syntactic variations of the language they use. Scientists at Amazon and elsewhere are exploring a broad variety of approaches, such as de-biasing techniques, adversarial invariance, active learning, and selective sampling. Personally, I find the adversarial approaches, both to testing and to generating bias- or nuisance-invariant representations, most appealing because of their scalability, but in the next few years, we will all find out what works best for different problems!
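As one concrete instance of the adversarial family of approaches, here is a minimal sketch, not Amazon’s implementation, of adversarial invariance via gradient reversal in PyTorch: an encoder is trained to support the main task while an adversary tries to predict a nuisance attribute (such as dialect or gender) from the encoder’s representation, and reversing the adversary’s gradient pushes the encoder toward representations that carry little information about that attribute. All names and dimensions below are hypothetical.

    import torch
    from torch import nn
    from torch.autograd import Function

    class GradReverse(Function):
        """Identity in the forward pass; negates and scales gradients in the backward pass."""
        @staticmethod
        def forward(ctx, x, lamb):
            ctx.lamb = lamb
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lamb * grad_output, None

    class InvariantClassifier(nn.Module):
        def __init__(self, in_dim, hidden, n_labels, n_groups, lamb=1.0):
            super().__init__()
            self.lamb = lamb
            self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.task_head = nn.Linear(hidden, n_labels)   # main prediction
            self.adv_head = nn.Linear(hidden, n_groups)    # tries to recover the nuisance attribute

        def forward(self, x):
            z = self.encoder(x)
            task_logits = self.task_head(z)
            adv_logits = self.adv_head(GradReverse.apply(z, self.lamb))
            return task_logits, adv_logits

    # One toy training step on random data; real inputs would be utterance features.
    model = InvariantClassifier(in_dim=64, hidden=32, n_labels=5, n_groups=3)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(16, 64)
    y_task = torch.randint(0, 5, (16,))
    y_group = torch.randint(0, 3, (16,))

    task_logits, adv_logits = model(x)
    loss = nn.functional.cross_entropy(task_logits, y_task) \
         + nn.functional.cross_entropy(adv_logits, y_group)
    loss.backward()   # gradient reversal makes the encoder degrade the adversary's signal
    opt.step()

In practice, the relative weighting of the two losses and the reversal strength lamb would need tuning for any real system.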




