3 questions with Michael Kearns: Designing socially aware algorithms and models



The first Amazon Web Services (AWS) Machine Learning Summit on June 2 will bring together customers, developers, and the science community to learn about advances in the practice of machine learning (ML). The event, which is free to attend, will feature four audience-focused tracks, including the Science of Machine Learning.

The science track is focused on the data science and advanced practitioner audience, and will highlight the work AWS and Amazon scientists are doing to advance machine learning. The track will comprise six sessions, each lasting 30 minutes, and a 45-minute fireside chat.

In the coming weeks, Amazon Science will feature interviews with speakers from the Science of Machine Learning track. For the second edition of the series, we spoke to Michael Kearns, an Amazon Scholar and a professor of computer and information science at the University of Pennsylvania.

Kearns and his fellow University of Pennsylvania computer science professor and Amazon Scholar Aaron Roth are the authors of the book “The Ethical Algorithm: The Science of Socially Aware Algorithm Design,” first published in 2019. The book explores the science of designing algorithms that embed social norms such as fairness and privacy into their code to protect humans from the unintended impacts of algorithms.

Kearns is a founding director of the Warren Center for Network and Data Sciences, a research effort that seeks to understand the role of data and algorithms in shaping interconnected social, economic, and technological systems. He was also recently elected to the National Academy of Sciences.

Q. What is the subject of your talk at the ML Summit going to be?

I’ll be covering recent research in the machine learning community on designing more “ethical” algorithms and models: approaches that obey important social norms like fairness, explainability, and privacy, while still allowing us to harness the benefits of artificial intelligence and machine learning.

For example, I’ll discuss the rich algorithmic toolkit known as differential privacy, which is a powerful method for adding noise and randomness to computations in a way that allows us to develop machine learning models while providing strong guarantees of individual privacy.
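To make that concrete, here is a minimal sketch of the Laplace mechanism, one of the simplest differentially private building blocks. The function name, dataset, and parameter values are illustrative assumptions, not details from Kearns’s talk.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Estimate a mean under epsilon-differential privacy (Laplace mechanism).

    Each value is clipped to [lower, upper], so changing any single record can
    shift the mean by at most (upper - lower) / n; Laplace noise calibrated to
    that sensitivity masks any individual's contribution.
    """
    rng = rng or np.random.default_rng()
    values = np.clip(np.asarray(values, dtype=float), lower, upper)
    sensitivity = (upper - lower) / len(values)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

# Illustrative use: a privacy-preserving estimate of average age.
ages = [23, 35, 41, 29, 52, 38, 47, 31]
print(dp_mean(ages, lower=0, upper=100, epsilon=0.5))
```

Smaller values of epsilon mean more noise and stronger privacy; applying the same calibrated-noise idea to gradient updates is roughly how differentially private model training works.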

There are also recent machine learning algorithms based on game theory that can be useful in enforcing group fairness notions pertaining to race or gender. At its essence, game theory is a mathematical framework for reasoning about collective outcomes in systems where individuals interact with one another. Generative adversarial networks (GANs), for instance, frame learning as a game between a generator that tries to keep its synthetic dataset as close to the real dataset as possible and a discriminator designed to point out the differences.
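As a rough illustration of that game-theoretic framing, the sketch below alternates a discriminator update with a generator update, the two-player game at the heart of a GAN. The tiny network sizes, the toy two-dimensional data, and the use of PyTorch are assumptions made for brevity, not details from the talk.

```python
import torch
import torch.nn as nn

# Tiny generator/discriminator pair; dimensions are illustrative only.
latent_dim, data_dim = 8, 2
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def gan_step(real_batch):
    """One round of the two-player game: D learns to separate real from fake,
    then G learns to fool D."""
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator update: point out differences between real and synthetic data.
    fake = G(torch.randn(n, latent_dim)).detach()
    d_loss = bce(D(real_batch), real_labels) + bce(D(fake), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: make synthetic data the discriminator labels as "real".
    g_loss = bce(D(G(torch.randn(n, latent_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Illustrative use with a toy "real" dataset of 2-D points.
real_batch = torch.randn(64, data_dim) + 3.0
print(gan_step(real_batch))
```

In fairness applications, a similar game can be set up between a learner choosing a model and an auditor searching for a group on which the model violates a fairness constraint.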

Q. Why is this topic especially relevant within the science community today?

Even the casual observer will have noticed the rising societal concerns over the potential harms and misuses of artificial intelligence and machine learning, which receive widespread coverage nowadays.

While it’s typical for such reports to call for stronger technology-related laws and regulations, the science I will cover during my talk points the way to an alternative, complementary solution: the design of socially aware algorithms and models that are “better behaved” to begin with.

Some of this science is relatively mature — we’ve already talked about differential privacy. Other approaches are more nascent, such as the efforts to make ML models more “interpretable” or “explainable”. We need to explore these topics further to develop a deeper behavioral understanding of how people use and interpret predictive models.

Q. As we forge a new science of socially aware algorithm design, what are three developments that you find exciting?

One is the underlying science itself, which really points the way to a new set of algorithmic techniques that balance our usual objectives of accuracy and utility with at least some of the major societal concerns around AI and ML.

A second thing I’m excited about is that we’re starting to see this science adopted at scale in real applications. Examples include the 2020 U.S. Census’s adoption of differential privacy and AWS’s own new service for fair and interpretable ML, Amazon SageMaker Clarify.

Finally, I’m excited about the truly interdisciplinary community that has arisen around these issues in the last decade, which includes and needs ML researchers like myself, legal and regulatory experts, policymakers, social scientists, civil liberties groups, and even philosophers. It makes working on these topics fun, exciting, educational, and ultimately meaningfully impactful.

You can learn about Kearns’s research here and watch him speak at the virtual AWS Machine Learning Summit on June 2 by registering for the event.




