re:MARS revisited: Human-like reasoning for an AI


In June 2022, Amazon re:MARS, the company’s in-person event that explores advancements and practical applications within machine learning, automation, robotics, and space (MARS), took place in Las Vegas. The event brought together thought leaders and technical experts building the future of artificial intelligence and machine learning, and included keynote talks, innovation spotlights, and a series of breakout-session talks.

Now, in our re:MARS revisited series, Amazon Science is taking a look back at some of the keynote and breakout-session talks from the conference. We asked the presenters three questions about their talks and provide the full video of each presentation.

On June 24, Alexa AI-Natural Understanding employees Craig Saunders, director of machine learning, and Devesh Pandey, principal product manager, presented their talk, “Human-like reasoning for an AI.” Their presentation focused on how Amazon is developing human-like reasoning for Alexa, including how Alexa can automatically recover from speech recognition errors, for example, hearing “turn on lights” in a noisy environment when the customer actually said “turn off lights” and the lights are already on.
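The lights example hints at a general pattern: combine the speech recognizer's ranked hypotheses with context, such as the current device state, and prefer the interpretation that actually makes sense. The sketch below is a minimal, hypothetical illustration of that idea in Python, not Alexa's implementation; the utterances, confidence scores, and penalty value are made-up assumptions.

def rerank_hypotheses(hypotheses, lights_are_on):
    """Rescore (utterance, asr_confidence) pairs using device state."""
    rescored = []
    for utterance, asr_score in hypotheses:
        score = asr_score
        # A command that leaves the world unchanged is unlikely to be
        # what the customer meant, so penalize it (penalty is illustrative).
        if utterance == "turn on lights" and lights_are_on:
            score -= 0.3
        if utterance == "turn off lights" and not lights_are_on:
            score -= 0.3
        rescored.append((utterance, score))
    return max(rescored, key=lambda pair: pair[1])

# ASR slightly prefers "turn on lights" because of background noise,
# but the lights are already on, so "turn off lights" wins after reranking.
hypotheses = [("turn on lights", 0.55), ("turn off lights", 0.45)]
print(rerank_hypotheses(hypotheses, lights_are_on=True))  # ('turn off lights', 0.45)

A real system would of course draw on richer context and learned models, but even this toy version shows how a little world knowledge can turn a likely misrecognition into the command the customer intended.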

What was the central theme of your presentation?

Illustrating the different challenges of enabling human-like reasoning for an assistant such as Alexa. What seems simple on the surface has a lot of subtlety underneath, and we walked through some examples to show how those subtleties manifest and discussed some of the research and product directions we see for solving them.

In what applications do you expect this work to have the biggest impact?

It’s not a single application: improved reasoning and generalized intelligence will provide a step change, rather than an incremental improvement, across many areas. For Alexa, it is one of the key pillars enabling our ambient-intelligence vision.

What are the key points you hope audiences take away from your talk?

A better appreciation for the challenges involved in advancing generalized intelligence for assistants such as Alexa in order to provide more utility for customers. In some ways, a central theme is that “it’s much harder than it looks.” To bring utility to customers across a range of applications, you have to make large advances in generalized AI to achieve the consistency and accuracy that customers want and expect, and link those advances to existing signals and context to deliver impact.

The positive news is that research in many areas demonstrates it’s possible: large language models (such as the Alexa Teacher Model), end-to-end learning paradigms, self-learning, knowledge graph construction, ambiguity resolution, fact checking, and intelligent graph querying have all seen significant progress in recent years. Challenges remain in advancing and combining these into a next-generation user product at scale, but it’s a very exciting time to be working on it!

Amazon re:MARS 2022: Human-like reasoning for an AI




