re:MARS revisited: How Amazon builds AI-enabled perception for robots


[Editor’s note: The International Conference on Intelligent Robots and Systems (IROS) is taking place this week in Kyoto, Japan. IROS focuses on future directions in robotics and the latest approaches, designs, and outcomes, so we’re sharing this interview and presentation on how Amazon builds artificial intelligence-enabled robots.]

In June 2022, Amazon re:MARS, the company’s in-person event that explores advancements and practical applications within machine learning, automation, robotics, and space (MARS), took place in Las Vegas. The event brought together thought leaders and technical experts building the future of artificial intelligence and machine learning, and included keynote talks, innovation spotlights, and a series of breakout-session talks.

Now, in our re:MARS revisited series, Amazon Science is taking a look back at some of the keynotes and breakout-session talks from the conference. We’ve asked presenters three questions about their talks and are providing the full video of each presentation.


On June 27, Bhavana Chandrashekhar, a software development manager with Robotics AI, presented the talk “How Amazon builds AI-enabled perception for robots”. The session covered how Amazon builds intelligent robots, focusing on Robin, the artificial intelligence-enabled robot deployed in some of the company’s fulfillment centers.

What was the central theme of your presentation?

This talk covers the challenges of building an AI-enabled robot for manipulation. We illustrate these challenges with Robin, the package-manipulation robot, and discuss the scale of the perception problems involved in manipulating packages and items at Amazon.

In what applications do you expect this work to have the biggest impact?

While this is a robotic-manipulation application, the talk spans perception, machine learning, deep learning, and continual-learning concepts that are broadly applicable to other domains, both within and outside robotics.
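
The continual-learning idea mentioned above, in which a perception system learns from its own mistakes, typically follows a collect-label-retrain loop. Below is a minimal, hypothetical Python sketch of that pattern; the names (PerceptionModel, PickOutcome, ContinualLearner) and the retraining threshold are illustrative assumptions, not Robin’s actual implementation.

```python
# A minimal, hypothetical sketch of a mistake-driven continual-learning loop.
# All names and thresholds are illustrative assumptions, not production code.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PickOutcome:
    """One pick attempt: what the robot saw and whether the pick succeeded."""
    image_id: str
    predicted_segment: int
    success: bool


@dataclass
class PerceptionModel:
    """Stand-in for a segmentation model used to select individual packages."""
    version: int = 0

    def retrain(self, hard_examples: List[PickOutcome]) -> None:
        # In a real system this would fine-tune the model on newly labeled data.
        self.version += 1
        print(f"Retrained model to version {self.version} "
              f"on {len(hard_examples)} failure cases")


@dataclass
class ContinualLearner:
    """Collects failed picks and periodically retrains the perception model."""
    model: PerceptionModel
    retrain_threshold: int = 3
    failures: List[PickOutcome] = field(default_factory=list)

    def record(self, outcome: PickOutcome) -> None:
        if not outcome.success:
            # Failed picks become labeled "hard examples" for the next training run.
            self.failures.append(outcome)
        if len(self.failures) >= self.retrain_threshold:
            self.model.retrain(self.failures)
            self.failures.clear()


if __name__ == "__main__":
    learner = ContinualLearner(model=PerceptionModel())
    for i, success in enumerate([True, False, True, False, False]):
        learner.record(PickOutcome(image_id=f"img_{i}",
                                   predicted_segment=i,
                                   success=success))
```

In a production setting, the retraining step would fine-tune the deployed model on the newly labeled failure cases rather than simply incrementing a version counter, but the feedback loop is the same: mistakes become training data.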

What are the key points you hope audiences take away from your talk?

An appreciation of the perception challenges in manipulation, an understanding of system-level behaviors in robotics, and insight into the scale of the robotics problems we solve at Amazon.

How Amazon builds AI-enabled perception for robots




