The science behind the Halo Body feature



With Amazon Halo, a health and wellness membership, individuals can measure their own body fat percentage (BFP) and track it through a personalized 3D model. This kind of scanning is usually only possible with expensive, sophisticated machines, but Halo’s Body feature makes it available to anyone with a smartphone via the Halo app. To achieve this, Amazon scientists drew on ideas from computer vision, computer graphics, and artificial intelligence, along with creative problem-solving.

The science and engineering team faced two challenges in developing the Body feature: first, estimating BFP from smartphone photos without any other direct measurements; second, creating a personalized 3D model of the user’s body.

Solutions to both problems involved a combination of deep neural networks, which are capable of learning tasks by identifying patterns in large amounts of data, and classical algorithms in computer vision and computer graphics.

Estimating body fat percentage from images

Estimating body fat percentage is a complex process. At-home smart scales do not directly measure body fat; they measure the body’s electrical resistance and use equations to convert it to BFP. That resistance can fluctuate widely depending on how hydrated you are throughout the day, leading to large errors in the resulting BFP estimates.

Commercial-grade measurement tools, such as hydrostatic dunk tanks and air displacement plethysmography, measure body volume, which is then converted to BFP. They are more accurate than at-home smart scales, but they require access to a trainer or a special facility, and each scan costs money. Dual-energy X-ray absorptiometry (DXA) is considered the clinical gold standard for body composition and is widely used, but the machines require a prescription and a scan can cost as much as $80.

“All these different methods try to estimate BFP through indirect measures,” said Amit Agrawal, an Amazon principal scientist who has worked on Amazon Halo. “Borrowing the idea of indirect measurement, we challenged ourselves to build a computer vision system that can accurately predict BFP via visual features measured from images such as overall body shape and details of the body such as muscle definition and fat folds.”

The solution: develop a technology that uses convolutional neural networks (CNNs), a class of deep neural networks commonly applied to image analysis, and semi-supervised learning, a machine learning approach for training models with limited ground-truth data.
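For readers curious what such a model looks like in practice, here is a minimal, hypothetical sketch of a CNN that maps a body photo to a single BFP number. The architecture, layer sizes, and names are illustrative only and are not Amazon Halo’s actual network.

```python
# Hypothetical sketch: a small CNN that regresses body fat percentage (BFP)
# from a single body photo. Not Amazon Halo's real architecture.
import torch
import torch.nn as nn

class BFPRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor: learns shape and texture cues
        # (overall silhouette, muscle definition, fat folds).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head: collapses the feature vector to one BFP value.
        self.head = nn.Linear(128, 1)

    def forward(self, photos):                 # photos: (batch, 3, H, W)
        feats = self.features(photos).flatten(1)
        return self.head(feats).squeeze(1)     # (batch,) predicted BFP

model = BFPRegressor()
example = torch.randn(2, 3, 256, 256)          # two dummy smartphone photos
print(model(example).shape)                    # torch.Size([2])
```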

The input to the machine learning model is the set of photos captured with the smartphone, and the output is a single number: the body fat percentage. To train the model, it would typically be necessary to collect photos from many users under different scanning conditions, along with their actual BFP. The problem: measuring everyone’s BFP with DXA would be prohibitively expensive.

Instead, the team pre-trained a CNN to learn a representation of the human body that can extract discriminative features from images. The network analyzes the overall shape and details of the body in the images to extract visual features relevant to body composition. Data from actual DXA scans was then used to fine-tune the network via semi-supervised learning.
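The two-phase recipe can be sketched roughly as follows. This is a toy illustration, assuming a simple reconstruction objective as a stand-in for whatever representation-learning task the team actually used; the data, layer sizes, and hyperparameters are placeholders.

```python
# Hypothetical sketch: pre-train a CNN encoder on plentiful unlabeled body
# photos, then fine-tune a BFP head on a small set of photos paired with
# DXA ground truth. All names and data below are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                       # shared feature extractor
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
decoder = nn.Linear(64, 3 * 64 * 64)           # crude reconstruction head
bfp_head = nn.Linear(64, 1)                    # regression head for fine-tuning

unlabeled_photos = torch.randn(16, 3, 64, 64)  # many photos, no BFP labels
labeled_photos = torch.randn(4, 3, 64, 64)     # few photos with DXA-measured BFP
dxa_bfp = torch.tensor([22.5, 31.0, 18.2, 27.4])

# Phase 1: representation learning on unlabeled photos (reconstruction pretext task).
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(5):
    recon = decoder(encoder(unlabeled_photos))
    loss = F.mse_loss(recon, unlabeled_photos.flatten(1))
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: fine-tune the encoder plus a BFP head on the small DXA-labeled set.
opt = torch.optim.Adam(list(encoder.parameters()) + list(bfp_head.parameters()), lr=1e-4)
for _ in range(5):
    pred = bfp_head(encoder(labeled_photos)).squeeze(1)
    loss = F.mse_loss(pred, dxa_bfp)
    opt.zero_grad(); loss.backward(); opt.step()
```

The point of the two phases is that the expensive DXA labels are only needed for the second, much smaller step.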

A recent clinical study, whose results haven’t been published yet, determined that Body is nearly twice as accurate as smart scales in measuring BFP when using DXA as the ground truth.

Building personalized 3D avatars from images

Until recently, if you wanted to have a virtual model of your own body, you would have to stand in a room-sized 3D scanner with multiple synchronized high-end cameras around you. These expensive systems are used for applications in animation and gaming, but aren’t generally available to consumers.

Scientists on the Halo team undertook the ambitious goal of developing a tool capable of producing a 3D virtual representation of a customer’s body from a simple set of smartphone photos.

To do that, they trained a deep neural network that estimates the shape and pose parameters of an underlying statistical body model from the captured photos. Again, the key challenge was acquiring the data necessary to train the model.
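The article does not name the statistical body model, but such models commonly represent a body as a mean (template) mesh deformed by a linear blend of learned shape basis vectors, with separate pose parameters for joint rotations. The sketch below illustrates that idea with made-up numbers; the dimensions and the model itself are assumptions, not Halo’s actual formulation.

```python
# Hypothetical sketch of a statistical body model: a template mesh deformed by
# a linear blend of shape basis vectors. A network would predict the blend
# weights ("shape parameters") and joint rotations ("pose parameters") from photos.
import numpy as np

NUM_VERTICES = 6890          # illustrative mesh resolution
NUM_SHAPE_PARAMS = 10        # betas: height, girth, proportions, ...

rng = np.random.default_rng(0)
template = rng.normal(size=(NUM_VERTICES, 3))                 # mean body mesh
shape_basis = rng.normal(size=(NUM_SHAPE_PARAMS, NUM_VERTICES, 3))

def body_mesh(betas: np.ndarray) -> np.ndarray:
    """Deform the template mesh by a linear blend of shape basis vectors.
    (A full model would also apply pose-dependent deformation and skinning.)"""
    return template + np.tensordot(betas, shape_basis, axes=1)

# A deep network would map captured photos to shape (and pose) parameters;
# here we only show how predicted betas turn into a personalized 3D mesh.
predicted_betas = rng.normal(size=NUM_SHAPE_PARAMS) * 0.1
avatar_vertices = body_mesh(predicted_betas)
print(avatar_vertices.shape)   # (6890, 3): one 3D point per mesh vertex
```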

“You would need the image of a person, as well as the 3D model of the same person captured at the same time, to train this model. That would be very expensive, because you’d have to capture data on a lot of different people with different ethnicity, age, gender, and all those variations,” Agrawal said.

To solve that problem, they decided that instead of building an end-to-end system (going from the photo directly to the 3D avatar), they would build a system with two modules. The first starts from the original photo and obtains a silhouette of the user by segmenting the person from the background, producing a black-and-white, two-dimensional image of the body shape.
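The specific segmentation network is not described in the article, so the sketch below only shows the final step of this first module: turning a soft person-versus-background mask (whatever model produces it) into a binary silhouette image. The threshold value and function names are hypothetical.

```python
# Hypothetical sketch of module one: threshold a per-pixel person-probability
# map (the output of some person-segmentation network) into a black-and-white
# silhouette: 255 where the person is, 0 elsewhere.
import numpy as np

def to_silhouette(person_probability: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert a soft mask with values in [0, 1] into a binary silhouette image."""
    return np.where(person_probability >= threshold, 255, 0).astype(np.uint8)

# Dummy 4x4 "probability map" in place of a real segmentation output.
mask = np.array([[0.10, 0.20, 0.10, 0.00],
                 [0.10, 0.90, 0.80, 0.10],
                 [0.20, 0.95, 0.90, 0.20],
                 [0.10, 0.30, 0.20, 0.10]])
print(to_silhouette(mask))
```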

The second module transforms the silhouette image into the 3D avatar. At this stage, the team decided to use synthetic data instead of expensive 3D scans. The synthetic images were generated with graphics-rendering software that takes 3D body models and produces their corresponding 2D silhouettes. The team then used these synthetic examples to train the system to predict 3D models from silhouettes.
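A rough sketch of that training loop is below. The crude ellipse "renderer" stands in for the real graphics-rendering software, and the network sizes, parameter counts, and training settings are placeholders; the sketch only illustrates the idea of generating (silhouette, shape-parameter) pairs synthetically and training a network to invert the rendering.

```python
# Hypothetical sketch of module two's training: render synthetic silhouettes
# from known shape parameters, then train a CNN to recover the parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

RES = 64                      # silhouette image resolution
NUM_SHAPE_PARAMS = 10

def render_silhouette(betas: torch.Tensor) -> torch.Tensor:
    """Stand-in renderer: real pipelines rasterize a full 3D mesh; here an
    ellipse whose width and height depend on the first two shape parameters
    is enough to create toy training data."""
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, RES),
                            torch.linspace(-1, 1, RES), indexing="ij")
    width = 0.4 + 0.1 * torch.tanh(betas[0])    # torso width grows with beta 0
    height = 0.8 + 0.1 * torch.tanh(betas[1])   # body height grows with beta 1
    return ((xs / width) ** 2 + (ys / height) ** 2 <= 1.0).float()

# Synthetic dataset: random shape parameters and their rendered silhouettes.
betas = torch.randn(256, NUM_SHAPE_PARAMS) * 0.5
silhouettes = torch.stack([render_silhouette(b) for b in betas]).unsqueeze(1)

# Small CNN that maps a silhouette image back to shape parameters.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_SHAPE_PARAMS),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(10):
    pred = net(silhouettes)
    loss = F.mse_loss(pred, betas)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the synthetic pairs come from rendering, ground-truth shape parameters are available for every training example without scanning a single person.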

With this process, the Body feature can create personalized 3D body models of customers, so they can keep track of changes in their bodies over the course of their health journey. They can also simulate how their bodies would change at different levels of body fat.

“We’re making 3D scanning accessible, particularly in the context of human body composition and how it relates to long-term health,” said Prakash Ramu, an Amazon senior manager of applied science.  

Ramu, who has 13 years of experience in computer vision and image processing, noted that while Body doesn’t match the fidelity of traditional 3D scanners for details such as muscle definition, it is highly accurate for overall shape and body proportions, which are what matter for long-term health. That makes it an accessible and accurate in-home tool for people interested in measuring and tracking their body shape.

Ramu also noted that privacy is foundational to the design of Halo. The body scan images used to build the 3D avatar and to measure BFP are automatically deleted from the cloud after processing; after that, they live only on the customer’s phone unless the customer has explicitly opted in to cloud backup.

Halo Body’s potential to impact people’s health

One of the most important breakthroughs of the Body feature is that it grants easy access to a health indicator that is much more useful than body mass index (BMI), notes Antonio Criminisi, senior manager of applied science on the Halo team.

“Doctors have known for many years that body fat percentage is a better indicator than BMI, because it better predicts medical risks of cardiovascular disease, or even certain types of cancer,” he said. “This issue is particularly important when you become older. At that stage, weight loss tends to be associated with losing muscle mass, and that’s often not good news.”

Criminisi, who has worked for several years on computer vision and machine learning applied to the analysis of medical images, says that a lack of access is most often what prevents people from using BFP as a health indicator.

“What we’ve done is bridge that gap and make this technology a lot cheaper and easy to use,” he said.

The team knows it still has challenges ahead but says it is constantly looking to improve Halo.

“Building a customer-facing product for health applications is inherently challenging due to lack of data and a high bar on clinical accuracy and privacy,” Ramu said. “By building upon ideas in deep learning, classical computer vision and computer graphics, we have tackled the hard challenges in delivering a new product that reaches higher accuracy than alternatives such as bio-impedance scales. We are incredibly excited to share this technology with our customers and will continue to improve it over time to keep delighting our customers with exciting and useful new features.”
