Earlier this year, we reported a speech recognition system trained on a million hours of data, a feat made possible by semi-supervised learning, in which ...
At next week’s Interspeech, the largest conference on the science and technology of spoken-language processing, Alexa researchers have 16 papers, which span ...
Today is the fifth anniversary of the launch of the Amazon Echo, so in a talk I gave yesterday at the Web Summit in Lisbon, I looked at how far Alexa has ...
Knowledge graphs are data structures consisting of entities (the nodes of the graph) and relationships between them (the edges, usually depicted as line ...
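The node-and-edge structure described above can be sketched with a minimal example. This is a generic illustration, not Amazon's implementation: a knowledge graph represented as a set of (subject, relation, object) triples, where entities are nodes and relations are labeled edges; the entity and relation names are invented for the sketch.

```python
# Hypothetical knowledge graph as a set of (subject, relation, object) triples.
# Subjects and objects are the entities (nodes); relations label the edges.
triples = {
    ("Echo", "is_a", "smart speaker"),
    ("Echo", "runs", "Alexa"),
    ("Alexa", "developed_by", "Amazon"),
}

def neighbors(graph, entity):
    """Return the (relation, object) edges leaving `entity`."""
    return {(r, o) for (s, r, o) in graph if s == entity}

print(neighbors(triples, "Echo"))
# Contains ("is_a", "smart speaker") and ("runs", "Alexa")
```

Triple stores used in practice index all three positions so that queries such as "which entities have relation R to X?" are equally cheap.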
Large online data repositories like the Amazon Store are distributed across massive banks of servers, and retrieving data from those repositories must be ...
Automated reasoning is the algorithmic search through the infinite set of theorems in mathematical logic. We can use automated reasoning to answer questions ...
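The search idea behind automated reasoning can be illustrated in its simplest decidable setting, propositional logic, where "search" reduces to enumerating all truth assignments. This is a toy sketch of semantic entailment checking, not any production reasoning engine; real automated-reasoning tools handle far richer logics where the theorem set is infinite and exhaustive enumeration is impossible.

```python
from itertools import product

def entails(premise, conclusion, variables):
    """Check whether `premise` semantically entails `conclusion` by
    enumerating every truth assignment to `variables` (a truth table)."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if premise(env) and not conclusion(env):
            return False  # counterexample found
    return True

# Modus ponens: does (p -> q) and p entail q?
premise = lambda e: (not e["p"] or e["q"]) and e["p"]
conclusion = lambda e: e["q"]
print(entails(premise, conclusion, ["p", "q"]))  # True
```

The enumeration is exponential in the number of variables, which is why practical solvers rely on pruning strategies rather than brute force.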
Most modern natural-language-processing applications are built on top of pretrained language models, which encode the probabilities of word sequences for ...
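"Encoding the probabilities of word sequences" can be made concrete with the simplest possible language model, a maximum-likelihood bigram model over a tiny toy corpus. This is a pedagogical sketch only; pretrained models estimate these probabilities with neural networks over vast corpora, not by counting.

```python
from collections import Counter

# Toy corpus; real language models are trained on billions of words.
corpus = "the cat sat on the mat the cat ran".split()

# Count bigram and preceding-word frequencies.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def p_next(prev, word):
    """Maximum-likelihood estimate of P(word | prev)."""
    return bigrams[(prev, word)] / unigrams[prev]

print(p_next("the", "cat"))  # 2/3: "the" appears 3 times, followed by "cat" twice
```

Chaining such conditional probabilities gives the probability of a whole word sequence, which is exactly the quantity a language model is trained to estimate.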
State-of-the-art language models have billions of parameters. Training these models within a manageable time requires distributing the workload across a ...
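One common way to distribute that workload is data parallelism: each worker computes gradients on its own shard of the batch, the gradients are averaged across workers (an all-reduce), and every worker applies the same update. The sketch below is a pure-Python stand-in with an invented one-parameter model, not a real training framework; in practice the per-shard gradients are computed in parallel on separate accelerators.

```python
def local_gradient(w, shard):
    """Gradient of mean squared error for the model y = w * x on one shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train_step(w, shards, lr=0.01):
    grads = [local_gradient(w, s) for s in shards]  # parallel across workers in practice
    avg = sum(grads) / len(grads)                   # the all-reduce step
    return w - lr * avg                             # identical update on every worker

# Two workers, each holding a shard of data generated by y = 2x.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(200):
    w = train_step(w, shards)
print(round(w, 2))  # converges toward 2.0
```

Because every worker holds a full copy of the parameters, plain data parallelism breaks down when the model itself no longer fits on one device, which is what motivates sharded approaches like the MiCS work mentioned later in this list.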
The Federated Logic Conference (FLoC) is a superconference that, like the Olympics, happens every four years. FLoC draws together 12 distinct conferences on ...
In an Amazon Science blog post earlier this summer, we presented MiCS, a method that significantly improves the training efficiency of machine learning ...