The cornucopia of technology that keeps spilling out of Qualcomm’s R&D labs has, over time, taken on a characteristic shape. The company draws from the depths of its technology portfolio various blocks of intellectual property, each representing years of development across the wide range of domains required to deliver a complete cellular experience. It assembles them in a unique combination, with various adds and tweaks to optimize for a particular function, then slathers on a layer of software to make that function easily available to the companies implementing it.
In the case of an announcement last week, that function is audio. Qualcomm is best known as a supplier of cellular modems, but it also has an important business in mobile processors. These complete systems on a chip (SoCs) are designed to handle all of a cell phone’s functions, one of which is audio. Those who use their phones for nothing but texting and social media may have forgotten that cell phones were once entirely for talking and listening. The first data type that went over cell phones was voice audio.
Naturally, it was important to get the quality right in both directions. But of nearly equal concern was power consumption. The first law of mobile communications has always been to use as little power as possible for any given effect; otherwise, battery life quickly evaporates. In addition, since smartphones have largely subsumed portable media players, a substantial proportion of modern smartphone activity is devoted to listening, either to audio directly (Pandora, Spotify, podcasts) or as the accompaniment to video (YouTube, Hulu, Netflix).
These days, mobile audio quality good enough for voice is not necessarily good enough for high-definition audio streaming, gaming, augmented and virtual reality, and real-time multiplayer interaction. So audio technology has had to meet increasingly demanding output quality standards, improving the sound that listeners hear. But now the input side is becoming just as demanding. Talking is coming back into vogue with the advent of voice-activated, artificially intelligent assistants (Amazon’s Alexa/Echo, Google Assistant, and Microsoft’s Cortana). Today, these assistants are typically embodied in tabletop speakers with microphones in the home. In the future, they will be embedded in all sorts of products.
In response to this increased focus on both the input and output sides of the audio equation, Qualcomm introduced last week two new products in its audio line: the QCS400 series and the CSRA6640. (For the euphonious names, you can thank Qualcomm’s industrial customers, who don’t feel comfortable with part numbers unless they’re truly obscure.) The former is an entire audio system on a chip — including artificial intelligence capabilities — aimed squarely at the smart speaker/home hub market. The latter is a very small smart amplifier that fits into the power envelope of mobile and even IoT devices.
As always, the parts are built up from existing blocks of intellectual property, with a few tweaks and twists to customize them for the application, and some new blocks as well, all integrated into a single piece of silicon with careful attention to power consumption, the baseline requirement of mobile and battery-operated devices.
Specifically, the new QCS400 series has a “low-power island block,” a lovely name for a small area of the chip that does only one thing: listen for a specific audio signal and, when it arrives, wake the rest of the device from its near-dead sleep state. This tiny block stays lit like a night light just in case you say a wake word like “Alexa,” “Okay, Google,” or “Hey, Cortana,” at which point it instantly fires up whatever subsystems are needed to carry out the command that follows. This scheme allows the smart hub to sit out there on standby with no AC power for 14 days, juiced and ready to go.
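The division of labor described here — a tiny, always-on detector that gates much hungrier main subsystems — can be sketched in a few lines. This is an illustrative model only; the function names and the string-matching detector are hypothetical stand-ins, not Qualcomm APIs, and a real island block would run a hardware keyword-spotting model, not text comparison.

```python
# Illustrative sketch of the "low-power island" pattern: a minimal
# always-on detector is the only part running until a wake word hits.

WAKE_WORDS = {"alexa", "okay google", "hey cortana"}

def low_power_listen(audio_frames):
    """Stands in for the island block: scans incoming frames and returns
    the frame containing a wake word, or None if nothing matched."""
    for frame in audio_frames:
        if frame.strip().lower() in WAKE_WORDS:
            return frame  # wake word detected: time to power up the rest
    return None

def wake_and_handle(command):
    """Stands in for powering up the CPU/DSP/Wi-Fi blocks on demand."""
    print(f"waking main subsystems to handle: {command!r}")

# Usage: everything heavier than the detector stays asleep until a match.
stream = ["background noise", "alexa"]
if low_power_listen(stream) is not None:
    wake_and_handle("open the blinds")
```

The point of the pattern is that the loop at the top costs almost nothing, while everything behind `wake_and_handle` draws power only after a match.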
In addition to a quad-core version of Qualcomm’s standard Kryo central processing unit (CPU), an Adreno graphics processing unit (GPU), and a Hexagon digital signal processor (DSP), the QCS400 audio processor also has a second Hexagon DSP with dedicated vector-processing registers for efficient execution of artificial-intelligence tasks. Other blocks are dedicated to security processing, managing Wi-Fi and Bluetooth connections, processing display output (optional, for a non-headless device), and, of course, audio interfaces.
The QCS400 comes in four flavors — 403, 404, 405, and 407 — with the 403 targeting home hubs and voice assistants, smaller smart speakers, and entry-level soundbars; the 404 aiming for smart speakers, midrange soundbars, and audio-capable mesh routers; the 405 designed for premium smart speakers, smart soundbars, and display-capable home hubs; and the 407 being reserved for premium smart soundbars and high-end audiovisual receivers.
Among other improvements, the CSRA6640 amplifier boils previous four- and two-chip solutions down to a single SoC. With the low power profile of this efficient package, the amplifier can easily be mounted in small or mobile form factors. The audio quality is nonetheless uncompromised, because the part takes advantage of direct digital feedback amplifier (DDFA) technology, which spent 10 years in development at Qualcomm. DDFA essentially compares the output signal to the input signal and corrects for any distortion it detects. An additional feedback loop does the same for the power circuit, which can introduce noise into the audio output.
From the listener’s perspective, DDFA reduces distortion, even at maximum volume. It also lowers background noise, leaving pleasant, immersive, high-quality sound.
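The compare-and-correct idea behind DDFA can be illustrated with a toy digital feedback loop. This is a deliberately simplified sketch, not Qualcomm's actual algorithm: the `distort` function stands in for whatever error the amplifier stage or power rail introduces, and the loop feeds the measured deviation back into subsequent samples.

```python
def feedback_correct(input_signal, distort, gain=0.5):
    """Toy feedback loop: compare each output sample to the ideal input,
    then feed the measured error back into the next sample's correction.
    `distort` models the amplifier stage (e.g. power-supply noise)."""
    correction = 0.0
    output = []
    for x in input_signal:
        y = distort(x + correction)   # amplifier output, pre-corrected
        error = x - y                 # deviation from the ideal output
        correction += gain * error    # apply feedback to later samples
        output.append(y)
    return output

# A crude distortion: a constant offset, as a noisy power rail might add.
noisy_amp = lambda x: x + 0.2

corrected = feedback_correct([1.0] * 8, noisy_amp)
# successive samples converge back toward the clean 1.0 input
```

With the constant-offset model above, the first output sample carries the full 0.2 error, and each subsequent sample's error is halved as the feedback accumulates; a real DDFA loop runs this correction continuously at audio sample rates.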
Qualcomm’s traditional customers are mobile handset OEMs (Google, Huawei, LG, Motorola, Oppo, Samsung, and Xiaomi) and carriers (AT&T, British Telecom, China Mobile, Orange, and Verizon). The company’s wide-ranging business portfolio already gives it customers beyond this core group, but these new audio products could lead Qualcomm into entirely new markets.
Andrea Cantone, senior manager of product marketing in charge of the CSRA6640 amplifier, sees it this way: not only will consumers enjoy smart input and output for better audio in more mobile circumstances, but businesses will find it a useful branding vehicle. Imagine if Hilton or Marriott Hotels decided to brand a white-box version of one of these smart speakers, programming its own name into the low-power island block as the wake word.
You walk into your hotel room and say, “Hilton, open the blinds.”
At which point it says, “Right away, [your name goes here, pulled up from registration information],” and starts hauling on the automated shade control.
Looking at the view of the city below and enjoying this feeling of command, you say, “Hilton, where’s a good place to grab some grub around here?”
And it responds by saying, “There are a number of places in the area. I’ll bring up OpenTable on your TV screen and help you pick if you like. Do you have a craving for any particular type of cuisine?”
“Well, yes, Hilton,” you say. “I rather fancy Chinese this evening.”
And Hilton, who is by now your best friend and confidant, says, “Here are your options for Chinese restaurants within one mile.”
This scenario, of course, presumes an agreement between Hilton and OpenTable, along with investments in things like automatic shade openers (which some hotels already have). It also presumes a third-party integrator, which opens a whole green field of partner potential for Qualcomm among the systems integrators who customize standard products for specific large customers.
Nishant Kumar Mittal, Qualcomm’s audio product director, notes that the QCS400 allows for customization of keywords for local commands “without interfering with OEM vocabulary,” which means that companies basing their products on Amazon’s ecosystem will have Alexa and Echo available by default, but can drop in their own corporate or product name if they want. This capability is key for potential branding.
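The "custom keywords without interfering with OEM vocabulary" idea amounts to layering a brand's wake words on top of a fixed default set. The sketch below is a hypothetical illustration of that layering; the class and method names are invented for this example and do not correspond to any Qualcomm or Amazon API.

```python
# Hypothetical sketch of layered wake-word vocabularies: the OEM's
# defaults stay active while an integrator adds a brand keyword on top.

class WakeWordRegistry:
    def __init__(self, oem_words):
        self.oem = {w.lower() for w in oem_words}  # fixed by the OEM
        self.brand = set()                         # added by the integrator

    def add_brand_word(self, word):
        word = word.lower()
        if word in self.oem:
            raise ValueError("would shadow an OEM keyword")
        self.brand.add(word)

    def matches(self, utterance):
        # Both vocabularies are live; neither interferes with the other.
        return utterance.lower() in self.oem | self.brand

# Usage: Alexa keeps working by default; "Hilton" is layered on top.
registry = WakeWordRegistry(["Alexa"])
registry.add_brand_word("Hilton")
```

The guard in `add_brand_word` captures the non-interference constraint: a brand keyword may extend the vocabulary but never replace or shadow what the OEM ships.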