Voice trading has been an instrumental part of the institutional trading ecosystem for decades. Even with the advent of electronic trading, voice trading has continued to play an important role in daily trading workflows. But voice trading does not happen in a vacuum, and background noise has always been part of the equation, especially on the noisy trading floors and broker pits of yesteryear. Now, with traders working remotely, voice trading background noise has taken on a new shape and, as a result, so has the technology used to detect it.
How big a problem is background noise in voice trading? It starts with the quality of the microphones traders use. Whether at home, in an office or on a boisterous trading floor, traders alone cannot control what their microphones can or cannot pick up.
Historically, phone turrets required traders to be no more than an inch from the microphone for the device to capture a clear enough sound, a constraint traders understandably hated. And while leaning into the microphone gets old quickly, most traders also dislike using headsets. Instead, they use a device of their choosing, many of which have become more portable and less cumbersome.
However, with advances in microphone design and mobility come new challenges – in this case, a new assortment of background noises.
Today, microphones are built so traders can be as much as a foot away. While these newer microphones allow traders to speak from a less intrusive distance, the “critical distance” – the range within which background speech and noise can be detected – expands as well. New microphones solve a physical problem for traders but create a new one for engineers and even compliance teams by picking up more noise around the room.
At the outset of the COVID-19 pandemic last year, traders were forced into the confines of their homes, creating an entirely new set of background noises for microphones to detect. Despite the advancements in trader microphones, the technology was not yet capable of removing sounds such as babies crying, lawnmowers, or kitchen appliances like dishwashers.
Even traders who went home to ostensibly quiet houses found that subtle, unfamiliar sounds were being picked up by today’s noise detection algorithms. Laptops and cell phones are changing the way background noise is captured, and even the seemingly trivial clicks of a computer mouse, dangling wires or the placement of a computer can change how sounds reach the microphone, creating a nightmare for audio engineers.
With recent advancements in Machine Learning, systems can be trained on a set of good and bad case scenarios and gradually learn to distinguish speech from everything else, to the point where non-voice sounds can be almost completely eliminated. In some cases, even the reverberation of a small, reflective office can be reduced, improving intelligibility. The trained Machine Learning models employed today are dramatically better than the voice trading technology available twenty years ago, and computing power is now readily available enough for these capabilities to run without any noticeable impact on audio.
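The train-on-labeled-examples idea can be sketched in miniature. Production systems use deep networks trained on large speech corpora; the toy below, which is purely illustrative and not Cloud9's method, learns a single threshold on spectral flatness (a feature that is high for noise-like audio and low for tonal, voiced audio) from labeled "voice" and "noise" clips:

```python
import numpy as np

def spectral_flatness(signal):
    """Ratio of geometric to arithmetic mean of the power spectrum.
    Near 1.0 for noise-like signals, much lower for tonal/voiced ones."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12
    return np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)

def fit_threshold(voice_clips, noise_clips):
    """'Training': place the decision boundary midway between the
    mean flatness of the labeled voice and noise examples."""
    v = np.mean([spectral_flatness(c) for c in voice_clips])
    n = np.mean([spectral_flatness(c) for c in noise_clips])
    return (v + n) / 2

def is_voice(clip, threshold):
    return spectral_flatness(clip) < threshold

# Synthetic stand-ins: harmonic "voiced" tones vs. broadband noise.
rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
voice = [np.sin(2 * np.pi * 180 * t) + 0.5 * np.sin(2 * np.pi * 360 * t)
         for _ in range(5)]
noise = [rng.standard_normal(8000) for _ in range(5)]
thr = fit_threshold(voice, noise)
```

Real models replace the single hand-picked feature with thousands of learned ones, but the workflow is the same: label examples, fit a decision rule, then gate audio through it.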
The microphone is not the sole issue; it also matters who is talking on the other side of the line. While there may be minimal background noise on one side, the person on the other end of the conversation may have a multitude of noises being captured by their microphone. And with a poor microphone, it is hard to isolate and remove background noises such as chairs moving, tapping keyboards and crumpling paper.
The goal is to reduce background noise coming in from as much as twenty feet away. Noise reduction algorithms have been developed and incorporated into today’s microphones that are highly accurate at filtering out unnecessary background noise. Earlier algorithms were easily tricked by pet sounds, appliances and other complex noises produced in the home.
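One of the oldest noise reduction techniques, and a useful mental model for what these algorithms do, is spectral subtraction: estimate the noise's magnitude spectrum from a noise-only stretch of audio, subtract it frame by frame, and resynthesize with the noisy signal's phase. This sketch is a textbook baseline, not the proprietary algorithms described above:

```python
import numpy as np

def spectral_subtraction(noisy, noise_sample, frame=256):
    """Subtract an averaged noise magnitude spectrum from each frame
    of the noisy signal, flooring at zero to avoid negative magnitudes."""
    n = len(noise_sample) // frame * frame
    noise_mag = np.abs(
        np.fft.rfft(noise_sample[:n].reshape(-1, frame), axis=1)).mean(axis=0)
    out = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame + 1, frame):
        spec = np.fft.rfft(noisy[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        # Rebuild the frame using the cleaned magnitude and original phase.
        out[start:start + frame] = np.fft.irfft(
            mag * np.exp(1j * np.angle(spec)), frame)
    return out

# Toy demo: a 440 Hz "voice" tone buried in stationary white noise.
rng = np.random.default_rng(1)
sr = 8000
t = np.arange(4096) / sr
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.3 * rng.standard_normal(len(t))
denoised = spectral_subtraction(noisy, 0.3 * rng.standard_normal(sr))
```

Spectral subtraction assumes stationary noise, which is exactly why it was "tricked" by barking dogs and dishwashers; the ML approaches above learn what speech looks like instead of assuming what noise looks like.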
Source separation allows algorithms to distinguish speakers in overlapping, simultaneous conversations while discerning which background voices can be weeded out.
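Most modern separators work by estimating a mask over a time-frequency representation and keeping only the bins that belong to the target speaker. The toy below, purely illustrative, shows the masking mechanics in the easiest possible case, where the two "speakers" occupy disjoint frequency bands; real systems learn the mask per speaker rather than using a fixed split:

```python
import numpy as np

def mask_separate(mixture, sr, split_hz):
    """Split a mixture into two sources with a binary frequency mask.
    Real separators learn a time-frequency mask per speaker; here the
    two sources conveniently occupy disjoint bands."""
    spec = np.fft.rfft(mixture)
    freqs = np.fft.rfftfreq(len(mixture), 1.0 / sr)
    mask = freqs < split_hz
    low = np.fft.irfft(spec * mask, len(mixture))
    high = np.fft.irfft(spec * ~mask, len(mixture))
    return low, high

sr = 8000
t = np.arange(2048) / sr
speaker_a = np.sin(2 * np.pi * 250 * t)          # low-pitched target voice
speaker_b = 0.5 * np.sin(2 * np.pi * 1500 * t)   # higher-pitched background voice
recovered_a, recovered_b = mask_separate(speaker_a + speaker_b, sr, split_hz=800)
```

Real voices overlap heavily in frequency, which is why learned masks, and the speaker models behind them, are needed to decide which voice each bin belongs to.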
Directionality is difficult to detect with a single microphone, but with more than one device, modern algorithms and microphone enhancements can identify where voices are coming from and deliver much better audio quality, ultimately helping engineers and compliance personnel mitigate background noise issues.
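The basic reason two microphones can localize a voice is that sound reaches them at slightly different times. A minimal sketch of that principle, assuming two synchronized microphones a known distance apart, estimates the inter-microphone delay by cross-correlation and converts it to an arrival angle:

```python
import numpy as np

def estimate_delay(ref, delayed):
    """Cross-correlate two microphone signals and return how many
    samples `delayed` lags behind `ref` (positive = arrives later)."""
    corr = np.correlate(delayed, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

def bearing_degrees(delay_samples, sr, mic_spacing_m, c=343.0):
    """Convert an inter-microphone delay into an arrival angle,
    measured from the line perpendicular to the two-mic axis."""
    ratio = np.clip(c * delay_samples / (sr * mic_spacing_m), -1.0, 1.0)
    return float(np.degrees(np.arcsin(ratio)))

rng = np.random.default_rng(2)
sr = 48000
signal = rng.standard_normal(4096)           # broadband voice stand-in
mic_a = signal
mic_b = np.r_[np.zeros(5), signal[:-5]]      # same sound, 5 samples later
delay = estimate_delay(mic_a, mic_b)
angle = bearing_degrees(delay, sr, mic_spacing_m=0.2)
```

Knowing the angle lets a system keep the voice arriving from the trader's seat and suppress sounds arriving from elsewhere in the room.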
Many of the world’s largest CSPs and technology vendors have employed teams of data scientists to develop these capabilities, particularly with the increasing intersection of audio and video. AI and Machine Learning will play a significant role in the advancement of these capabilities, as well as more complex areas such as loudness and tonal balancing, reverberation reduction and immersive audio trading desk experiences.
The next step in advancing these algorithms and microphones even further is tailoring the technology to how the trading floors of today actually look. While many of these algorithms are being developed individually, some voice trading problems are far less tolerable to end-users than others and present immediate challenges. A key element will be securing more source data in order to keep innovating and maintain high-quality results as the environment continues to evolve. Firms that are not already looking into these capabilities risk being left behind.
To learn more about how Cloud9 can help reduce background noise in institutional voice trading, contact us today.
Andrew Pappas is a Chief Architect at Cloud9 Technologies. He joined Cloud9 in 2014 after spending nearly 17 years at IPC Systems as a Senior Software Engineer. He holds a BE in Electrical Engineering and Biomedical Engineering from Farmingdale State College.