Machine learning powers a slew of features across Google’s apps and services. The company spent both I/O 2019 and today highlighting various accessibility use cases. That happened to spur a discussion on Twitter that concluded with Google acknowledging that Android ML fart detection is technically in the realm of possibility.
To provide some context for the reveal, Google this morning detailed an upcoming update to Live Transcribe as part of Global Accessibility Awareness Day. Next month, the Android app will be able to detect and show “non-speech audio cues” in addition to its existing speech transcription capability.
This includes clapping, laughter, music, applause, or the “sound of a speeding vehicle whizzing by.” Transcribed speech will continue to appear at the top of the screen, while day-to-day sounds will be highlighted below.
According to Google, “seeing sound events allows you to be more immersed in the non-conversation realm of audio and helps you understand what is happening in the world.” For example, you will be able to see when there’s a door knock, whistling, or a barking dog.
Hilariously, when the question of fart detection came up, the official Android Twitter account replied with “Yes, our ML can do it, but it’s difficult acquiring a test data set.” As seen with Project Euphonia, if enough audio samples are collected, machine learning can be leveraged to recognize a wide range of speech and sound patterns.
ML is good at finding such patterns, but as @Android points out, collecting recordings of farts would be “difficult” and embarrassing. Then again, it would hardly be the most daunting task Google has taken on. Additionally, there is a use case for Android ML fart detection in Live Transcribe, given that flatulence is objectively a sound that informs “what is happening in the world” and the social context of a room. Regardless, April Fools’ this year had one last hurrah.
Yes, our ML can do it, but it's difficult acquiring a test data set.
— Android (@Android) May 16, 2019