Alphabet, through Verily Life Sciences, is invested in applying technology and data science to health care. Google’s in-house Brain team is as well, with the machine learning division running a pilot study that applies existing voice recognition technology to transcribe medical conversations.
Google notes that a large part of a modern doctor’s day is spent managing records and adding notes to patient files:
Unfortunately, physicians routinely spend more time doing documentation than doing what they love most: caring for patients. Part of the reason is that doctors spend ~6 hours in an 11-hour workday in the Electronic Health Records (EHR) on documentation.1 Consequently, one study found that more than half of surveyed doctors report at least one symptom of burnout.2
This work is tedious but vital, as “good documentation helps create good clinical care by communicating a doctor’s thinking, their concerns, and their plans to the rest of the team.”
Building on the rise of medical scribes who listen to patient-doctor conversations and take notes, Google is investigating the use of voice recognition technology already available in Assistant, Home, and Translate.
In a paper, Google notes that it’s possible to leverage Automatic Speech Recognition (ASR) models for advanced transcription of medical conversations. The work goes beyond existing solutions by transcribing both the doctor and the patient in a conversation.
Furthermore, it’s not limited to predictable medical terminology, but supports “everything from weather to complex medical diagnosis.”
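For context, here is a minimal sketch of how multi-speaker transcription of a doctor-patient recording could be done today with Google’s publicly available Cloud Speech-to-Text API, which is a productized cousin of the same voice recognition stack, not the research ASR models described in the paper. Speaker diarization tags each recognized word with a speaker number, which is how a doctor’s turns could be separated from a patient’s. The Cloud Storage path, sample rate, and two-speaker setting are illustrative assumptions.

```python
# Minimal sketch: transcribing a two-speaker (doctor/patient) recording with
# Google Cloud Speech-to-Text speaker diarization. This uses the public
# google-cloud-speech client, not the research models from the paper.
from google.cloud import speech

client = speech.SpeechClient()

# Ask the recognizer to attribute each word to one of two speakers.
diarization_config = speech.SpeakerDiarizationConfig(
    enable_speaker_diarization=True,
    min_speaker_count=2,  # assumption: one doctor, one patient
    max_speaker_count=2,
)

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,  # assumption: 16 kHz mono recording
    language_code="en-US",
    diarization_config=diarization_config,
)

# Hypothetical Cloud Storage path to a recorded consultation.
audio = speech.RecognitionAudio(uri="gs://example-bucket/consultation.wav")

response = client.recognize(config=config, audio=audio)

# The final result aggregates every word with a speaker_tag (1 or 2),
# which is enough to reconstruct a turn-by-turn transcript.
words = response.results[-1].alternatives[0].words
for word in words:
    print(f"speaker {word.speaker_tag}: {word.word}")
```

For recordings longer than about a minute, the asynchronous long_running_recognize variant of the same call would be used instead of the synchronous recognize shown here.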
Google is now working on a pilot study with Stanford University to investigate “what types of clinically relevant information can be extracted from medical conversations to assist physicians in reducing their interactions” with medical systems. On the privacy front, Google notes that patients will have to give consent and that their data will be de-identified to protect their privacy.