Bryan Mosher

How Muse’s Powerful Machine Learning Tool Works

The Muse tool provides clinical teams with additional insight to identify a patient's final days, giving clinicians a valuable opportunity to intervene at a critical moment in a patient's life. These insights allow for a holistic approach that ensures everyone, from the patient to the family, receives high-quality care, and they empower clinical teams to direct resources to the patients who need immediate attention. Users of the tool agree that it transforms the care process. But how does it actually work?

At Muse, we use a wide variety of data, from semi-structured to unstructured, to predict patient decline. Our models are deep-learning-based neural networks that operate at the visit grain, but much of the substance lies in our data layer and preprocessing.

A robust data layer

Our approach to data ingestion follows an Extract-Load-Transform (ELT) model. The data layer is built on the Snowflake platform because of its ability to handle extremely large amounts of data with ease. As a data scientist, Snowflake and its Python connector are a dream to work with: you can load millions of rows directly into Pandas DataFrames in a matter of seconds. Snowflake's capabilities for shaping aggregate data are just as impressive. By handling complex queries efficiently, it lets us explore modeling scenarios that would normally bog down entire teams.
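The load step of that ELT flow looks roughly like this. This is a minimal sketch, not our production code: sqlite3 stands in for the Snowflake connection so the example is self-contained, and the table and column names are hypothetical.

```python
import sqlite3
import pandas as pd

# In production this would be a Snowflake connection via its Python
# connector; sqlite3 stands in here so the sketch runs anywhere.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE visits (patient_id TEXT, visit_date TEXT, discipline TEXT);
    INSERT INTO visits VALUES
        ('p1', '2023-01-02', 'RN'),
        ('p1', '2023-01-05', 'RN'),
        ('p2', '2023-01-03', 'MSW');
""")

# ELT style: raw data is already loaded in the warehouse; the transform
# happens in SQL at query time, and results land straight in a DataFrame.
df = pd.read_sql(
    "SELECT patient_id, COUNT(*) AS n_visits FROM visits GROUP BY patient_id",
    conn,
)
print(df)
```

The same pattern scales up: pushing the aggregation into the warehouse query keeps the DataFrame that reaches the modeling code small and already shaped.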

Preprocessing pipeline

As mentioned, Muse uses a variety of data and operates at the visit grain. Unstructured documents come in the form of visit narratives, medication lists, and assessment data, which resemble head-to-toe questionnaires on the patient's state. We leverage unsupervised models like FastText and Doc2Vec to extract numerical features from the different sources. These extracted features are then shaped longitudinally and fed to downstream classifiers.

The primary reason we use unsupervised algorithms is that they do a phenomenal job of extracting the meaning of a document like a nurse's visit narrative. They capture not only certain key words but the overall context of the document. They are also fantastic at generalizing to new data, which is highly desirable in the hospice world, where the data is non-standard.

Tying it all together

Once we have the embeddings for a particular category of data, we build a data structure at the visit grain that includes a window of the patient's previous visit history. This key step gives the network context about where the patient started and their overall trajectory of decline. The result is then fed into an ensemble model that integrates the different data sources over time.
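The windowing step can be sketched in Pandas as follows. This is a toy illustration, not our pipeline: a single scalar stands in for each visit's embedding vector, and the two-visit window and column names are hypothetical.

```python
import pandas as pd

# Toy per-visit features (one scalar stands in for a full embedding).
visits = pd.DataFrame({
    "patient_id": ["p1", "p1", "p1", "p2", "p2"],
    "visit_num":  [1, 2, 3, 1, 2],
    "feature":    [0.2, 0.5, 0.9, 0.1, 0.4],
})

# For each visit, attach the previous two visits' features so a
# downstream model sees the patient's trajectory, not a snapshot.
visits = visits.sort_values(["patient_id", "visit_num"]).reset_index(drop=True)
for lag in (1, 2):
    visits[f"feature_lag{lag}"] = (
        visits.groupby("patient_id")["feature"].shift(lag)
    )

print(visits)
```

Early visits naturally have missing history (NaN lags), which the downstream model has to handle, for example by masking or imputing the empty window slots.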

Interested in learning more about what drives the engines of the Muse tool? Click below to contact us today.
