I'm not very familiar with the metrics for evaluating progress in medical fields, so I'm asking in a general sense.
Depends on how you define AI to some degree, but yeah. Protein folding has basically been solved in the past few years with neural-network based AI systems.
Absolutely, and it has done so for over a decade. Not LLMs, of course; those are not suited to the job. But there are lots of specialized AI models for medical applications.
My day job is software development for ophthalmology (eye medicine) and people are developing models that can, for example, detect cataracts in an OCT scan long before they become a problem. Grading those by hand is usually pretty hard.
Can you tell me more about your job? As a fellow computer guy, I would really appreciate first-hand experience.
For what it’s worth, even LLMs are helpful for writing notes. Easier to capture each interaction within a patient’s chart for a certain visit and generate a summary than to rely on the memory of a doctor/nurse/MA to recall everything accurately hours after the fact.
So… the medical professional takes voice notes, which then get transcribed (okay, this is fine), and then summarized automatically? I don't think the summary is a good idea. It's not a car factory; the MD should get to know my medical history, not just a summary of it.
The history is all there, the chart should contain each granular instance of what is done in exacting detail. The summary is just one of those obligatory elements of patient interactions covering the key details of a visit because no one has time to review each and every data value in the chart when looking through medical history.
That aspect is not even a new AI thing to justify AI, that’s just how it works today. Each doctor or nurse who works with a patient for a given visit puts in a little paragraph or two summary of everything they did and their plans for future care, and maybe they also import some of the key data points they consider directly relevant. And while I think this shouldn’t be the case, a lot of this can happen hours or even days after that visit is over.
A lot of that work can be streamlined or even made more accurate using an LLM where the only data set referenced is the data that exists in the chart. Not hallucinatory ChatGPT garbage, but something more airgapped and tailored to that specific purpose.
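For what it's worth, the "only reference the chart" constraint is usually implemented by putting the chart entries themselves into the model's context and instructing it to cite them, rather than by retraining anything. A rough sketch of that prompt-construction step, with the function name and prompt wording purely illustrative (not any real EMR vendor's API):

```python
def build_summary_prompt(chart_entries):
    """Build a summarization prompt whose only source material
    is the list of chart entries for this visit."""
    # Number each entry so the model can cite exactly where a claim came from.
    numbered = "\n".join(
        f"[{i}] {entry}" for i, entry in enumerate(chart_entries, start=1)
    )
    return (
        "Summarize the visit using ONLY the numbered chart entries below. "
        "Cite the entry number for every statement; if a detail is not in "
        "an entry, omit it.\n\n" + numbered
    )

prompt = build_summary_prompt([
    "BP 128/82, HR 74",
    "Patient reports intermittent left knee pain x 2 weeks",
    "Plan: X-ray left knee, follow up in 1 week",
])
```

The citation requirement is what makes the output checkable: a reviewer (or a validation script) can verify that every sentence in the summary points back to a real entry. It doesn't make hallucination impossible, but it makes it detectable.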
You can’t make an LLM only reference the data it’s summarising. Everything an LLM outputs is a collage of text and patterns from its original training data, and it’s choosing whatever piece of that data seems most likely given the existing text in its context window. If there’s not a huge corpus of training data, it won’t have a model of English and won’t know how to summarise text. And even restricting the training data to medical notes won’t stop it from potentially hallucinating something from someone else’s medical notes that’s commonly associated with things in the current patient’s notes, or from leaving out something in the current patient’s notes that’s rare or totally absent from its training data.
I have anecdotal evidence that ML applied to image recognition is being used to improve imaging machines (MRI, tomography, etc…)
Thanks for using the right term, “machine learning”. There are tons of papers on Kaggle showcasing better-than-0.5 accuracy in predicting positive diagnoses. Not to mention professional image-recognition machines have been deployed to hospitals, in service aiding doctors, for almost a decade. That was before the AI stock market blew up.
I remember when “AI” was just the pathfinding in video games
Yes. Objectively.
I know a few attendings who use it for dictation. It records the entire conversation with the patient, plus whatever the doc dictates to it, and by the time they’re out of the room a note is typed up in the right format, so they don’t have to stare at the computer the whole visit. According to them, it’s a lot more time-efficient to have it dictate the notes and double-check them at the end of the day, versus typing something up after every patient. It’s approved by the hospital and integrated directly into the EMR, so I guess it’s HIPAA compliant.