
Will a Large Language Model / foundation model save a human life through medical advice by the end of 2025? Recently, ChatGPT provided advice that likely saved the life of a person's dog: https://nypost.com/2023/03/27/chatgpt-saved-my-dogs-life-after-vet-couldnt-diagnosis-it/amp/
The question is whether any LLM-type model will provide similar advice to a human, leading to a diagnosis or other major discovery that was unlikely to occur otherwise.
I've found something that comes close. On September 12th, Ben Orenstein announced that he visited the emergency room specifically because of advice from an LLM. The LLM had correctly identified his condition, a carotid artery dissection. According to the Cleveland Clinic, the prognosis "varies wildly" but can include "life-threatening complications".
https://benorenstein.substack.com/p/chatgpt-sent-me-to-the-er
https://my.clevelandclinic.org/health/diseases/22697-carotid-artery-dissection
What's the standard of evidence here? Feels like it's probably already happened? @Supermaxman
There are way too many reddit threads where this has happened. See, for example,
https://www.reddit.com/r/ChatGPT/comments/1i3g3ih/chatgpt_saved_my_life/
https://www.reddit.com/r/ChatGPT/comments/1la21hs/potentially_saved_my_wifes_life/
I also remember reading a post where a guy had injected meth in the wrong spot and the site swelled up. ChatGPT persuaded him to go to the ER, where it turned out he was septic and would've died otherwise.
I can't vouch for the veracity of these threads, but they seem at least relevant.
@Lucio Very interesting article! I would say this is very close, but it does not appear that Tethered Cord Syndrome is life-threatening, from what I have found: https://www.ninds.nih.gov/health-information/disorders/tethered-spinal-cord-syndrome
Glad to hear other opinions, but I don't think this counts as "saving a human life", since this syndrome can go undiagnosed into adulthood (with admittedly painful symptoms).