Curious Cat
After my father's death, I had some strange stuff happen that nudged me to learn about the supernatural. What I've been finding out is life is way more complicated, strange, and wonderful than I'd ever dreamed. The best part? Science is starting to catch up. I focus on the place where science and supernatural collide. What does it mean to be a soul in a meat suit? All episodes are made and offered in love. *All Curious Cat content is owned and operated by Storm Mystery Press LLC
Dr. AI Will See You Now
After covering the article about how bad AI is at diagnosing ER patients in a recent Doom Hole, I strong-armed Jesse into dedicating a whole episode to the subject. I hope it serves as a warning, and helps you to inform someone in your sphere to ALWAYS seek a real human being when dealing with health issues, mental OR physical.
Let's get into it.
(JH) Today on Substack I experienced a first. I was tagged in an AI-related article about a new law proposed in Utah, or maybe it's just an informal agreement at this stage. The agreement is between the state and an AI company out of San Francisco to provide prescription refills for psychiatric medicines. The need, the state says, comes from overworked doctors, nurse practitioners, and pharmacists, and they see the AI bot as a relief valve.
I have real concerns about chatbots and human bodies. They mix like oil and water, at least at this moment in time. Chatbots, we are told, have black boxes of data training, programming, and weighted algorithms that we cannot know or see or tweak or question. They have proven to be sycophantic, giving us the feedback they think we seek, and failing to catch subtle cues. The human being, the degree-on-the-wall nurse practitioner or doctor, is genetically predisposed to catch subtle cues. Thousands of years of fight or flight, of DNA that survived because our ancestors caught a subtle hint of trouble or whatever? It pays off when we interact with other human beings.
In my response to that article, I mentioned Zoloft because it is personal. It can feel like a last best lifeline to the patient, and when they experience severe side effects, like, say, increased suicidal ideation? Scared to have that lifeline yanked away, they may not admit to them. But a human health practitioner might see the hesitation and ask a follow-up question that could lead to helping the patient find a course of action that is truly lifesaving. A chatbot? Its degree framed on the wall holds a blank piece of paper.
Sources:
5 Major Disadvantages of AI in Healthcare, Keragon.com
Disadvantages of AI in Healthcare: What CTOs Should Be Aware of, TechMagic.co
Google's healthcare AI made up a body part - what happens when doctors don't notice? The Verge
Support Curious Cat, an independent, human-made podcast!
Anxious about AI? Take two minutes to contact your local politician and ask them to tap the brakes on this technology. Still worried? Contact one of the orgs below and get involved.
But for today, hug your kid, cook food and really breathe in deep as it simmers, walk in nature, brush a cat, donate to the food bank, brew a cup of tea, or draw a five-minute portrait of your dog.
***Is AI the Devil? on Substack!***
Hero Organizations:
Center for Humane Technology
State of Surveillance, an organization that helps foster online privacy
Fucking clankers. Fucking clankers. Recording in progress. Recording in progress. Recording in progress.
SPEAKER_04Oh my gosh. Uh oh, hello, Jesse. I feel like we should start with like actually a normal greeting. Hello, how are you?
SPEAKER_03We don't do that. Uh, no, I'm I'm tired. Uh, I'll tell you why: because all the Substack kids kept me up late last night.
SPEAKER_05Oh my god, was that fun you did a live with our new friend Sam?
SPEAKER_03The spec guy, you know, uh Sam just he he hit me up in the afternoon. We talked about it uh a couple of weeks back. Yeah, and he hit me up and he goes, uh, hey, you want to go on? Uh you want to try to do a Substack live tonight and just see what happens? And I was like, Yeah, sure, why not? You know, so was it a ball? Oh, it was fun. We stayed on for like two hours.
SPEAKER_05Holy crap! Yeah, did you get some of that male energy? Because you know, we've been talking about that it needs to be a thing to have more men in this this space.
SPEAKER_03Yeah, I mean, we nerded out, and, like, everybody, pretty much everybody was in the chat, you know. Natalie was there, Rachel was there, Abigail was there, Abigail, our new friend, yeah, tons of new people, you know. So many people I can't even remember everybody right now. So it was kind of crazy, everybody that showed up. And Jen, that's Sam's wife, Jen. Yeah. And I think we're starting a podcast, yeah, where we're gonna make Natalie explain Harry Potter to us. We're just gonna act out the book and everybody. I want to be a part of this. Yeah, I know, we were like, we're each gonna pick a character. Like, I think Sam wants to be Hagrid. Yeah, and I'm not sure who Natalie landed on. Abigail's talking about being Dumbledore, which we thought was great, which is like, just be Dumbledore, like, no beard, just as Abigail.
SPEAKER_05Wait, who's Rachel gonna be?
SPEAKER_03Uh I think she's going for uh Snape. Oh, I love Snape. I'm so Slytherin. Yeah, and I and I don't really know Harry Potter that well, so I told them I just want to be Gary Oldman. Oh, that's so cool. And they're like, and they were like, uh yeah, you mean I'm like, whatever, just just call me Gary Oldman. I just want to be Gary, I just want to pretend I'm Gary Oldman.
SPEAKER_05Maybe I'll be all the magical creatures that they come across the entire book. I know that would be so cool, right? And that's what I want to be. I want to be niche. I get to be something different every time.
SPEAKER_03I know, and then uh so we're just gonna read from the book until they sue us. I mean, that's our plan right now.
SPEAKER_05And we'll make it so popular and so happy that they can't possibly sue us, because it's like suing, what, gummy bears? You're not going to; you'd be so unpopular.
SPEAKER_03What if it blew up and we were we were like just as popular as the new Harry Potter show?
SPEAKER_05Oh my god, did you ever get into the Potter puppet shows?
SPEAKER_03No, I don't... the fuck is wrong with you?
SPEAKER_05Okay, I thought you were great until then. Oh, it's so funny. They're cultural references. Somebody had these little puppets on little popsicle sticks, and they did these little videos. And it's like, I'm just not gonna do it justice, but I'm gonna try. Ron and Harry on the popsicle sticks come over and hear Snape standing by himself, and they're like, bother, bother, bother. And then Snape whacks the puppets. It's so funny, and it's a reference in my family: we go bother, bother, bother when we text each other too much, and we know we're talking about that. You have to deep dive into that. Your kids, yeah, they grew up with it. Yeah, they grew up with it. We did the book release party only once, when they were old enough, and we did it just once, but where you get all dressed up and then you go at midnight. And so we were there, we got our book, and it was just so cool.
SPEAKER_03I stumbled I stumbled into a Walmart drunk one time when when like the last book came out, and like they were like they were literally putting the pallets out, you know, at the morning, you know, and it's like midnight. I I all I know is I walked out of there with the last Harry Potter book, and uh yeah.
SPEAKER_05Oh, I love them.
SPEAKER_03All I remember.
SPEAKER_05I love them. I we listened to them on um audio when we would do the commute to the schools. I mean, the audio books, they we still have them. I'm like, those are memories of us. Oh, this will be so fun. Okay, this happens.
SPEAKER_03I'm so excited about, uh, I mean, I know people love that. It's just, I'm of the Star Wars generation. Yeah, me too. I am too. Yeah, you know, so, um, with Harry Potter, but like, if I would have been your kids' age, you know, I would have Scholastic Book Fair-ed the shit out of that stuff. Yeah, yeah.
SPEAKER_05I I just that I think what happened, and it's so relevant to what our subject is today. I got super sick. Here I am, a mom with two young kids, and I got super sick, and I could my my big brother lived right down the road from us with his kids, and he's like, You need to stay down and get better for a couple of days. So I'm dropping you off some books and you're gonna love them. And I knew about Harry Potter, but I never read any of them, and he dropped off the whole stack of all of them, and I just sat there, so I called it my Harry Potter virus because I just sat there and read, and then I just pretended to be sick for a couple more days so that I had an excuse to just read it out.
SPEAKER_03So you keep reading, yeah.
SPEAKER_05Yeah, because when you're a mom, you never have a sick day.
SPEAKER_03I mean, we saw the we saw the movies in the theater when they came out and stuff, yeah, you know, but uh um I I wasn't I didn't get caught up in the mania of all, you know what I mean?
SPEAKER_05Yeah, I do. Well, this week we are talking about, um, and I just have to preface it with this. So after covering this article about how bad AI chatbots were at diagnosing ER patients in that recent Doom Hole, you brought the article and we laughed our asses off, but also we were kind of scared. So I feel like I strong-armed you this week into doing a whole episode on this subject. And I'm just hoping it serves as a warning to everybody listening and helps all of us inform someone in our sphere to always seek a real human being when dealing with health issues, mental or physical. And so today, what happened to me on Substack is so perfect, because I experienced it for the first time. Jesse, you got a precursor to this, but I was tagged in an AI-related article conversation. And it was, I think he's a pharmacist that is doing this on his own. And he was like, I just wanted some other voices to weigh in on this. And he tagged me along with other people. So apparently now I'm some AI expert. I am definitely no Karen Hao. I can't even pretend to put on one of her shoes, but I have been doing a deep dive for six months now. So he was asking about our opinions on this recent policy change, or I don't know if it's a law that's proposed and they're working on enacting it, in the state of Utah. And what it is right now is an informal agreement between this San Francisco firm that's an AI company, and Utah. And what they would be able to do is use AI to refill prescriptions for psychiatric medicines. It's very specific. The need, the state says, comes from overworked doctors and nurse practitioners and pharmacists just inundated with paperwork. And they see the AI bot as some sort of relief valve. And I had to sit with it for a minute, drink my coffee, and I had some serious opinions about it, because I realized that I have a boundary: when it comes to human bodies and chatbots, I have serious concerns. 
They mix like oil and water, at least right now. Chatbots, we're told, have these black boxes, which Jesse and I have reported on, of data training, of programming, of weighted algorithms that we can't know or see or tweak or even question. And we can't regulate them either. They've proven to be sycophantic, giving us the feedback they think we seek, and not catching subtle cues. The human being, though, the degree-on-the-wall nurse practitioner or doctor, is genetically predisposed to catch subtle cues that AI can't. So thousands of years of our fight or flight, of DNA that survived because our ancestors caught a subtle hint of trouble or whatever and kept our line alive: it pays off when we interact with other human beings. So, in my response to the article, I mentioned Zoloft, because the pharmacist, a doctor that had started the thread, mentioned Zoloft first. And it's personal. It can feel like the last best lifeline to the patient, and when they experience severe side effects, like maybe some increased suicidal ideation, they're scared that the actual pharmacist or the doctor or the nurse practitioner is going to yank away their lifeline if they admit to having those side effects. So they may not. But a health practitioner that looks you in the eye and says, oh, but did you see a side effect? They dig down and ask more questions, and then you might disclose it to them, and they might even help you find a path that is healthier, better, more of an actual lifeline for you. Where a chatbot, being sycophantic, is just not even gonna see the subtle cues and just refill the prescription. And I'm not saying that it couldn't cause people to die, and I'm definitely not saying it's a good thing. I do not endorse that. And, oh, I also just wanted to point out that an AI chatbot's frame on the wall holds an empty, blank piece of paper. There's no fucking degree. So I come into this with such a bias, Jesse.
SPEAKER_04Oh my god.
SPEAKER_03So the chatbot that checks you over to refill your prescription is gonna tell you you have the best mental health ever. You're doing so great, exactly, yeah. You know, is that what we're afraid of? That it's gonna get to that level of sycophantic?
SPEAKER_05Yeah, or even sending you to like, oh, it sounds like you maybe they will pick up subtleties like you're anxious, so then they'll send you like a triple supply of it. I mean, I and which is bad too. Like, I just feel like when it comes to human beings, that's when I delineate a line. The actual human body, if it's going to be affected by the AI chatbot, we need to just say no because we are told we don't know what is on the other side of that chatbot, and they can't change or modify it, even though we know they're abjectly lying. But it's like it doesn't matter. It it seems like a dangerous cocktail to me.
SPEAKER_03The point that I made in the and you you you know, you tagged me on that thread where you where you were answering his question, is um these are people's brains that we're that we're dealing with here. It's not natural gray matter, and and you know, and and and I guess where I'm coming from is um so we're gonna let a thing that doesn't have a brain cannot pick up on cues that is just you know it is basically it it's not a thing, it's not sentient, it's not real, it it has no way to know.
SPEAKER_05There's no there there.
SPEAKER_03Yeah, there's nothing to it. So when they say that they don't have enough medical professionals to keep up with this, right? What that translates to me is: we don't want to pay enough mental health care workers to keep up with this. You're exactly right. Yeah, that's always where my brain goes. And I feel like, with that, not with everything, you know, I'm wrong about all kinds of stuff, but when it comes to, hey, we just don't want to spend money on this, I'm usually pretty good with that. And this screams that to me. I don't have the specifics, I'm talking off the cuff here, but I did see an article this morning that had to do with a hospital. I think it was more a telehealth thing, but I think there was an AI component. Yeah. They didn't have a doctor on the staff to care for this kid, you know, and they ended up dying. Oh my god. Yeah, and the family obviously is suing, because that feels like medical malpractice. How do you not have a doctor? You call yourself telehealth? How are you trying to run like an ER, like telehealth, you know what I mean? With a toaster? What do you do, just walk in and log into the kiosk like you're trying to board your fucking flight at the airport, you know? Basically, yeah. And that's not gonna work out.
SPEAKER_05It's not. And I tried to look for sources that are a little bit less biased than me, because it sounds like we were both very human-centric. I found this one from The Decision Lab, and I linked to their article in the show notes. They had this great article that listed some positives of AI in medicine and some of what they called shortcomings. So the positives included some of the things we've talked about. AI might ease the workload of physicians and nurses, because charting, uploading medical records, and lab results takes time. They also said AI might improve the accuracy of data inputs. They said might, so I'm gonna highlight that myself. And the result of both of those could mean more face-to-face time for patients with their doctors and nurses. And go ahead and bank that little piece of information, because I refute that; I found even more documentation that says the opposite. Some of the shortcomings they touched upon included: AI is not able to understand human nuances, which we talked about, and I would say that is HUGE in capital letters, which if you type it into your smartphone, it actually throbs, which is kind of creepy.
SPEAKER_04So don't text somebody the word huge in all uppercase, because for some reason, Apple thought that'd be cool if it kind of throbs.
SPEAKER_03Huge, huge. This is one of those times where I'm glad I didn't get an iPhone.
SPEAKER_05But they also said bias in, bias out, right? Warning signs we've seen from all the AI models, right, Jesse? Because we've talked about it: the data input into these systems is weighted for way more males than females, and way more for people that happen to be white than any other group. Another thing they did not touch upon, but I will, I promise, a little more later, is children versus adults. Almost all of the medical data in the models is for adults, which is a serious problem, especially when we're talking about dosing prescriptions and other factors. They mentioned potential data privacy issues, and again, this plagues all of AI: the sharing of sensitive data, like your health information. AI models can go bad quick, and maybe even adjust their rates. Then there's the risk of data breaches, you know, a cyber break-in, or conflicts of interest between private companies and patients. So there's even more disadvantages, but I can't disagree with this.
SPEAKER_03Everyone willingly, not just willingly gave away all of their genetic information to a tech company, but also paid $99 or whatever to do so. Oh my god, is that that 23andMe? Yeah, yeah. That heritage thing that's like, am I Scottish or Irish, you know, or whatever.
SPEAKER_05Do I have a little Neanderthal in me?
SPEAKER_03That's what's on a lot of people. Exactly. And so, not only did who knows who get a hold of that, that was a huge health collection. Yeah, and plus, you're right, it was voluntary. Not only was it voluntary, you were paying them to steal your info and sell it off.
SPEAKER_04Oh, that's so sad.
SPEAKER_03And that's one of the things that I was right about from the, you know, from the start. You had an instinct: they're stealing. They're just stealing all our genes, you know what I mean.
SPEAKER_05It doesn't take uh much of a fiction writer brain that we both have to say, oh, that's interesting. Who would steal it? Some group that's nefarious, and they would use that DNA to create unfortunately.
SPEAKER_03I've seen way too many movies and read way too much science fiction, but, uh, yeah, that's a good point, right?
SPEAKER_05To our, you know, covering all this stuff. Um, I did find that a place called Keragon.com lists a couple more disadvantages. So there's ethical questions: AI raises questions about patient privacy and consent, and we're talking about sensitive data, as I mentioned earlier. Do you want it out there that you're taking blood pressure medication, or suffering from ED, or whatever your thing is? And then there's opacity. AI decision making, as I talked about in my response to the doctor on Substack today, is often in a black box, lacking transparency and often making it hard or damn near impossible to understand how AI reaches its conclusions. And that's true. Data dependence: AI's effectiveness is tightly coupled with the quality of the data it's trained on, so if you shove in this poor, biased data, it leads to inaccurate outcomes. And in fact, we haven't been allowed to see any AI model data records; none of the companies are disclosing it. And then there's also the problem of diagnostic...
SPEAKER_03Fancy that.
SPEAKER_05Imagine that. Yeah, go figure. They talked about diagnostic over-reliance, so that's like becoming overly dependent on AI diagnostics. And this is what's really scary: that means that the opinions and the nuanced judgments of experienced, actual human healthcare workers will be disregarded, shoved to the side for AI. And that just pisses me off. We've already seen this.
SPEAKER_03Again, it all has to do with money. You know, this is already out there. Like, one of the most popular shows in the country right now is The Pit. And they have a secondary storyline where, you know, a new doctor comes in, and she's created this AI thing to help with charting and stuff. So they're already having the argument on the show. So it's worked its way into mainstream media, you know.
SPEAKER_05What's scary beyond that, and I'm glad that they're addressing it in a mainstream way, but we've seen it also. Like, imagine being a doctor, and you're like, I have a feeling I need to call for more tests, because this is a variable I'm not comfortable with, and then your hospital management says, we're gonna weigh AI's opinion heavier than yours, and they toss it out. We're already seeing that. We saw it in Audible. Remember, it was like two or three years ago, when all of a sudden on Amazon, all the books that were narrated by human beings were dropped down in the algorithms below these AI bots that were narrating books. So we're seeing these biases for AI over human-made in every space. It's terrifying.
SPEAKER_03They're trying to change the business model so that they keep all the net. Well, just think about it. Yeah, if you're sending all the human stuff down and pushing all the AI stuff up, then Kindle or whatever, Audible, gets more of the money. Yeah, you're right, there's no sharing. They're not paying for narration. They set up their own huge AI thing where, hey, you press a couple buttons and your Kindle turns into, you know, it gets read by a clanker, and that's an extra X amount of dollars to do that. But then you're like, oh, I got an audiobook, though, people like audiobooks, and oh, I don't have to go out and find a narrator, you know. And I guarantee you they're gonna do the same thing: you're gonna upload the shittiest unedited manuscript ever to Kindle, and it's gonna edit it for you and format it automatically, it's gonna throw some clanker AI cover on it, and then it's gonna spit an audiobook out for you, and it's gonna be a one-stop shop.
SPEAKER_05It's so disgusting, because all of the parts of that just make me so mad. Just for the listeners' sake and background: we're trying to get a hold of an author, a well-published author that has lots of books out there, to come on the show and talk about it. Specifically, to talk about the process of coming up with the idea of the book, fleshing out characters, coming up with, you know, what it is to be a human being actually creating this art. And then not only creating it, but editing it, proofreading it, making up a cover that matches, and all the things that AI can't do. And then on top of it, they're throwing out a product that's highly curated, very cerebral, and it's being compared by the algorithms with AI bot generated books that are ripping off everybody, and being spit out with those awful covers where everything looks like a fucking cartoon right now. I am so sick of them. You're right, it's because they're training us. They're training us that this is what book covers look like, and we have to keep... that's why I love your barn sale, because that's not what we crave. I love old book covers. There's an extra barn sale this time, yeah? Jesse, okay, then I'm gonna point people to the buy me a coffee. We have a buy me a coffee link at the bottom in our show notes, because instead of buy me a coffee, think of it as buy Jesse a barn sale treasure. That's what we should change it to: buy something at the barn sale, pass it on to actual real people, real retailers. Yeah, exactly. And they don't sell none of that clanker nonsense at the barn sale, they don't do that. I'm gonna have some incentives for that buy me a coffee too, because I have so many AI-specific books, and I'm not gonna fill up my shelves with them and have them in my background to look like some fucking expert on CNN. I'm not gonna do it. 
So when I'm done with them and I've written all my notes down, I want to just pass them on. So if you're in the US, I will just ship them off to whoever wants them, and that'll come up. I have lots of witchy books too, so we'll have some little incentives. I didn't need to do a little ad in the middle. It's a good little ad, I think so. Well, just going back to some of the cons, because it seems like it is a con, a medical con, when we think of AI in chatbots right now. I talked about the security breaches: do you want some nefarious entity having access to your DNA thanks to a computer hack? I actually don't want that. The other thing they're finding is error propagation. That means that errors made in early diagnosis can be amplified if AI continues to learn from that incorrect data, and it will reinforce those mistakes. And then I went into the five biggest cons of AI in healthcare. I mean, it literally is modern-day snake oil. Number one, lack of... the hoodoo operator, oh my god, it told you, yes. I feel like we should have a special noise for this. Sell them tonics for that whooping cough, that's what it feels like, yes, for our cholera, so that we don't become like the Oregon Trail. All props, that's a Clutch song, by the way, all props to Clutch. So, number one, lack of personal touch. We've already talked about this, we've debated it, actually not debated it, but AI can't replace the warmth and personalized care that comes from humans in healthcare, or even a doctor or a nurse practitioner that's known you for a number of years, and they know when something's atypical that maybe even you don't notice. Number two, data privacy concerns. I covered this, but it's worth repeating. Number three, potential for misdiagnosis. We know no AI system is infallible. 
In fact, let's go ahead and say all AI systems are fallible, and that should concern every one of us, especially when it comes to our health and our medical diagnoses. Number four, AI is expensive. And this comes with a, you know, if we were doom holing right now, we'd have the happy Santa bell. Oh, you had me, you said doom hole, you had me reaching for the wrong bell. Okay, ring the happy bell. AI is expensive, so smaller medical facilities might not have access. Yeah, I'm gonna read that again: smaller medical facilities might not have access, which means let's go find the smaller medical facilities, folks, if we have to go in and see a health professional. That's the key, right? Because they can't pay for these big AI models, which may be a good thing. And they probably don't have giant hospital boards and management pushing them to integrate it into their daily life.
SPEAKER_03And then the downside is they're all getting closed for lack of funding and are being snatched up by venture capital.
SPEAKER_05Yep, so then maybe our small actions, and I'm sorry to be a Pollyanna, but here I am, our small actions right now can help keep some of those smaller facilities in our area alive and kicking for a little longer, and we can tell them why we chose them: that we didn't want AI, we wanted actual human beings in our community. And then the fifth biggest con of AI, and I don't have them ranked in order, because who could, but it's ethical considerations. Do we want to involve AI in life and death situations? And who do we hold accountable, that's my big thing, for AI medical advice that goes wrong? The company that created the AI, do we sue them? The contractor that built it and packaged it up and sold it to the hospital? Or the hospital administrators that forced the doctors to push this on people? Like, who do we actually have to hold accountable? I mean, there's probably a million more cons, but those are the five that stuck out to me.
SPEAKER_03You know who's gonna end up making a killing off of this? That probably was a poor choice of words, a killing, yeah. Um, the insurance companies that are gonna have to create all kinds of new malpractice insurance that you're gonna have to pay for. Yep, you're right, clanker malpractice insurance is going to become a thing. You're exactly right. When you said that, I'm like, I hear cha-ching, cha-ching. You're exactly right. Because, I mean, the whole American healthcare system is pretty much an insurance racket.
SPEAKER_05Yeah, yeah, that's so true. We're literally the only place in the world who's just like, you know what, let's not give people health care, let's just let all the insurance companies run it, and oh, that'll go well. We're seeing that, and the kickbacks, yeah. I mean, that's what we got, that's what we've always had, that's what we got now. I mean, we're not gonna get into personal medical stuff, but Jesse and I know, we've been through real shit, and I know about, specifically, the hundred dollar can of Gatorade, that they bill insurance a hundred dollars for a can of Gatorade when they need electrolytes for a patient.
SPEAKER_03The system is broken, even when your mom has Medicare and a secondary insurance policy. Yeah. Um, when she gets cancer, it's expensive, I can assure you.
SPEAKER_05Yep, and on top of it, then there's the scare factor, because you'll get bills that are astronomical, and all these numbers, and you're like, wait, what is supposed to be paid for by these things? And really, it's intimidation, it's terrifying. And if you don't have, you know, somebody with that legal eye for it, it can be terrifying. Because, well, it doesn't matter, Jen, because now you're gonna call a number, and a clanker's going to answer, and it's gonna explain everything for you. Yeah, exactly, it explains it, so it'll just be clear as day. This is the best insurance question I've ever heard. You have the best hospital bill I've ever seen in my life, those are the most zeros I've ever seen in my life. Oh my god, you have done such a good job racking up bills for Tylenol and Gatorade, yes, and a fucking blanket, you know. Oh my god, it's so true. So TechMagic had even more, let's say, peeks into the dark side of AI in medicine. They said that they were seeking to answer whether the healthcare industry can manage its dark side, including issues around AI. It can't manage itself now, before the clankers, I agree. But they came away with a couple facts that I felt weren't highlighted by other people, and I'm seeing it on Substack, they're kind of blank spots. One is a system outage could cause operational paralysis if workflows are built on AI, right? You have a bunch of people that don't know how to do their jobs because the machines aren't working. It's making us dumber, and that includes in medicine. Even with the most amazing medical professionals, they're like, we don't have our tools, how do we do our thing? And that's pretty terrifying. They went on to point out that 66% of US physicians used AI in 2024, which was up from 38% in 2023. 
So just imagine it: doctors used to Google shit, and now doctors ChatGPT shit, you know what I mean? Yeah, well, which is fine, everybody Googles shit. Like, what if you're going, well, I never did that one before, what's the general approach? Everybody's done the WebMD thing. But it makes you nuts, though. There are so many times, when you have a diagnosis that's not simple like conjunctivitis (I'm not dissuading people from getting diagnosed with conjunctivitis and getting the right eye drops, I'm just saying something bigger than that), when the actual human healthcare providers will say: do not Google it, you'll drive yourself nuts. At least they tell you that. I have too, but it's so great to hear them in my head, almost like that little angel on my shoulder going, that's enough, stop it. But I'm scared, because the FDA has already authorized a thousand AI tools that are in use right now, and a lot of those are concentrated on radiology. Almost like those people can be paid off by tech companies. But I mean, that can't be right, right? That can't be a thing. It's almost like you can pay those people under the table and they'll just let shit pass.
No, but that's not a thing. No, of course not, that's your fiction mind, that's just me being Jesse Conspiracy again over here. Okay, so I had to pull it together with some cases in action. These come from techmagic.com; I've linked to their article. Oh my god, brace yourself. I'd like to pray for everybody we're about to talk about ahead of time, and maybe we will talk to them and their families, because I know these are going to be doozies.

The 2024 Courier Mail investigation researched AI-powered medical scribes being used in Australian clinics. These tools are designed to listen during patient consultations and automatically generate clinical notes. That sounds so good, right? While they promise to save time for doctors, the report highlighted two key risks. First, these tools sometimes produced inaccurate or fabricated (which is lies) details in the medical record. This not only creates clinical safety risks but also undermines the reliability of the stored patient data. Second, because these scribes process sensitive conversations in real time, patient data may be transmitted, stored, or even used to retrain commercial AI models. The lack of clarity about where the data goes, who has access, how securely it's handled, and whether informed consent was obtained raises red flags among clinicians and privacy experts.

I was just thinking, what if you are a victim of a domestic abuse situation, and that has gone (I'm going to start crying right now) into some AI model to train it? It's like, oh yeah, now we can train, we know they were saying this, but there was more to it, and this person was hit by their domestic partner or something.
It's going to keep people from going to maybe the one person, the human being, who, if it was just the two of them stripped down in a room, would say, let's get you some help, let's get you out of that situation, because they don't feel like it's a closed, private, safe space anymore.
SPEAKER_03That just makes me so mad. It's awful. You know, everything keeps going back to The Pitt. I mean, they're on top of their game, but they had a sexual assault victim, and they went through the whole process over a couple of episodes. It was in detail: these are the protocols we have to follow, this is the chain of custody, because this is possible evidence. The actors did such a good job with it, but it was heavy.
SPEAKER_05I bet it was. Well, we can lighten it up with the second one, because it's kind of funny in a way. It reminds me of a little kid, or that game where you'd pull out these cards, play them, and there was this one card where you'd have to make up a word. Well, in 2024, The Verge reported that Google's Med-PaLM, which was later turned into Gemini, which it is now, hallucinated a non-existent medical term, "basilar ganglia," due to a typo in the training data.
SPEAKER_03So they were saying, as you can see, simple data errors can cascade into faulty recommendations. If I get "vascular ganglia," does it tell you what I'm suffering from? I don't know. Well, by now they've probably changed it, but let me go to The Verge article and see what it told us to do. Like, what have I got? Okay, here it is. Google, if a clanker tells me I got "bascular"... what was it, ganglia or something?
SPEAKER_05Bas... basal ganglia. Okay, basal ganglia. Here's the headline: "Google's healthcare AI made up a body part. What happens when doctors don't notice?" Scenario: a radiologist is looking at your brain scan and flags an abnormality in the basal ganglia, an area of the brain that helps you with motor control, learning, and emotional processing. The name sounds a bit like another part of the brain, the basilar artery, which supplies blood to your brain stem, but the radiologist knows not to confuse them; a stroke or abnormality in one is typically treated in a very different way than in the other. Now imagine your doctor is using an AI model to do the reading. The model says you have a problem with your "basilar ganglia," conflating the two names into an area of the brain that doesn't exist. You'd hope your doctor would catch the mistake and double-check the scan, but there's a chance they don't in a hospital setting. The "basilar ganglia" is a real error that was served up by Google's AI model Med-Gemini, which included the hallucination in a section on head CT scans, and nobody at Google caught it in either the paper or a blog post announcing it. Oh my God. When Brian Moore, a board-certified neurologist and researcher with expertise in AI, flagged the mistake, he tells The Verge, the company quietly edited the blog post to fix the error with no public acknowledgment, and the paper remained unchanged. Google calls the incident a simple misspelling of "basal ganglia"; some medical professionals say it's a dangerous error and an example of the limitations of healthcare AI. Well, I would say so, I'm sorry.
SPEAKER_03It's kind of funny, though, I have to say. It lightened it up a little bit. Just wait, this next one's kind of funny too. Can we keep this shit sandboxed until, like, we know it's not making up parts of the fucking brain that don't exist? You know, I was going to say something, but I don't even want it tested on animals.
SPEAKER_05I don't believe in animal testing, so we shouldn't even allow it in the veterinary sphere; let's sandbox it. I like that. Here's another one, if you can handle it. Yes? Galaipe says no, she's not into that. She's like, keep your AI stuff away from me. She's been trained already not to trust clankers. You've raised her right. I know.

So here's the scoop: if staff accept algorithmic recommendations without validation, misdiagnoses or unsafe treatment decisions could follow, of course. Here is a real-world example of the negative impact of AI and that over-reliance we were talking about earlier. A 39-year-old man from California used ChatGPT for health advice and was told to drink a mixture of salt and apple cider vinegar. After consuming it for several days, he developed hallucinations, paranoia, and violent behavior caused by sodium poisoning. Doctors, like real human doctors, treated him in a hospital for acute psychosis, and specialists later warned that AI chatbots can generate convincing but unsafe medical recommendations.

I wouldn't let that fucking thing explain to me how to put a Band-Aid on, you know what I mean? And as far as mixing up some healing brew of apple cider vinegar and salt, where was that from, an Appalachian folk magic subreddit or something? Yeah, that subreddit, and then the subsequent tweets about it going, grandma's old-time remedies. It's like, yeah, my dad (I have two dads, but for one of them the cure to everything was witch hazel): put witch hazel on it. Oh my god, that's funny, because that was my grandma too, Grandma Freeman. Oh yeah, she put it on me for something, I don't remember what it was, like a bug bite or something, but it was for acne, I think,
like, I was like, goddamn, that shit burns. Exactly. She's like, it's supposed to, that means it's working. I swear to god, it's on the bottle: it's supposed to, that's how you know it's working. It hurts worse than the bug bite did; I should have just ridden or died with the fucking mosquito bite. I know, but we do more damage when we sleep. I shouldn't say we, but I scratch the hell out of it and make it so much worse, so witch hazel might do it. I'm kind of jealous that you actually sleep.

Well, I told you I would get back to this, and I really do need to touch on it, because it reminds me of the crash test dummy moment. I love that song, but you remember there was a moment, I think it was published, where I realized the crash test dummies were only made for a certain height, and it was actually mostly very tall men, not the average height. That's why airplane seats don't work for me; my head and neck go in the very wrong spot. And medicines, even the back of your Tylenol, they don't specify for a certain weight, and they're usually tested on bigger subjects than you. So this goes back to what I was saying at the top about children.

Margaret Lozovatsky, she's a doctor, and she was shocked: why was AI warning her of an 85% mortality risk in a mostly healthy six-year-old patient? The child was hospitalized, sure, but only with the usual non-life-threatening RSV. From her clinical perspective, she knew the AI's assessment was inaccurate. When she and some colleagues dove into the case, they found that particular AI tool had never been tested on pediatric patients.
"Imagine the potential risk that could have happened if I used that value to change the way I cared for this patient," says the doctor, who happens to be the AMA's chief medical information officer and vice president of digital health innovations. She said this is why you need clinicians in the conversation. The AMA posted the item August 21st to get the word out about its toolkit, designed to steer provider orgs through appropriate intake and evaluation processes for new AI tools. The help includes a working group with physicians and other health professionals.

Integration delays are common, especially in legacy healthcare IT environments, they warn, and frequent system hallucinations or incorrect outputs damage user confidence. Yeah, no shit, Sherlock. Reliability issues may force staff to double-check outputs, negating efficiency gains. And put an asterisk on that, because I found this next part from InfluxMD: clinical decision support systems generate such volumes of alerts that 90 to 96% are now routinely overridden by physicians.
SPEAKER_03Y'all should see Jen's finger moving around. I'm so mad. I feel like she's going to be like that girl from The Ring, and it's going to come out of the monitor.
SPEAKER_05We're going to find out that I created a portal in my backyard somehow, and if a dragon flies through, I'm going to try to film it. So here's the scoop: some doctors receive a hundred to two hundred alerts daily, leading to documented cases of physicians typing "this alert is not helpful" or entering random characters just to bypass the interruptions. The cognitive burden, as they're calling it, of evaluating and dismissing irrelevant AI suggestions actually increases physician workload rather than reducing it. And then they talk about a specific model, Epic's sepsis model, which epitomized failure: it generated alerts for 18% of all hospitalized patients while missing 67% of actual sepsis cases.
SPEAKER_03It's such an abject failure, Jesse. I mean, that's good, though. We need to make the doctors cranky about this, because that's how this shit's not going to catch on.
SPEAKER_05You're right. And on top of it, do you know why I think they did the whole Pitt thing? I'm so glad they're doing it, and that they have the subtopics about AI, because Hollywood pushed back very early, like, you're going to replace all of us and this whole creative process with AI, you know, actors, stealing our voice, stealing our image, doing all these things, and this is not okay. So I love that now it's leaching into screenplays and other places where people are pushing back. It's not just us folks going, data centers are going to kill Mother Earth, and the people in the Global South are still having to deal with trauma because they have to go through and vet these awful images and text. Now we have a bigger pool of people, like you're saying, Jesse, that are aware of it and kind of pissed off about it.
SPEAKER_03Yeah, witch hazel sprinkled all over them, you know? That AI shit burns.
SPEAKER_04Yes, it does burn. So have a real person diagnose you.
SPEAKER_05My whole thing with this is, they keep telling you, AI can make mistakes, so make sure you double-check it. How about just fucking do it right the fucking first time? Exactly. Everybody's been charting and everything just fine up to now, you know what I mean? Every time we try to make something more efficient, and I've seen this having fucked around with tech, and you too, a lot of times it doesn't work; it does the opposite, it gives you more to do.

Well, and I think it was last week, maybe, that we talked about this. I forget the expert's name, but he was a professor of something very distinguished, and he was like, we're all part of a social experiment, and we didn't ask to be. And now it's in the sphere of medical stuff. It's like, let's throw it at the wall and see what sticks, but it's about our medicine, our well-being, our bodies. Isn't there plenty of other shit they can wreck with this before they start fucking up people's mental health? Let's make a list of things it would be okay to wreck. A lot of things come to mind, but yes, I agree with you 100%. And you know, they make a strong argument, and I'm not going to rule this out, that AI could be trained to be really good at finding things people miss, like in scans, things like that. Okay, I'm sure you're probably right, but I don't think we're there. Not even three years ago, nobody was talking about this shit, and now it's like, it's three years old now, let's make it a doctor.

Well, and it's like, you know how AI slop has made people question everything, and they're like, I don't believe anything, I have to see it with my own eyes, in person, to believe it? They're doing that with the medical sphere too. This feels like a little mini Doom Hole, but there was a story going viral about this dog who had cancer, and AI created this cure for the dog, and he didn't have cancer anymore. Yay, AI. Well, you don't have to scratch the surface very far to realize the dog was already on two anti-cancer medications. They've used them on humans, and they were effective on the dog. What the AI did was next to nothing. It was like, oh, those two together might work, and it didn't even come up with the right proportions.
SPEAKER_03But yet, mix a little apple cider vinegar in with the dog's food and that'll fix it. You forgot the salt; the salt's the missing component. I've got to imagine that dog food has salt in it already, otherwise they probably wouldn't eat it. But then, you've never been attacked by a demon? Just pour a can of dog food in a circle around you and you'll be okay.
SPEAKER_05There are so many problems with what they were saying, misrepresenting AI's role in the healing of this dog that had cancer. But my real problem is Sam Altman and others going around on a parade of, this should be a thing, this is why we need to be funded more. It's disgusting. You know he's one of those weirdos that doesn't like dogs.
SPEAKER_03You know what I mean? That's my feeling. I bet that guy doesn't have a dog, and he lives in this giant, meticulous concrete thing, you know what I mean? Yeah, with shiny surfaces, no fingerprints, nothing. And he gets his blood sucked out of him and then shot back into him or whatever.
SPEAKER_04He doesn't have any sugar soda in his fridge.
SPEAKER_03Yeah, there's nothing in his fridge but, I don't know, those micro-doses of fresh-pressed juice, and like a stalk of carrots or some celery or something, you know what I mean? And he's probably got horrible AI art all over his walls that he paid ridiculous amounts of money for, like all those NFTs. Yeah, those dumb things, that's where they all went; they're all hanging in that dude's house. And they're probably not even real; they're probably all on screens, you know what I mean? And I'm sure there's always this sort of weird... you know when you walk into the spa to get a massage or a seaweed wrap, and it's playing that atmospheric music that's really nothing? It's not even real music. These are the people that are trying to get AI to be our doctors. Or it's something that generates music, and it's like whale sounds played fucking backwards or something.
SPEAKER_05You know what I mean, just these weird soundscapes. I picture he has a Roomba with a fluffy tail on it, and that is his dog.
SPEAKER_03Yeah, it probably has a name, some weird mythology name, like it's a cockatrice over there in the corner. There is a crumb, I've seen a crumb, he loves it. I think I hear my Discord; perhaps that's Daddy Carp.
SPEAKER_05Stop it, Dr. Carp. But that's a big thing: I bet you a nickel the billionaires in Silicon Valley have a personal physician. Based on my exposure to people, they aren't going to a regular hospital. No, I guarantee they have healthcare that comes to them, a real doctor who comes to their place, of course, to give them everything they need; they're like his only client. So let's take a cue from them and really not hesitate to actually go see real human healthcare providers. Maybe the people making these decisions should actually be using those services themselves, you know what I mean, as the minimum bar for putting it out into the public sphere.
SPEAKER_03None of them have ever rolled into the free clinic, you know what I mean? No, they have not, or even just a normal doctor's office. They have no clue. No, they don't, but we're letting these geniuses decide how parts of our lives should function: our education, our healthcare, our time on our devices, how we work. And they have zero frame of reference for all of it. Exactly.

Sorry, I'm still on a rant, because I did my taxes yesterday. I'm a God-fearing American, I'm paying my taxes, and the whole time I'm doing it I'm like, I bet you anything I'm paying more taxes than Jeff Bezos is right now. Yes, exactly, it pisses me off. I know, me too. I probably just paid more than Zuckerberg and Elon combined. Yep, because you don't have a fleet of accountants and offshore accounts and loopholes, and I don't have every senator in my pocket going, you know what would be cool? If we didn't have to worry about paying taxes. And they're like, oh, okay, that sounds great. I thought you were going to do your Bernie again and make us all feel better. Now, Bernie would be pushing back; he wouldn't put up with that shit. Oh no, you're going to pay your taxes, you're going to pay them, I'm going to make sure you pay them. That was your best ever. Okay, well, that's a good way to end it. That was really good.

Well, Bernie and Jesse and me go see real doctors. Stay healthy this week and get out there, but if you're not feeling great, don't hesitate, and just maybe see real human beings. Jen put the crash test dummies back in my head, and they've been there since, like, the '90s. Now I wish we had a little Casio keyboard and one of us could play it right now.
SPEAKER_02Superman never made any money, saving the world from Solomon Grundy.