Ep 183: AI as a clinical assistant, with Rebecca Fechner, Dr Joshua Pate and Dr Mick Thacker
JOSPT Insights · June 17, 2024 · 00:26:45 · 24.49 MB

There's plenty of work going into how AI can make health care better, including recording consultation notes, making early cancer diagnoses, and opening up low-cost ways of doing musculoskeletal imaging.

The technology and applications of AI in healthcare change just about every week. Today, we're exploring generative AI as a help, not a hindrance, to musculoskeletal rehabilitation practice.

Physiotherapists Rebecca Fechner (Queensland Paediatric Persistent Pain Service), Dr Josh Pate (University of Technology Sydney) and Professor Mick Thacker (Royal College of Surgeons in Ireland) talk about ways to use chatbots and generative AI to generate ideas and solve clinical problems.

------------------------------

RESOURCES

The Prompt Engineering Guide is a deep dive into how to craft effective prompts for generative AI: https://www.promptingguide.ai/

More on the science of active inference: https://pubmed.ncbi.nlm.nih.gov/25563935/
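As a small taste of what the Prompt Engineering Guide covers, here is a minimal sketch of the chain-of-thought-style prompting discussed in the episode: spelling out context, explicit reasoning steps, and a desired output format before pasting the prompt into a chatbot. The function name, wording, and clinical steps below are illustrative assumptions, not from any real chatbot API or from the guests' study.

```python
# Minimal sketch of a chain-of-thought-style prompt: rather than a one-line
# question, we assemble context, numbered reasoning steps, and an output
# format. All names and wording here are illustrative only.

def build_clinical_prompt(context: str, question: str, steps: list[str]) -> str:
    """Assemble a structured prompt: context, explicit reasoning steps,
    then the question and the desired output format."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))
    return (
        f"Context: {context}\n"
        "Reason through the following steps, showing your thinking at each one:\n"
        f"{numbered}\n"
        f"Question: {question}\n"
        "Format: a short numbered answer for each step, then a one-paragraph summary."
    )

prompt = build_clinical_prompt(
    context="Adolescent with chronic pain; pain worse in the classroom than during sport.",
    question="What factors could explain why the pain varies between these settings?",
    steps=[
        "List plausible biological contributors.",
        "List plausible psychological and social contributors.",
        "Compare the two settings on each contributor.",
    ],
)
print(prompt)
```

The point of the structure, as Josh notes in the episode, is that each numbered step in the model's answer can then be critiqued separately, rather than judging a single one-line response.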

[00:00:00] Hello and welcome to JOSPT Insights, the podcast that aims to help you translate quality research to quality practice. I'm Claire Ardern, the editor-in-chief of the Journal of Orthopaedic and Sports Physical Therapy. It's great to have you listening today.

[00:00:23] The technology and applications of AI in healthcare change just about every week. Today we're exploring generative AI as a help, not a hindrance, to musculoskeletal rehabilitation practice. Physiotherapists Bec Fechner and doctors Josh Pate and Mick Thacker have written for the

[00:00:40] JOSPT blog about using AI chatbots like ChatGPT to generate ideas and solve clinical problems. They join me to discuss all that and more today. Bec is a senior physiotherapist who has over two decades of experience in musculoskeletal physiotherapy.

[00:00:58] She currently works in a complex pain service treating children, young people and their families in Australia. Josh is a physiotherapist, PhD researcher and author of a series of books for children about pain. He's a senior lecturer in physiotherapy at the University of Technology Sydney in Australia.

[00:01:17] Mick is a professor of pain, physiotherapy and rehabilitation at the Royal College of Surgeons in Ireland. He consults on digital transformation and digital therapeutics. And Mick has co-designed and prompt engineered three large language models trained on pain neuroimaging and active inference predictive processing.

[00:01:38] Bec Fechner, Josh Pate and Mick Thacker, welcome to JOSPT Insights. Thanks for having us. Hi. It is so great to have you all. Three guests on the podcast today talking about artificial intelligence in education and healthcare is a very special treat.

[00:01:55] So thank you for making the time to join JOSPT Insights. AI has well and truly arrived in our mainstream lives. Most people who are listening to us today will probably be very familiar with chatbots like ChatGPT. And that's a form of generative AI.

[00:02:12] We're going to talk about what generative AI is in a moment. There's plenty of work going into how AI can make health care better, including things like recording consultation notes, making early cancer

[00:02:26] diagnoses, or even opening up low-cost ways of doing musculoskeletal imaging. Now, Bec, you and Josh recently used generative AI to generate ideas about multidisciplinary pain science education, which is a great paper that we've published in JOSPT Open. And Josh, you've written a blog about it.

[00:02:45] We'll link to both of those publications in the show notes for people who want to follow up. What is generative AI, Josh? I think we should start there and then we'll get into the paper and what you found.

[00:02:58] Yeah, sure. So these chatbots are just like text prediction machines. So they're just kind of effectively guessing very, very well what the next word should be. So it depends a lot on what input you give it in terms of what output you

[00:03:12] will get. And so I guess for the context of this conversation, we used it for brainstorming. And I suppose that's that kind of idea of a brainstorm assistant. So it's not taking over your job, it's helping you to think outside the box.

[00:03:24] And one thing about these systems is that, because they have so much very general data, they can join dots that don't make sense to join, and they do it very confidently. And so it comes across like a new idea or a good idea.

[00:03:38] And the confidence is a very interesting and sometimes ethically challenging issue with generative AI. We can talk about that a little bit later. Bec, before we get there, tell us: why did you choose to use generative AI in this way, and what did you find?

[00:03:55] The reason we really went into generative AI was that we really wanted to make our research co-designed, but that has so many challenges with funding and just the logistics of getting people together, and with partnering with the people that we are researching with.

[00:04:13] So we're in health and we wanted to partner with education sectors and policymakers. We couldn't afford it. So simple as that, we decided to see if we could simulate a workshop using AI to bring those voices of education and policymakers together with health.

[00:04:27] First and foremost, we found fast, exciting results. That was super cool. We used a couple of chatbots so that we could compare and contrast between them. And we found that there were differences but also similarities in the answers.

[00:04:42] And that was just a really nice place to reflect on as clinicians and researchers. So our findings, really, were similar in the end, but the reflections that happened along the way were really different and interesting. Josh, what does it mean that these different chatbots had

[00:04:59] different and similar outputs? Yeah, I guess they're trained on different data sets. And so they're just coming up with different ideas along the way. So for instance, we said, who are the people we should have in the room

[00:05:13] for this workshop? And one chatbot comes up with a really big list and explains the role of each person. And some of them add links and some don't. And I guess what's really interesting is we ran this in September 2023.

[00:05:26] And even since then, like the names of the chatbots have changed, the freely available ones have changed. The abilities of them are like so, so much better. And so some of the work already is dating quite quickly.

[00:05:38] So tell us, Josh, a bit more about what this idea of a brainstorming assistant really means. Yeah, so I suppose it's just like when you're in the lunchroom between seeing patients and you have an idea and you're talking about something

[00:05:51] or you're reasoning through something, it's just having another person in the room. It's hard to kind of not personify these chatbots, because it's a conversational tool. And I guess you're talking to someone who just has like unlimited, well, a very, very large limit on the information.

[00:06:11] But the filter is not there. It doesn't have the content expertise. So you need to make sure you give it the frameworks. And so, for instance, like, oh, we want to work within this model or this theory or this way of thinking.

[00:06:24] And we're in this context. So we're in Australia or we're working in these schools or we're working in this state. And I think some of those rules, because it's all published online and the data sets are being trained on everything,

[00:06:35] you can kind of be guided and a bit more specific. So then the idea is more likely to translate into the real world. But again, like we need to externally validate anything that's coming out of these systems.

[00:06:48] Now, Mick, you've been around the traps for a little while in musculoskeletal rehabilitation. What are some of the different ways that you've been thinking about how AI might show up in our broad clinical field and in the research? What are you seeing is happening now?

[00:07:04] And what do you foresee might come through in the future? And as you say, Josh, the future is in some ways in the next month. The first thing to say would be that there's still a reluctance among many clinicians from our background to engage with AI and generative AI.

[00:07:22] And I think that really comes down to the ontology of the person wanting to become a physical therapist, physiotherapist, because they want to sort of touch people. They want to interact with people. They want to be, if you like, physical in that approach.

[00:07:35] And some people are concerned that this takes the person to person interaction out of the equation. And as Josh said, it can be tempting to think of these things if you give them a name

[00:07:47] or someone gives them a name, to think of them as people, and they're not that. So that worry is definitely there for a lot of people. And it's understandable.

[00:07:59] But I think once you start to engage, you sort of see that you can utilize the best of the machine, the best of the AI, together with the best of the human. And I think that's something that we are powerful advocates

[00:08:11] of: you don't replace anything; you use these things as an adjunct in terms of MSK practice. That means you can add them as an adjunct if they're trained on specific data and ethically trained. That's the other key consideration for the clinical arena.

[00:08:28] If they're ethically trained and you know what their limitations are, I mean, you can use them to sort of triage people, to plan treatments, to act as recommender systems that make recommendations of what's likely to benefit people.

[00:08:45] And obviously note taking, as you alluded to: with natural language processing, you can actually talk to these things and they record notes. And that science and technology has advanced exponentially, where you used

[00:08:59] to have to sit for hours talking at the thing and trying to get it to understand you. Now they'll pick up on your voice very quickly. And then, going into the future, I think the new prompting systems that we've got in place

[00:09:12] allow them to be much more interactive, much more thorough. And I think eventually we will integrate these things into the concept of digital twins, where we can build a deeply phenotyped digital machine that mirrors the person we're looking at and offer that sort of model

[00:09:32] of the person a treatment, see what the outcomes are, and then better select a treatment and apply it to the actual individual. Digital twins are here in many other arenas and beginning to break through, certainly in pain management, if not quite MSK management at the moment.

[00:09:51] Bec, how do you see AI showing up in your clinical practice or how are you thinking about using it in your clinical practice? Perhaps how are patients that you're interacting with bringing AI applications into the clinical environment?

[00:10:06] I can give you an example of a patient that I've just seen recently where we interacted with AI through the process of their treatment. So I work in pediatric pain. So I treat children and young people with chronic pain.

[00:10:20] And a part of the treatment, something I'm very aware of, is trying to co-formulate and co-generate hypotheses for why the pain might be occurring in any given moment, because that empowers the individual to be able to then seek their own solutions or seek help

[00:10:37] for their own health care, because we want them to grow up as adults who are empowered in their health care and with good health literacy. So this young person came to me, and this is not too dissimilar

[00:10:49] to the last 20 years or so, me negotiating Dr. Google in the clinic. It's just a much more powerful conversation, I think. This young person came to me and they said, look, Bec, my pain was just so much worse when I was sitting in a classroom,

[00:11:04] but I played a full game of soccer that day and it was not a problem at all. And I said, well, let's hypothesize together. What do you think could have been going on? And eventually the conversation got tricky.

[00:11:15] And so we said, well, let's bring another intelligence into the room. And I don't know whether we should be calling AI another intelligence and personifying it. But this is what we did with the young person. And they were so excited.

[00:11:28] They were asking, they were using Copilot on the day, and they were asking Copilot: you know, this context, how could it have impacted my pain? These are the things about my pain today.

[00:11:40] Why do you think it happened in this situation and not this other? And we were able to have almost like a three-way conversation, where AI was providing a lot of the knowledge base about pain neuroscience and all of that stuff.

[00:11:52] And we were able to interact more with the individualized context and the human side of it. So both of us, the adolescent and myself, could let go of a lot of the knowledge. And I think that was super powerful, because knowledge is power.

[00:12:07] And so I'm constantly aware of how much knowledge I have in neuroscience and how that can play out in an interaction with a young person and how they might lose this sense of power because of that level of knowledge.

[00:12:18] And I was able to kind of let that go and come right back to their level. And the human side of it was even more powerful, because we had something else holding a lot of that pain science knowledge

[00:12:28] and in a different way to how I had done previously. That's a great application, Bec. Thanks so much for sharing. And Josh, I know you do a lot of work in education. You're teaching physiotherapy students and the future physiotherapists.

[00:12:43] So how do you think about bringing AI and these sorts of applications into your teaching? Even just with Bec's example there, I think it is somewhat similar in the education sector, where fact checking is so essential. We have another study that we're doing

[00:13:01] with AI where we're looking at reliability: if you clear your history and go again, and do the same conversation over and over in different chatbots, some of them are very, very inconsistent and others say the same thing every time,

[00:13:16] or very, very similar. And I think that lack of perfect reliability is worth keeping in mind, because if you have a patient, or a student of mine, who's accessing it as if it's Google or a search engine, there's a huge risk there

[00:13:31] that they're going to get the wrong information and it's going to come across extremely confidently. But from a knowledge point of view, like, how correct is it? I find that really interesting in my teaching.

[00:13:40] And so one of the ways we do this is, rather than using it as a "here's a perfect new system that's come out", we use it as a "hey, we're becoming content experts, students; let's try and critique the system".

[00:13:53] So we'll compare what comes out to the latest paper that we just read or the latest study or the latest textbook or whatever it is. And the cool thing is you can say, hey, generate a case study

[00:14:02] of a patient in this scenario and quiz me after every paragraph, make it multiple choice, use constructivism-based feedback, I want to feel encouraged by the end of this interaction. And you can give it all these prompts.

[00:14:16] And we do this live in class and then the students all yell out A or B or whatever. And it becomes this really interesting moment of going, oh, why? Why wasn't it able to generate a good multiple choice question about that topic?

[00:14:27] And it's like, well, because there's not solid data on whether or not that treatment is effective or whatever. And we're able to have this extra layer of confident discussion, I suppose, around our uncertainty. Like it is clever, but it just doesn't have these guard rails

[00:14:42] of all the other technologies that we've kind of grown accustomed to over the last few decades. I think that's a nice segue. I'm going to come back to prompting, but you've just left a perfect segue for me to ask Mick

[00:14:54] about the ethical aspect of AI, and particularly generative AI, in the clinical context. Mick, you talked about how these models are trained. So tell us a little bit about that, and what listeners should look for if they are thinking of using some of these

[00:15:10] chatbots in their clinical practice. It largely depends. It depends on the type of chatbot, and it depends on size, actually. So a lot of what we're discussing, and perhaps what is emerging from this conversation, is more agentic.

[00:15:28] So agent-like, smaller chatbots, actually, rather than these massive things. Everyone has this impression that the bigger, the better, the larger the data set, the better it is. And of course, to a point that is true, but also there's a space for more focused, more highly trained chatbots.

[00:15:49] But then there's a greater possibility of bias built into those. So you're constantly playing this cost benefit between the data size and if you like, the spread of data and then its precision and its accuracy and then bias, et cetera.

[00:16:07] So there's a real sort of complexity to how you build a perfect chatbot, and they just don't exist at the moment. What we really get to is a point where I think we actually need to engage with and interrogate these things.

[00:16:24] And part of interrogating them is training them, so they learn whilst you're interacting with them. But again, problems with things like ChatGPT early on were people asking a whole range of sort of very dubious questions

[00:16:40] and having the ability to tell ChatGPT, et cetera, that it was wrong when in fact it was largely right, or putting their own bias in. Unfortunately, the early versions of these things were probably sexist and racist, probably reflecting the type of people who trained them

[00:16:58] in the first place. Real efforts have been made to overcome that over the last sort of three or four years, successfully. I think there are some ethical issues. I mean, I don't think we're anywhere near the singularity, where these things are going to take over.

[00:17:13] I mean, I think we wrote in the blog that you're always in control of these things. You can choose to accept or refute an answer. And that's crucial to good ethical use of these things. They're great points and a really good starting point.

[00:17:29] We could speak for a whole podcast simply on the ethics of AI broadly and then get into the different parts of AI. So that was great, Mick. It begs the question: we talked about prompting before, and I'm really interested in your clinical example, Bec.

[00:17:46] How did you go through that prompting, and what are the tips you would share with listeners about how to get the most out of these generative AI tools or these chatbots? Yeah, I think in my clinical example, it was really led by the young person.

[00:18:03] So we started off with a prompt that was too big and too broad. And I was able to then have some curious questioning of: do you think maybe we could ask about this? Or, when we think about it: we use this thing called brain mapping,

[00:18:16] which is basically pain neuroscience in practice. When we think about where and how we might experience pain, what do you think was important? Let's ask AI what it thinks was important. And the young person was then able to get almost like

[00:18:34] another perspective, and then we could work through it. So our prompts were really just getting smaller and smaller and smaller. So it was really taking a curious approach and a reflective approach, and then, for me, trying to guide the young person in a process.

[00:18:50] And that process was, for the young person: what are the strengths that you have? And then using AI to help draw out the strengths even more. It sounds very hypothetical, but it's actually quite easy once you get going. And it doesn't matter if you make a mistake

[00:19:05] because you can always backtrack. You're in control. So you can, you know, for me with the young person, it's do you think that's important? Do you think we need to think about that? Let's change track. Let's look at something else now. Let's go back to what you thought.

[00:19:20] Yeah, it's very much, as you framed at the beginning, having another quote-unquote brain (we can put aside debating that term), another thing in the room that can bring different perspectives and things that you might not have thought about before.

[00:19:35] There's a few different ways of approaching this, and there's some really nice guides online. Maybe one take-home message could be to go and look up chain-of-thought prompting. And basically, it's like giving a step-by-step process

[00:19:46] of what you want it to think through. And then that way you can critique each part of the reasoning that's gone into the answer that you finally get. Whereas if you give it a one-line prompt and it gives you a one-line

[00:19:58] answer, you don't really know the nitty-gritty behind all of that. And so I guess, yeah, that's one big thing. And then the other thought to consider is: in terms of idea generation, we humans are pretty good at coming up with a couple of ideas

[00:20:11] and they're usually quite good quality. And then our ability plummets and our fatigue sets in. And we kind of just stop generating ideas at that point. Most humans would be in that boat, I think.

[00:20:22] Whereas these tools are kind of exactly the opposite. So their ability remains pretty constant. And it's not necessarily as good as a content expert's, but it doesn't fatigue and it just keeps going. So rather than saying, oh, can you give me three potential

[00:20:39] (I don't know, whatever it is) diagnoses, ideas, whatever you're thinking through, you could ask for a hundred or a thousand and start to see when it starts repeating itself. That takes a bit of getting used to. I remember when I first started experimenting with this stuff.

[00:20:54] I read something online and they said you just need to spend like two full sleepless nights playing with AI until you fully start appreciating how capable these models can be. But I think it is about that, like once you've done about 10 hours

[00:21:06] of mucking around with it and trialing different things and testing: oh, it doesn't know these facts, or anything before this date, or it's not very good at whatever it is. I think that's where we want to have our critical thinking lens on pretty early.

[00:21:21] And again, it's not going to give you the correct answer, or the perfect thing, just right then and there. But I think it will get you part of the way there, or it might help you to think

[00:21:31] outside the box if you're trying to come up with a creative solution of how to implement something with low resources or something like that. Like that's where it seems to be able to go. Oh, it's almost like someone's telling you, hey, my neighbor did this in computer science.

[00:21:46] Like, why don't you try that in health care? It's that kind of weird conversation where you're like, oh, I never would have thought of something like that. And then you ask it more follow-up questions. My main encouragement would be to keep asking one more

[00:21:57] question than you feel comfortable with. Like just double check and ask one more round and say, what do you mean by that? Or like, can you add a little bit more or add some references or give me links to read more about this?

[00:22:09] Now, Mick, I know you've done a lot of thinking and work in this area about prompting, so let's hear from you. I'm really keen to hear from you about what your thoughts are and tips

[00:22:18] for folks listening to us today about how they can get the most out of these tools and chatbots with some careful crafting of their prompts. I think the thing to say is that these things are moving on all the time.

[00:22:31] The big word everyone needs to bear in mind is context. Unless you set a context for these things, even with simple prompting, they don't give you particularly good and efficient answers if you just ask "what is" or "explain". You know, they need much more dynamic action verbs and a context

[00:22:49] and what you would like and what format you would like the answer to be in. But I think with all this prompting, there are better prompting methodologies for particular outcomes that you desire. I think Bec really spoke

[00:23:07] brilliantly about what you could do in the clinic. As both Josh and Bec were talking, I'm thinking of some of the world leaders in our arena who have moved into AI and making it much more like how

[00:23:20] human beings sort of function, and who are interested in this thing called active inference. And actually, there's a couple of classic papers that we can link for the listeners about how we improve communication by updating models.

[00:23:34] Bec's got ahead of the game, because she's got sort of three agents, herself, the patient and the chatbot, all interacting with each other. When we communicate effectively, we're constantly in this process of making a statement. The other person's got to receive that and update their model, and often

[00:23:55] then they will talk back to you, reflecting that they've understood what you've said, and then you update your model to come in line with them. And Bec gave a great clinical example; she actually did that with a real person and included this bit of technology

[00:24:11] in sort of a duet turned trio. Yeah. Yeah. Thank you. This has been such a great discussion. We've covered so much ground on AI in health care and in musculoskeletal rehabilitation practice and education.

[00:24:27] I want to give the last word to you. For people who are listening and are still a bit worried or uncertain about how to bring AI into the clinic, maybe even whether to bring AI into the clinic, and who have this concern that AI is going to take over their job:

[00:24:41] What's your pitch to those people for trying out and getting their feet wet a little bit with new technology? If you're not in the game, then how will you even know? And I guess if you're worried about AI taking over,

[00:24:58] the alternative to that is actually changing access and equity for people because of AI. So you're never going to know whether you're servicing your patients as health care providers, unless you're keeping up with what everyone else is doing.

[00:25:16] So I think that applies across all contexts. We need to think about the marginalized people as well as those who are really advanced in AI, and for providing health care, we just have to be in it

[00:25:28] and keep playing with it and figuring it out to know how it works. It's a bit like, as I said before, Dr. Google, we had to play and see what was out there to be able to know how to interact with Dr.

[00:25:40] Google in the room, because Dr. Google's always in the clinic with us now. AI is going to be in the clinic, whether we like it or not. So we need to know how to interact. That's a great message.

[00:25:51] Great place for us to finish on. Bec Fechner, Josh Pate and Mick Thacker, thanks for joining me on JOSPT Insights. Thanks Claire. Thank you. Thank you. Thanks for listening to this episode of JOSPT Insights. For more discussion of the issues in musculoskeletal rehabilitation

[00:26:13] that are relevant to your practice, subscribe to JOSPT Insights on Apple Podcasts, Spotify, TuneIn, Stitcher, Google or your favorite podcast app. If you like JOSPT Insights, help others find us: tell your friends and colleagues, and rate and review us.

[00:26:30] To keep up to date with all the latest JOSPT content, be sure to follow us on Twitter, where we're @JOSPT, and Facebook, where we're JOSPT Official. Talk with you next time.