Stan and Clarence chat with Dr. Jigar Patel about the growing use of artificial intelligence (AI) in healthcare.
Dr. Patel serves as the Senior Director of Product Management for Healthcare at Oracle. In this role, he has developed a deep understanding of the technical needs of healthcare, including AI-based products. Dr. Patel is especially passionate about using electronic health records (EHR) and associated technologies to benchmark and improve outcomes across all medical specialties and venues of care.
Listen along as Dr. Patel shares how AI is shaping modern healthcare.
Join the conversation at healthchatterpodcast.com
Brought to you in support of Hue-MAN, which is Creating Healthy Communities through Innovative Partnerships.
More about their work can be found at http://huemanpartnership.org/
Research
What is artificial intelligence? https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/artificial-intelligence-applications
Why is there a concern for its use?
How does AI play into public health?
What can listeners do to be more informed about AI?
Are there chronic disease arenas that depend on it more?
https://healthitanalytics.com/features/howto-useartificial-intelligenceforchronic-diseasesmanagement
Vehicle for effective communication to the public on a wide variety of health issues?
What dangers must we foresee?
Stan: Hello, everybody! Welcome to Health Chatter. Today's episode is about artificial intelligence, which is becoming quite a complicated issue in a variety of venues, but certainly has strong implications in the healthcare arena. We have a great guest with us; we'll get to that in just a second. I wanna first of all thank our illustrious crew, Maddie Levine Wolf, Erin Collins, Deondra Howard, and Sheridan Nygard, who do wonderful behind-the-scenes work for us, providing Clarence and me with good background, research, and ideas to talk about on all our shows. So thank you, thank you a lot, to you guys. Sheridan also helps us with our marketing. And then Matthew Campbell is our production manager, making sure that these shows get out to you, the listening audience, in a crisp, clear way. So thank you to everybody. Then, of course, there's Clarence, with whom I do this hand in hand. And we've realized that, boy, we know a lot of people in the healthcare arena. So we've had a lot of guests on our shows, and it's been a wonderful, wonderful experience. So thank you, Clarence, for your good voice in the healthcare arena, and thanks for being with us. Then, in addition, Hue-MAN Partnership is our sponsor for these shows, a great community engagement group in Minnesota involved in a lot of great health-related issues at the community level. We thank them dearly for being our sponsor; you can check them out at huemanpartnership.org. So with that, we're gonna get into artificial intelligence. We've got a great guest with us today who actually came to my attention through a colleague of ours, Archel Giorgio, and a connection through Oracle. Apparently Dr. Patel's team owns an Oracle AI strategy, and he can talk about that strategy a little bit. But I'll let him introduce himself, and then we'll get going. Dr. Patel.
Dr. Patel: Yes, thanks, Stan, thanks, Clarence, thanks, team, for having me. My name's Dr. Jigar Patel, as Stan said. I've been at Cerner, now Oracle, 16-plus years. I started as a pathologist; transfusion medicine was my subspecialty in clinical practice. I was at the University of Kansas Medical Center before joining Cerner. It was kind of a lucky happenstance in my career: I was in Kansas City, and Cerner was based in Kansas City. I have an engineering undergraduate degree, and as a pathologist I was always involved in informatics. Most of my time at Cerner I was on the client side. I did sales, implementations, the whole nine yards of client interactions, and led various groups with a chief medical officer responsibility. And then, eight years ago, I finally dove into product management and joined a team that is composed almost entirely, except for myself, of legacy Oracle people. So I was a stranger in a strange land: I was the only clinician in the group, and I was the only legacy Cerner guy in the group. That had a bunch of different implications, but then I got to dive in and understand artificial intelligence and our strategies going forward from a cloud delivery perspective, and how we're gonna bring it to healthcare specifically. So I'm very excited. I talk about this topic all day, every day. I love talking about it, and I'm happy to be here and talk to you folks.
Stan: Thank you. We really, greatly appreciate you being on Health Chatter with us today. So I'll kick this off by starting out with, you know, there's been a lot of chatter about artificial intelligence, and frankly, I don't think many people even know what it is or the logistics behind it overall. So let's start there, so we all have kind of a common denominator as we talk about this: what exactly is artificial intelligence?
Dr. Patel: Yeah, artificial intelligence is a machine mimicking human capabilities. One simple example of how to think about that is: how do I understand someone, say you, Stan, talking to me? That's speech AI: how do I take a conversation or voice and turn it into digital text? Once I've got that digital text, I can apply a language service: how do I understand that text in some way? Other kinds of what I would refer to as pure AI services are things like vision: how do I take an image or a video, and how does the computer perceive it in a way that's useful and interpret it like a human would?
Stan: So is it useful at the individual level? Or is it more useful, or as useful, at the professional level?
Dr. Patel: It's both, I think. There are lots of people that are starting to organize their lives with it. The AI services I referred to are, like I said, pure. The real power is when you start to aggregate them together. If you take speech and language and generative AI, I can say: okay, I understand what you're saying to me, I interpret that thing, and then I create content from it using a generative service. That's when you start to get real power. Another simple example is document understanding: using a vision service, I understand what's on a page, I take the text off it, turn it into language, and then interpret and codify it in a way that can be reused. Now, individually, I was thinking about this this morning: hey, why am I not using AI services in my own organization of myself, augmenting me and automating some of me in a way that's useful? I have to sort through that at a personal, philosophical level first. But then on the professional side, you can think of it as useful to a professional, and then useful to larger and larger groups, as you think about the density and amount of data, and how you look at those things and understand them. We've talked about this in analytics a lot: how do I roll up dashboards and those sorts of things? The same concepts apply to AI. So I think, to your question, you can apply it at a very human, individual level all the way through to massive organizations and how they organize operations and other aspects of their business as well.
Clarence: Yeah, Dr. Patel, thank you for that. You know, Stan, you started off the conversation, and many people, when they think about AI, they think about Westworld, or some other kind of movie that they've seen. It's not necessarily a positive one, because, you know, the AI robot takes over, whatever. So my question to you is: how do we explain this to the community in a way where they see the real value of AI and would want to use it? Because there's always that underlying fear that this is something that's going to take over. So how do you address that?
Dr. Patel: Yeah, the joke I have is that Skynet is here, and it's coming, so watch out for your Terminators around the corner. I have a friend who, when she talks to her Alexa, says please and thank you, because when the AI overlords come, she wants to be thought of as a good one. She wants to be polite now so they like her later. So that fear is definitely there. There's a lot to think about from a safety perspective, putting up guardrails. Some of it you've heard from the Elon Musks of the world and others: that the government needs to step in and organize this and keep it in the box, so to speak, because unfettered, humans will be humans and push the limits on all of it. In healthcare specifically, there's real jeopardy when you think about hallucinations, bias, and other things that can be introduced, not necessarily intentionally but unintentionally, that could lead to misdiagnosis and could lead to the wrong treatment plan. So when I talk to clinicians about this, I say: let's think about automation first. How do I make things useful to you today that you validate and know are right? Creating a note out of other text that may be in the chart is an example; the clinician is still responsible for the validity of that information. As soon as I start making recommendations for treatment, that's where we get into funny territory. That's when you start to get into jeopardy, when you think about safety, when you think about guidance. The FDA does this for medical devices, right? So it feels like an inevitability that government, the FDA in the healthcare space, will have some oversight, like they do for medical devices or biologics, making sure these things are valid and useful. Now, on the counter to that: I have a colleague that I've known for a long time, who's the chief medical information officer at a very large health system in America.
He says that 70 to 75% of what we do in primary care is known. Let the computer help automate that, so that I can use my physician brain on the other, harder 30%. So what's the advantage to the clinician? That clinician can unburden themselves of the stuff they don't necessarily need to recall, which makes their day easier, and then really apply their intellect, their knowledge, and years and years of training to those other things. So, and this has been a totally roundabout conversation, Clarence, I apologize, but when you think about the evolution toward an end point that is safe, it's going to be a gradual evolution. We've got to be very cognizant along the way of how we're doing it and make sure the human stays in the loop and their intellect actually applies. I am most fearful of humans just wiping their hands of it and letting the AI just do things for them. There are some that do that already, and that makes me fearful for us.
Stan: Let me ask a couple of quick questions. I'm trying to wrap my head around the idea that someone who knows nothing about artificial intelligence is all of a sudden thrust into the thematic chaos and complexity of it all. So let's start with an individual from a healthcare perspective. Let's say an individual is diagnosed with a particular chronic disease. How, at that moment, might they be able to utilize artificial intelligence to help them?
Dr. Patel: Yeah, one of the first ways is that artificial intelligence, and generative AI in particular, is good at doing summarization. It takes a lot of different sources and pushes them together in a way that provides a breadth and depth of information that a static source might not have. The other thing that's interesting about generative AI, and ChatGPT in particular is very good at this, is that it can actually tune up or tune down the literacy level of the output. Take somebody like me with a graduate-school education: normal patient education comes at a fifth- to eighth-grade reading level, and for me that's, like, I don't even bother looking at it, because I know way more than that. But if I can tune it up and have it give me more information at the level I'm going to understand, that's really quite powerful. And similarly on the other side. One of the things doctors do well or badly is explaining to patients, right? One of my colleagues here calls me the chief explaining officer, because I do a lot of explanations around this stuff, and I break it down. I don't try to be condescending, obviously, but I wanna make sure people understand the basic building blocks of the concepts we're talking about. And that's hard for physicians. We're trained in the fancy words we've learned, but when you're talking to a patient and you start throwing fancy words, they don't get it; you lose them quickly. The good clinicians figure out a way to make it simpler and get to the level the patient is at. AI can do that on the fly, in a way that's unique and fast, that we can't imagine doing ourselves, and it can do it more uniformly. Similarly, there have been studies on this:
People actually think the responses from AI are more empathetic than their providers'. So it gives you literacy, it gives you empathy, it gives you different things that make you go: wow, didn't think about that at all.
Stan: Yeah. So if I'm hearing you right, it's almost like an easy-access tool to communicate at higher levels or at lower levels if you need to, depending upon who you're interacting with?
Dr. Patel: It absolutely can. It's very powerful that way, and there are ways to have it do things that take real work from a human mind but can be done instantaneously.
Clarence: So, you know, one of the things, and I appreciate this. Again, I read some things as a community member, and I recently was looking at an article, and you used the term hallucination. There was recently an article that showed some computer-generated bodies, and one of the concerns was that, using AI, we were going to create a false perception for younger women about their bodies, when we're already wrestling with body shaming. You know, we always talk about health and those kinds of things. As a clinician, what kind of conversations do you have with people about using AI for those types of things, and not allowing it to create a false hallucination for you about how life really is?
Dr. Patel: Yeah, I mean, even without AI we're already there, right? Social media, and there is a fair amount of AI in social media algorithms, machine learning sorts of things. We're already there. So it behooves the public to understand the technology and how it can manipulate you without you realizing it. Some companies are using it to sell you things; they're using it to grab your attention. It's the proverbial rabbit hole, and AI can only make the rabbit hole worse. If you're in a certain mindset, following the rabbit down the hole becomes easier and easier. We're already in a place where that's easy, and it's gonna get worse. So we have to educate people on the implications of this. This goes to the training of medical students as a core example. I'm worried about giving AI to medical students, because then they don't learn to think like doctors, which is important first and foremost. As an example, we have the technology where we can record a doctor-patient conversation and then create a note for the doctor. I told my product management team: do not put this in the hands of medical students, because it's synthesizing something that their brain needs to hardwire before they become physicians. So it becomes problematic that way, too, in that the hard work of becoming a doctor, or the hard work of becoming an expert in anything, can go away. And that's detrimental to us as a society and as human individuals.
Stan: So let me follow up there. There are a couple of themes that you brought out here: training and education. Let me separate that a little bit. You alluded to training for professionals, in this case healthcare professionals. How is it that we do that? How is it that we really get the existing healthcare professionals up to speed? And how is it that we get newly trained healthcare professionals, and I don't care whether they're physicians, public health professionals, or allied health professionals, all of us, trained in their schooling, or integrated if they haven't used it at all? I'm sure you kind of touch on this.
Dr. Patel: Yeah, it's a hard problem. Let's start with the seasoned people and work our way backwards. You know, being an EMR, electronic health record, guy for most of my career, we had this problem with EMRs in understanding the technology. When I started, the job was: there are some docs you're gonna have to train to use a mouse. It still exists, right? Not as much now, but 15 years ago, absolutely, we had to worry about that. So I think it behooves every professional organization out there certifying physicians on continuing medical education to have informatics and AI conversations and training for those professions, so it reaches them from a trusted source. It's more than just understanding the evolution of disease and the new tests and the new medications and those things; it's got to be this also. On that front, it has to be colleagues that are knowledgeable, CMIOs and others like myself, talking to folks, like this on Health Chatter, to give them the viewpoint, because I'm steeped in it every day. So it's gotta be multimodal in its approach; we've gotta blanket that across the board. Now, as we move down the spectrum from seasoned to younger individuals who have, say, their MD: there's an advantage to helping them with things with AI, but they have to know AI is helping them, in a way that is different. So the thing I tell people, from a really hard design perspective, is: how do I let you know the validity of the thing I'm suggesting to you? How do I let you know this was created by AI? How do I give you indicators so you realize this is not another human, this is not something that was already there, it was something that was created out of thin air, so to speak? And that's not entirely true.
But the knowledge and the exposure, from a usability and user experience perspective, have to be there in the workflow as well. Then, as you think back to medical education: we still haven't cracked the nut on basic informatics education from a medical education perspective. As a pathologist, it's part of pathology. Pathology, along with radiology, was the first set of professionals using computers, because of volume and because of those technologies; we had to be there. So I was taught that in my training, and in training residents in pathology, that was my job as well. I was the informatics guy; I was teaching them about informatics. So we have to get back all the way into medical school and say: okay, here's your informatics course. Because medicine is information, right? Treatment of patients is information. And it has to be more than information; it has to go from data to information to knowledge in a way that's clear and open and clear-cut to the person that's learning it, and they've gotta have that underpinning. Now, going back even further, before college, into high school and before, it needs to be baked in there, too. We do have this thing in America where people don't like STEM and STEAM, right? STEM is core to this. You have to have a basic understanding of it from way back when. So we have to push it all the way back to the very early years. I was in an airport last night, and invariably, you're walking through the airport, and people are stuck on their phones, consumed by them. Many of them are quite sophisticated and understand the technology; many don't. So it goes all the way back to that. It's a societal problem. It's not just professional, it's not just educational; it's the whole thing that we need to keep front and center.
Stan: You know, it's interesting. My wife and I have lost both of our parents, but I catch myself from time to time saying: oh my God, if my mom or dad were alive today, this idea of simply streaming a television show would frankly be like a foreign language at their age. And so now you think about all these things. At one point, think about it, even for us, just a computer, a mere computer, was new, and this is coming at us very, very quickly. Clarence.
Clarence: Yeah, I think that this is a great, great opportunity for us. I wanna go back to the question about the concern about AI, okay? And how do we help people to understand it's gonna happen? People just don't see it. How do we help people to understand the importance of it, and also how they could utilize it effectively for themselves?
Dr. Patel: Yeah, it's gotta be a societal goal to inform more on it holistically, I think. It gets back to: we've gotta educate on STEM, right? Understanding the technology, not just taking it at face value. Without that knowledge comes a blindness to what it's doing to you individually, and when we start to accept the inputs without any questions, that's when we may have lost. And lost is probably a strong word here, but it is something that we have to be very, very cognizant of. I read an article that said at some point 50% or more of the Internet may have been generated by AI. Then it's not even human-created, and the information we get is AI-generated, and that scares the bejesus out of me, frankly. What becomes truth then? Is it some human behind the scenes manipulating, potentially in an adverse way, the truth that's out there? Truth becomes a non-concept.
Stan: One of the other themes that you alluded to, Jigar, was the idea of empathy and sympathy. Help me figure that out; I don't know how a machine can do that. I don't know how artificial intelligence can do that, but as human beings, we can. So do we use artificial intelligence, then, to help us as clinicians, as public health people, to be more empathetic, to be more sympathetic? Do we use it as a professional tool?
Dr. Patel: Yeah, the concept around a large language model is that, depending on how it's been trained, on what corpora of text you've loaded into it, it understands the relationship of words to one another. And when you say to it that you want it to act more like, say, a specific author, or a specific somebody that does a good job of conveying empathy through words, then it can take on that characteristic. So it's all about language: the use of language, the right language, and how words relate to one another. It can do that better than a human, because it has billions, trillions of words, and the relationships of those words to one another, and examples of different things, and the probabilities of those things. So it can understand how the basic construction of language can be more empathetic or less.
Stan: It could be a foreign language, and AI can connect to the empathetic words for that different language.
Dr. Patel: And it's not just the words; it's the construct of the words in relation to one another, from a large language model perspective. The underpinning of generative AI, the vast majority of it, is in English right now. So we have a translation problem that has to get solved over time; the default language of the Internet is English. You can go to Google Translate and it translates into any language, right? So there is some loss in translation, but AI will catch up there as well. So it's saying the right words in the right order at the right time that conveys that empathy and sympathy in a way that's unique and different. It can be programmed.
Clarence: Let me ask this question. I have only heard stories about this. What is this ChatGPT?
Dr. Patel: So the foundation of ChatGPT is a company called OpenAI. OpenAI has loaded huge corpora of Internet-based text, the biggest sources being Wikipedia, GitHub, etc. It has taken the world's knowledge and basically learned the relationships between the words; it's a large language model in that sense. Now it can predict, based on that corpora of text, the next word, given any of the words before it. That's the underpinning of this. Then you put a transformer on top of it, in a chat interface, to interpret the input and then predict, or create out of that understanding, a response to the input, the chat. We're used to a search, right? A chat is, in some ways, the next evolution of that. We're used to it now on the Internet: when we go to customer service, the first thing we hit is the chat box. Same thing: it's taking the typed input, understanding it, and then turning it back into something useful for you. That evolution has gotten to ChatGPT and its capability to do things well beyond that simple interaction. So it's really probabilistic: it's understanding the likelihood of these things relative to one another, and then programming to accomplish the end points. OpenAI's is one large language model. Google has a number of them that, because of the text loaded into them, have different probabilities. And then other companies have open source and other things they've loaded into various large language models that interact differently, because of those different probabilities in those different corpora of text. As an example, if you loaded the National Library of Medicine's content into a model, it's gonna be very different than one looking at Wikipedia. It's not gonna know about Napoleon, or things around Napoleon, or that historical context. But it will know about gallbladder disease and other things in a more complete way than, say, a general-purpose large language model. So it's also going to depend on those things, which is interesting. Meta, Facebook, has its own large language model based on Facebook. So they're all using these very different corpora of text to create, and then layering other technologies on top to transform those things, to take an input and provide an output.
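The next-word prediction described above can be sketched in miniature with a toy bigram model. This is an illustrative sketch only: real large language models use transformer networks trained on vast corpora, not simple word-pair counts, and the tiny corpus here is invented for the example.

```python
from collections import Counter, defaultdict

# A tiny invented corpus standing in for the huge "corpora of text"
# a real large language model is trained on.
corpus = ("the patient has a fever . the patient has a cough . "
          "the doctor treats the patient").split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))      # → ('patient', 0.75)
print(predict_next("patient"))  # → ('has', 1.0)
```

Loading a different corpus changes the probabilities, which is the point Dr. Patel makes about Wikipedia versus a medical library: a model can only echo the word relationships present in the text it was trained on.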
Stan: So let me ask you, and I'm trying to circle us back into the health arena here in just a second, but one of the things that kind of disturbs me on the front end a little bit is: are we compromising human intellect? Let me give you a for-instance. If I wanted to give a speech on, you know, whatever, to whoever, theoretically I could look it up and have it created for me, and I might change, you know, a word here and there, and off I go. So what's your sense? You've been in the field. Do you think we're compromising human intellect? Or you could easily say we're complementing it; that would be hopeful. But on the other hand, are we truly compromising our intellect?
Dr. Patel: I don't know if compromising is the right word. Are we making it easier to be perceived as intellectual? You know, as people trained in medicine, we took a lot of time in our careers to learn to synthesize. AI has made the act of synthesis almost trivial. It doesn't have to be complicated: it can be a simple input, and out spits this 500-word essay on the thing you want, or the speech, or whatever. The storytelling that goes with it is a synthesis act, right? It's correlating personal experiences and things that you think might be relevant to the topic that makes a compelling speaker. But you can shortcut it, you absolutely can, with these things, and you can take someone who's, uninformed isn't the right word, but who's put no work or effort into it, and then they can regurgitate. Now, is that person on stage gonna be as compelling as somebody who synthesized it and can tell it accurately? They're just reading cue cards at that point. It won't be as compelling; people will not necessarily be drawn to it, because there is that human element. Now, are there examples of people creating avatars that are potentially as compelling? Yes. So it can be a real shortcutting of human existence, and of the knowledge and thoughtfulness of our race that has taken millennia to create. Could it be compromised? Yeah. It is a real fear.
Stan: Yeah, alright. So let's circle back. Clarence and I have been involved in the healthcare arena for a long, long time. In your experience, let's just take chronic disease arenas: from your perspective, do you think there are particular chronic disease arenas that can really utilize AI much more than perhaps others? Or are we all out on the same playing field right now, no matter what the condition?
Dr. Patel: Yeah, I think there are two aspects to that. One is the longevity, how long something has been around, and then, secondarily, the new knowledge sources that go to inform those things. If we take someone that's had a chronic condition for 30 years: the summarization of that course over 30 years would take an hour of digging through a chart to figure out and piece together. AI can do that almost instantaneously, in a way that a human cannot, and it can figure out things a human may have missed, because it took them an hour, as opposed to its generating a page for me to read and consume, with correlations in it that may become more clear in that prose. It can also say: there are correlations here that were missed. So it's time-saving, but it also depends on the length of time the chronic conditions have been around. Now, as a pathologist: even in my time since my training, our knowledge of the genetics, the markers, and other things around cancer specifically, and other pathological conditions, is new, and we can use AI to look back on those things and make correlations as well. Then think about how you incorporate a whole genome into a chronic condition. It doesn't have to be linear, this snippet means that thing; it could be that this constellation of snippets means this thing. AI, and machine learning in general, is good at finding patterns, and so those patterns that may have eluded a human in this large volume of information about an individual can be found more easily.
Stanton Shanedling: So it's a function, like you said, of whether there's more history for the disease itself, or more history of a patient having a particular condition over an extended period of time. AI can be an incredibly useful tool to synthesize that information quickly and efficiently.
Jigar S Patel, MD (Oracle): Family history, also. We're going to be in an era where many of our records have been digitized. Now our kids' records are digitized too, and you can correlate those things together in a way a human might not, but artificial intelligence could.
Stanton Shanedling: Yeah, so let me ask you this. We've all lived, and frankly are still living, with COVID. Okay, so take COVID as a public health issue. Tell me how AI could have been a more useful tool for us, if we had utilized it more or engaged with it more, in order to respond, in this case, to a public health emergency.
Jigar S Patel, MD (Oracle): Yeah, there have been studies showing that Google searches can predict epidemics, the seasonality of flu and so on. So it's about looking at various data streams and correlating them in a forward-looking way: is this anomalous behavior compared to the normal state? Is this a more varied constellation of symptoms, a grouping of symptoms that says, wait, this might be unique? That can be done more readily now. Public health infrastructure in general, I think people would largely agree, needs an uplift. It's behind many other industries in how we think about data acquisition and data sharing. There are state, local, and federal restrictions, and all the problems that come with the data you want to have, that we have to battle past. But absolutely, the capability of AI to look at large data streams and say, wait a minute, where are the patterns here, could help. So it could be good for predictability, and that could lead to time savings from an action perspective. On the drug discovery side, there have already been some frightening examples of drug discovery being done through AI, creating compounds that are novel and have different properties that could potentially be brought to bear sooner, so it doesn't take a human chemist to sort through and understand those things. So there are implications from a public health treatment perspective as well, going forward. Yeah, Clarence.
clarence jones: So, Dr. Patel, how quickly is AI going to grow? I mean, it seems like it's been growing at a pretty quick rate, and even for those people who are resistant to AI, it appears it's going to overtake us pretty soon. How rapidly is it growing on a yearly basis? What are some of your projections for 2025, 2030?
Jigar S Patel, MD (Oracle): Yeah, I don't have any numbers for you, Clarence, I'll be honest. But it probably correlates to Moore's law, the observation that computing power doubles roughly every two years. I don't remember the exact figure, but as computing accelerates and gets faster and more performant, AI is going to have a similar hockey-stick curve, because the computing is going to push it in that direction. I think if we look back, awareness of it is at its hockey-stick moment right now, and that came with ChatGPT, versions 3 and 4, and the acknowledgement that this thing does amazing things, more so than, say, Watson or others in the past. I think the awareness will continue to grow very, very fast, and the use, I think, will also accelerate. Big companies such as my own and others are, how do I put it, putting it everywhere. Its surfacing to humans is going to accelerate. And I think the goal of embedding it in computer systems and human interactions is that, as a human race, we've always talked about how to be more productive, and AI can only help there as well. Now, that has implications around people losing jobs and all of that, but that's happened throughout history with various technologies; computers, and now AI, are just the latest examples. We don't have to do certain things anymore because we have new technology. The car is an example: we don't have to walk now. Those things have accelerated our productivity in getting from one place to another, making the world smaller, and that's an example of how this could change us going forward as well. So the timescales are going to be very different. I don't have good statistics to support any of that, but it feels like it's accelerating at a breakneck pace.
clarence jones: Let me ask a quick follow-up question. With all this breakneck technology, it also increases the possibility of scams, of people being tricked, those kinds of things. So my question to you is: what are some of the things we should be watching out for, or thinking about, as we embrace or engage with this new technology?
Jigar S Patel, MD (Oracle): We have to be increasingly skeptical of everything, in one way or another, in my view of the world. Any technology will be used for ill; it will be turned to crime and scams and those sorts of things. That's been true of every technology since the beginning of time. Someone will use it for something that is not societally accepted. So that will accelerate too, and "dangerous" isn't quite the right word, but we'll have to tread more carefully in all of our interactions. Even today, I don't answer phone calls if the number isn't in my contacts, because some technology generated that call and pushed it to some human, or a robot, to have an interaction with me. So I'm skeptical about any phone call I get, and any incoming text as well. Slowly but surely this is creeping into all those things, social media, it's everywhere, and it will only accelerate. One of the frightening things from a computer security perspective is that AI can write code, and it can be coaxed into writing malicious code. That's going to accelerate, too. So it's going to be this constant arms race, if we think about cybersecurity, of AI protecting on one side and harming on the other, and it's going to be interesting to see how that evolves over time. An arms race is the best way to describe it: the forces of good and evil, which sounds like a superhero movie, are going to be in constant conflict going forward.
Stanton Shanedling: So let me ask this; it's in the news. I was just reading a day or so ago that President Biden is looking at AI, and I'm reflecting on the politics of it all now. He's looking at some kind of potential federal legislation, I guess, so that we can be better protected, at least theoretically, around AI. So let me get your thoughts about the politics behind all of this.
Jigar S Patel, MD (Oracle): Oh, wow! That's a landmine I don't think I want to step on, but anyway. It will inevitably have political implications, right? But it goes back to that social media thing, too, and to your belief or disbelief that the government is here to help. I did some travel this week, and before I traveled I filled up my car, and the one thing that struck me was the little badge on the pump saying the meter was assessed for accuracy. You pick up a box of crackers, and there's a nutrition facts panel on the side. People ask, why don't we have nutrition facts for AI, or for computers? Those are all government-mandated things. Many of them have faded into the background, so we forget the government did that, but that sort of thing is omnipresent in our lives. EHRs, from an informatics perspective, have become the long arm of the government into clinical practice, because in the United States roughly 50% of payment for healthcare still comes from the government, so the government has an over-weighted interest from a budget perspective. So government will have its hands in it for various reasons: for good, for fiduciary responsibility, for all of those things, and people will fall on different sides of the spectrum, from "let it all be liberty and freedom" to "lock it all down." And that's true all over the world. One thing I've had the pleasure of in my career is seeing how various societies think very differently about healthcare. The most recent example that's interesting, from an EU perspective, is Sweden, which has very, very restrictive privacy laws on healthcare data, and you can learn from that example in both directions, good and bad, in bringing it to the United States.
One of the ways I think about healthcare in the United States is that if I see a psychiatrist, that can affect my medical health, and my medical provider should know about it, and vice versa. But in countries like Sweden, and I don't know the details, a patient could say, "I don't want my medical provider to see my psychiatry records." So as a provider I could say, wait a minute, you've unintentionally harmed your own care for the sake of privacy that's mandated by law, which is political. So those sorts of things are inevitable, and there will be a great debate on less versus more, as has always happened with politics, laws, and the societal approach to being monitored or not monitored.
Stanton Shanedling: So it's a function of the concept of regulation, and not just whether AI should or shouldn't be regulated in the gestalt of it all. Maybe, as you go down the funnel and become more specific, how is it that it might need to be regulated?
Jigar S Patel, MD (Oracle): Talk about a complex question. You could probably ask AI that question itself. I think it's an inevitability as we go down that funnel.
Stanton Shanedling: Clarence, last words?
clarence jones: You know, this has really been interesting. I would love to have another hour with you. I would actually love to have that background you've got; those clouds remind me of our world, with everything rumbling above us. But I do thank you for this. I think you answered my questions, and I really believe our listeners have an opportunity to learn something from this and to enter into this conversation in a much more informed way.
Jigar S Patel, MD (Oracle): Thank you for having me. I hope so; I hope it was informative.
Stanton Shanedling: Yeah, let me just ask this one last thing. What do you want to tell the public? What's a takeaway the Health Chatter audience should know? Like, don't be afraid of it, or it's here to stay, embrace it? What is it they need to hear?
Jigar S Patel, MD (Oracle): Learn about it. Know about it, and then you can form your own approach to it, your own understanding of it. You can't do that without a basic understanding of it and its implications. So my biggest advice is: learn about it. You have to learn.
Stanton Shanedling: Well, I so greatly appreciate your insights. I learned a lot just listening to you, and in hearing your AI responses. So it's still human? Yes, it absolutely is. So thank you very much. To our listening audience: keep health chatting away. Our next show will be on spirituality and health, which will also be a very interesting subject. So long, everybody.