
Since the November 2022 release of a much-discussed artificial intelligence (AI)-based chatbot, I have been curious what all the buzz is about. I decided to engage my well-connected software-savvy son-in-law to hear where he thought things were going.

He started by suggesting that I pose a question to the chatbot about something of which I had some current knowledge. I had recently researched the concept of primal beliefs and so we asked the chatbot to write a short essay about when an individual develops his/her primal beliefs.

In a matter of seconds the “machine” spit out a very readable document that included all the information that had taken me several hours to unearth and digest. And ... it included the references that I had determined to be valid and appropriate. It was an impressive performance to say the least.

Obviously, a technological development with this capability is sending tremors through the educational establishment. One can easily think of several human skills that an AI like this might eventually make superfluous. It will also make it increasingly difficult for educators to determine a student’s true abilities – research, synthesis, and writing, to name just a few. But, of course, one could question whether we will need to teach and then test for these skills that the chatbot can perform more quickly. I’m going to leave it to the educators to struggle with that question.

In the long term, you and I may find that AI is a serious threat to our existence as health care providers. In the meantime, I’ve decided to focus on how we in primary care can take advantage of the wonders of the current AI technology.

My first thought is that if I were having trouble arriving at a diagnosis, I might appreciate having a chatbot to ask for help. Of course this would require that I had already taken a history, done a good exam, and ordered some obvious lab and imaging studies. It would also mean that I had decent knowledge and understanding of basic pathophysiology and was capable of thinking broadly enough to ask a question that would give me the greatest chance of getting the correct answer.

Knowing how to ask the right question is a skill that can be taught. For example, my wife is a successful and experienced online shopper but she acknowledges that when we have medical questions, I can often find the answer more quickly than she can. My relative success usually hinges on my choice of the key word(s) to begin the search, clearly the result of my medical training.

Once I have received a list of possible diagnoses from the chatbot, I must then be able to evaluate the validity and applicability of the references it has supplied. That too is a skill that can be taught. And, for the moment, the critical importance of having these two skills suggests that graduating from medical school will continue to give us some job security in the face of expanding AI.

The same process I could use to coax the chatbot toward a diagnosis could be applied when I am faced with a therapeutic question. Is surgery better than a pharmacological approach? If I need help with a dosage regimen, I could find this information online now. But, wouldn’t it be quicker and maybe better if I asked the chatbot to do the research for me and print a short essay on the pros and cons of different management approaches?

Once I’ve made the diagnosis, crafted a management plan, and now want to hand the patient a document in his/her primary language and at his/her reading skill level describing the diagnosis and giving detailed instructions to follow, this would seem to be a piece of cake for a chatbot given the appropriate commands. Hopefully I would remember to include the disclaimer that “This document was created with the help of a chatbot.”

Clearly, there is nothing to prevent our patients from asking the chatbot the same questions I have posed. And, no doubt, this will happen. It is already happening in a more cumbersome fashion when patients research their own symptoms. However, in the short term I believe we will retain the upper hand.

Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
