of doctors are using ChatGPT without notifying patients

Several US healthcare organizations are testing OpenAI's chatbot to draft first responses to patient messages, in a trial whose outlines remain vague.

It was only a matter of time before more and more sectors adopted AI, and ChatGPT in particular. The possibilities offered by the OpenAI chatbot seem almost unlimited, and it saves considerable time across many activities.

In the US, several players in the healthcare field are currently testing ChatGPT to respond to patients' messages, an experiment sometimes carried out without the patient's knowledge or consent.

ChatGPT to the rescue of telemedicine

As the Wall Street Journal reports, several organizations are currently testing ChatGPT by integrating it into MyChart, a platform designed for online communication between US patients and their medical providers.

To respond more quickly and efficiently to the influx of messages generated by the rise of telemedicine, UC San Diego Health, UW Health, and Stanford Health Care are all participating in this testing phase.

Every automated response is validated by a clinician

Thanks to the partnership between Microsoft and Epic, the company behind MyChart, ChatGPT is integrated into the system to generate automatic responses to certain requests. The AI can access patients' medical records and draw on them to formulate quick, tailored replies.

For now, human oversight remains: every message produced by the AI must be validated by a doctor before being sent to a patient, allowing for a final check and adjustments if needed.


No patient consent

The answers provided by the AI are for the most part convincing: patients do not notice that they are talking to a chatbot. And therein lies the problem, because at no point does the patient consent to an AI drawing on their medical data and producing an "automated" answer to their questions.

This use also raises other ethical issues. While we can assume that the clinicians taking part in this testing phase are meticulous when reviewing AI answers before validating them, that vigilance might wane as the tool becomes widespread, or simply out of habit. As we know, ChatGPT and its peers are still far from perfect today.
