As dystopian as it sounds, artificial intelligence (AI) already plays a big role in our daily lives. It’s present and working, even when we don’t realize it. Do you have autocorrect enabled on your phone? Is the little green Grammarly icon floating at the bottom of your screen? Did Gmail recently remind you to include an attachment in the email you were about to send? Welcome to the future.
It doesn’t stop at productivity and computing. AI is slowly seeping into other industries as well, including healthcare. And while it’s scary to think about patients searching for diagnoses through ChatGPT, it may actually be a good tool for reaching them. Here’s why:
It can help create approachable content for patients
When creating patient-focused content, we should avoid flowery, complex prose. Directness is a plus when talking about symptoms and diagnoses. According to usability research from the Nielsen Norman Group, about 79% of web users scan any new page they come across rather than reading word by word. Using direct, plain language helps readers get to the point faster and reduces the need for clarification.
AI tools can help you revise your content so it’s easy to read. Grammarly points out unnecessary words and passive voice, both of which can make sentences confusing for readers. And tools like Hemingway Editor measure your Flesch Reading Ease score and flag passive voice, complex terms and sentences that run too long.
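That Flesch Reading Ease score is just arithmetic on sentence and word length, so you can compute a rough version yourself. Here’s a minimal Python sketch; the vowel-group syllable counter is a crude approximation (real readability tools lean on dictionaries of exceptions):

```python
import re

def count_syllables(word: str) -> int:
    # Naive approximation: count runs of consecutive vowels.
    # Real tools use exception dictionaries for words like "cause" or "people".
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease:
    # 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    # Higher is easier; 60-70 is roughly plain English.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Jargon-heavy vs. plain-language phrasing of the same idea:
print(round(flesch_reading_ease(
    "The patient presented with idiopathic hypertension."), 1))
print(round(flesch_reading_ease(
    "The patient has high blood pressure. We don't know why."), 1))
```

The plain-language version scores far higher, which is exactly the signal these editors surface.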
It can help patients communicate better with their doctors
Using plain language to improve readability doesn’t mean you should be imprecise or “dumb down” medical concepts. Patients should learn key terms from your content so they can understand the concept, engage with the information and take action based on their understanding. In other words, explaining accurate medical terms makes it easier for patients to communicate effectively with their new and existing healthcare providers.
AI tools can suggest plain-language alternatives to medical jargon (like replacing “hypertension” with “high blood pressure”). They can also help you make content more scannable, direct and conversational, depending on your audience.
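If you want a predictable backstop alongside AI suggestions, even a small, human-curated glossary can catch jargon before content ships. Here’s a minimal Python sketch; the glossary entries are illustrative examples, not a clinical standard:

```python
# A tiny jargon checker: flags medical terms in a draft and suggests
# plain-language swaps. The glossary is a small illustrative sample.
JARGON_GLOSSARY = {
    "hypertension": "high blood pressure",
    "myocardial infarction": "heart attack",
    "edema": "swelling",
    "analgesic": "pain reliever",
}

def flag_jargon(text: str) -> list[tuple[str, str]]:
    # Returns (jargon term, plain-language suggestion) pairs found in the
    # text. Simple substring matching; a real tool would match word
    # boundaries and inflected forms.
    lowered = text.lower()
    return [(term, plain) for term, plain in JARGON_GLOSSARY.items()
            if term in lowered]

draft = "Patients with hypertension may develop edema in the lower limbs."
for term, plain in flag_jargon(draft):
    print(f'Consider replacing "{term}" with "{plain}".')
```

Nothing here replaces clinical review; it just keeps obvious jargon from slipping through.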
It can help you write more efficiently
In a world where news moves at lightning speed, AI tools can give you a leg up on producing publishable content quickly—whether it’s an announcement from the CDC or a health trend that needs debunking. ChatGPT does make a speedy writer.
The caveat: You still need to fact-check everything that gets published. These platforms are designed to learn from existing content and predict what you want to write, but that doesn’t mean the result is accurate. In 2022, a national poll found that 44% of physicians reported that half of the COVID-19 information they saw, read and heard from patients was inaccurate. And with so-called “hallucinations” running rampant, generative AI tools are liable to add to that misinformation.
How to work with AI in a safe and productive way
Technology is constantly evolving, and we’re all trying to catch up day-to-day. It’s important to stay on top of new developments in AI, but it’s also important to make sure you’re aware of its limitations and dangers.
In terms of privacy, never enter patient data into an AI platform—these platforms are definitely not HIPAA compliant, and the legal landscape around how they store and use data is still taking shape.
You should fact-check any content that an AI platform generates or edits for you. Some tools, like Perplexity, cite sources for every response, which makes fact-checking faster than starting from scratch on Google.
Generative AI tools are just that—tools. They’re not going to replace human writers anytime soon. They don’t have original thoughts or opinions, they can be dangerously inaccurate and they can veer into plagiarism if you’re not careful. But they’re helpful in any number of ways, and they can make it easier to connect with our patient populations and communities.