AI in the Doctor’s Office: How Standards Can Support Trustworthiness

Ram Sriram poses smiling for a head shot outdoors on the NIST campus.

Credit: M. King/NIST

When you go to a medical appointment, does the doctor look at you while you talk? Or are they busy typing everything you say into a computer? If it’s the latter, you may find it will change soon, thanks to artificial intelligence (AI).

Some doctors’ offices are using AI services that transcribe your discussion with the doctor and automatically enter the results into your electronic medical records.

That’s a time-saver for doctors, who often spend hours filling out their patients’ records. It also allows them to look at the patient rather than their computer screen.

You may have also noticed AI chatbots asking if they can help you when visiting a company’s website. These chatbots are not as common in health care yet, but it’s possible they could assist you with basic medical questions in the future. This could free up the doctor’s time for more complex concerns.

These are just two ways AI may affect your future health care. But given the high stakes, its adoption must be guided by thoughtful standards.

The Need for AI Standards

If AI is going to work in the medical field (or any other field where it’s used), we need to develop specific and useful standards. These standards will need to include characteristics that can be used to judge an AI model’s reliability and trustworthiness.

One way AI can prove its trustworthiness is by demonstrating its correctness. If you’ve ever had a generative AI tool confidently give you the wrong answer to a question, you probably appreciate why this is important. If an AI tool says a patient has cancer, the doctor and patient need to know the odds that the AI is right or wrong.

Another issue is reliability, particularly of the datasets AI tools rely on for information. Just as a hacker can inject a virus into a computer network, someone could intentionally infect an AI dataset to make the system behave maliciously. Because many AI systems learn from large datasets, people can introduce Trojans, similar to computer viruses, that alter the AI system’s reasoning. This can happen at the level of the input (the dataset), the model (the thinking) or the AI’s environment and how it interacts with the world.

For example, researchers introduced a Trojan by placing a sticker on a stop sign. The Trojan caused a self-driving car to run through the stop sign because the car misread it as a speed limit sign. So, there are real dangers we’ll have to face if AI is unreliable. My NIST colleagues (Walid Keyrouz, Timothy Blattner and Michael Majurski) are doing considerable work to help detect Trojans, which I hope will make AI more reliable.

NIST researcher Ram D. Sriram wants to see technology make health care more available to more people.

Credit: M. King/NIST

Why AI Standards Matter

Standards will be critical to evaluating AI tools as they become more commonly used. Our research at NIST will help influence voluntary standards in this field, which will help the U.S. lead the world in AI, especially in health care and medicine.

Some people have an unfair and inaccurate perception that standards hinder innovation. In fact, standards are just an agreed-upon set of rules that encourage it.

One of my favorite examples of this is in music.

For thousands of years, music was not written down. But in the 11th century, a monk named Guido d’Arezzo developed an early system of music notation. Music notation has since become a standard that allows us to play and sing other people’s music from across the world or from earlier times. You can perform any musician’s work as long as you know how to read music.

I’d argue that we’ve had a lot of innovation in music since the 11th century, thanks in large part to this standard.

AI Can See Stem Cells

I plan to use AI in my own health care at some point in the future. I’m one of the several million people in the U.S. who have the gene mutation for age-related macular degeneration. While it doesn’t cause complete blindness, it can make it harder to see up close, read or drive. Luckily, stem cell implants grown from a patient's own cells offer a promising solution to preserve vision.

However, during the manufacturing process, these living cells undergo multiple transformations, which can create health risks for the patient. AI can examine the quality of the cells in a way that can predict which cells will work best for the patient.

My NIST colleague, Peter Bajcsy, and his collaborators at NIST and NIH have made significant contributions to this technology. Their work has led to an FDA-approved treatment for age-related macular degeneration, which has already shown success in patients.

It’s reassuring to know that this technology is available should I need it to protect my vision in the future. Helping ensure that AI works optimally could help millions of people, including me.

Future of AI in Health Care

Many of my family members — including my wife and brother — are doctors. My family is so steeped in the medical field that Sir Alexander Fleming, who discovered penicillin, was a friend of my grandfather’s. Fleming used to have tea with my grandfather every day during his visits to Madras, India.

So, I want to see AI help doctors, not replace them. In this case, I think of AI as “augmented” intelligence, not artificial intelligence. I’d love to see AI get to the point that it can help doctors in as many different areas of health care as possible.

I’m so passionate about this because I want to see technology make health care more available to more people.

Remember the example at the beginning of this post? I hope that if doctors can use AI as their notetakers, in addition to helping with decision-making, they’ll be able to spend less time on paperwork and more time seeing patients.

A Framework for Managing AI Risk

Like all technology, AI comes with risks that should be considered and managed. Learn about how NIST is helping to manage those risks with our AI Risk Management Framework. This free tool is recommended for use by AI users, including doctors and hospitals, to help them reap the benefits of AI while also managing the risks.
