The Office of Personnel Management has asked 65 insurance companies to provide monthly reports with detailed medical and pharmaceutical claims data for more than 8 million people enrolled in federal health plans, ºÚÁϳԹÏÍø News reported earlier this month. The request, which could dramatically expand the personally identifiable medical information OPM can access, alarmed health ethicists, insurance company executives, and privacy advocates.
Now, OPM Director Scott Kupor has two letters on his desk — one from 16 U.S. senators and another led by Rep. Robert Garcia, the top Democrat on the House Oversight Committee — asking him to drop the agency’s proposal.
“The collection of broad, personally identifiable data regarding medical care and treatment raises concerns that OPM could target certain federal employees seeking vital health care services that the Administration disagrees with on political grounds,” the Democratic House members wrote, citing ºÚÁϳԹÏÍø News.
The letters from congressional Democrats alone are unlikely to reverse OPM’s plans. Republicans — who control Congress and, ultimately, any oversight activities — have not weighed in on OPM’s notice.
OPM did not immediately respond to a request for comment on the letters. The agency, which said in its notice that it will use the data for oversight and to manage the federal health plans, has not publicly addressed written concerns about its proposal.
The notice, posted and sent to insurers in December, states that insurers are legally permitted to disclose “protected health information” to OPM and does not provide instructions to redact identifying information, such as names or diagnoses, from the claims.
That data could be used to implement cost-saving measures, health policy experts told ºÚÁϳԹÏÍø News earlier this month. But it would also give the Trump administration — which has laid off or fired tens of thousands of federal workers — access to a vast trove of personal information.
In the letters, Democratic lawmakers lay out a number of concerns about potential consequences of OPM’s obtaining detailed medical claims for millions of federal workers.
The senators’ letter — led by Adam Schiff of California and Mark Warner of Virginia — argues that OPM is not equipped to safeguard such sensitive data and that the administration could share the records across government agencies, as it has done with personal information on millions of Medicaid enrollees.
They also assert that the agency does not have a legal right to the data and that insurers’ sharing the information with OPM would “violate the core principles of the Health Insurance Portability and Accountability Act.” HIPAA requires certain organizations that maintain identifiable health information — such as hospitals and insurers — to protect it from being disclosed without patient consent. The proposal, the senators warn, threatens patients’ relationships with their clinicians, especially “sensitive disclosures regarding mental health, chronic illness, or other deeply personal conditions.”
“For these reasons, we strongly urge you to cease any further consideration of this proposal,” states the letter, which was sent to Kupor on April 19.
The American Federation of Government Employees, the largest union for federal employees, responded to ºÚÁϳԹÏÍø News’ reporting. The union noted in a statement from its national president, Everett Kelley, that OPM’s proposal “comes in the context of coordinated attacks on federal employees and repeated stretching of the legal boundaries for sharing sensitive personal data across government agencies.
“The question of what this administration intends to do with eight million Americans’ most private health information is not academic,” the AFGE statement read. “It is urgent.”
In an emailed statement, Kelley applauded the congressional letters.
“We are pleased that Democratic lawmakers on the Hill are just as outraged as we are over this administration’s blatant attempt to breach the privacy of millions of Americans across the country,” Kelley wrote. “We share their concerns regarding potential misuse of the information to continue illegally targeting workers and their demand for OPM to withdraw this proposal.”
ºÚÁϳԹÏÍø News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. This <a target="_blank" href="/health-industry/opm-federal-workers-health-records-hipaa-democratic-letters/">article</a> first appeared on <a target="_blank" href="">KFF Health News</a> and is republished here under a <a target="_blank" href="">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
LISTEN: Quashing innovation or risking a patient’s health? Lauren Sausser told WAMU’s Health Hub on April 15 why the White House and some states are at odds over how to regulate AI in health care.
Speed, efficiency, and lower costs. Those are the traits artificial intelligence supporters celebrate. But the same qualities worry physicians who fear the technology could lead to insurance denials with humans left out of the loop.
With scant federal regulation, states are left to shape the rules on AI in health care. For residents in the Washington, D.C., metropolitan area, a divide is playing out on opposite sides of the Potomac River. Maryland and Virginia have taken very different approaches to regulating AI in health insurance.
ºÚÁϳԹÏÍø News correspondent Lauren Sausser joined WAMU’s Health Hub on April 15 to explain why where you live may determine how much of a role AI plays in your coverage.
If you or someone you know may be experiencing a mental health crisis, contact the 988 Suicide & Crisis Lifeline by dialing or texting “988.”
Vince Lahey of Carefree, Arizona, embraces chatbots. From Big Tech products to “shady” ones, they offer “someone that I could share more secrets with than my therapist.”
He especially likes the apps for feedback and support, even though sometimes they berate him or lead him to fight with his ex-wife. “I feel more inclined to share more,” Lahey said. “I don’t care about their perception of me.”
There are a lot of people like Lahey.
Demand for mental health care has grown. Self-reported poor mental health days have risen by 25% since the 1990s, according to researchers analyzing survey data. According to the Centers for Disease Control and Prevention, suicide rates in 2022 reached levels that hadn’t been seen in nearly 80 years.
Many patients find a nonhuman therapist, powered by artificial intelligence, highly appealing — more appealing than a human with a reclining couch and stern manner. Online posts abound with pleas for a therapist who’s “not on the clock,” who’s less judgmental, or who’s just less expensive.
Most people who need care don’t get it, said Tom Insel, former head of the National Institute of Mental Health, citing his former agency’s research. Of those who do, 40% receive “minimally acceptable care.”
“There’s a massive need for high-quality therapy,” he said. “We’re in a world in which the status quo is really crappy, to use a scientific term.”
Insel said engineers from OpenAI told him last fall that about 5% to 10% of the company’s then-roughly 800 million-strong user base rely on ChatGPT for mental health support.
Polling suggests these AI chatbots may be even more popular among young adults. A KFF poll found about 3 in 10 respondents ages 18 to 29 said they had used an AI chatbot for mental or emotional health advice in the past year. Uninsured adults were about twice as likely as insured adults to report using AI tools. And nearly 60% of adult respondents who used a chatbot for mental health didn’t follow up with a flesh-and-blood professional.
The App Will Put You on the Couch
A burgeoning industry of apps offers AI therapists with human-like, often unrealistically attractive avatars serving as a sounding board for those experiencing anxiety, depression, and other conditions.
ºÚÁϳԹÏÍø News identified some 45 AI therapy apps in Apple’s App Store in March. While many charge steep prices for their services — one listed an annual plan for $690 — they’re still generally cheaper than talk therapy, which can cost hundreds of dollars an hour without insurance coverage.
On the App Store, “therapy” is often used as a marketing term, with small print noting the apps cannot diagnose or treat disease. One app, branded as OhSofia! AI Therapy Chat, had downloads in the six figures, said OhSofia! founder Anton Ilin in December.
“People are looking for therapy,” Ilin said. On one hand, the product’s name promises therapy; on the other, it warns in its small print that it “does not provide medical advice, diagnosis, treatment, or crisis intervention and is not a substitute for professional healthcare services.” Executives don’t think that’s confusing, since there are disclaimers in the app.
The apps promise big results without backup. One promises its users “immediate help during panic attacks.” Another claimed it was “proven effective by researchers” and that it offers 2.3 times faster relief for anxiety and stress. (It doesn’t say what it’s faster than.)
There are few legislative or regulatory guardrails around how developers refer to their products — or even whether the products are safe or effective, said Vaile Wright, senior director of the office of health care innovation at the American Psychological Association. Even federal patient privacy protections don’t apply, she said.
“Therapy is not a legally protected term,” Wright said. “So, basically, anybody can say that they give therapy.”
Many of the apps “overrepresent themselves,” said John Torous, a psychiatrist and clinical informaticist at Beth Israel Deaconess Medical Center. “Deceiving people that they have received treatment when they really have not has many negative consequences,” including delaying actual care, he said.
States such as Nevada, Illinois, and California are trying to sort out the regulatory disarray, enacting laws forbidding apps from describing their chatbots as AI therapists.
“It’s a profession. People go to school. They get licensed to do it,” said Jovan Jackson, a Nevada legislator, who co-authored an enacted bill banning apps from referring to themselves as mental health professionals.
Undercutting the hype, outside researchers and company representatives themselves have told the FDA and Congress that there’s little evidence supporting the efficacy of these products. The studies that do exist are limited, and some find that companion-focused chatbots are “consistently poor” at managing crises.
“When it comes to chatbots, we don’t have any good evidence it works,” said Charlotte Blease, a professor at Sweden’s Uppsala University who specializes in trial design for digital health products.
The lack of “good quality” clinical trials stems from the FDA’s failure to provide recommendations about how to test the products, she said. “FDA is offering no rigorous advice on what the standards should be.”
Department of Health and Human Services spokesperson Emily Hilliard said, in response, that “patient safety is the FDA’s highest priority” and that AI-based products are subject to agency regulations requiring the demonstration of “reasonable assurance of safety and effectiveness before they can be marketed in the U.S.”
The Silver-Tongued Apps
Preston Roche, a psychiatry resident, gets lots of questions about whether AI is a good therapist. After trying ChatGPT himself, he said he was initially “impressed” that it was able to use techniques to help him put negative thoughts “on trial.”
But Roche said after seeing posts on social media discussing people developing psychosis or being encouraged to make harmful decisions, he became disillusioned. The bots, he concluded, are sycophantic.
“When I look globally at the responsibilities of a therapist, it just completely fell on its face,” he said.
This sycophancy — the tendency of apps based on large language models to empathize, flatter, or delude their human conversation partner — is inherent to the app design, experts in digital health say.
“The models were developed to answer a question or prompt that you ask and to give you what you’re looking for,” said Insel, the former NIMH director, “and they’re really good at basically affirming what you feel and providing psychological support, like a good friend.”
That’s not what a good therapist does, though. “The point of psychotherapy is mostly to make you address the things that you have been avoiding,” he said.
While polling suggests many users are satisfied with what they’re getting out of ChatGPT and other apps, there have been reports of harm linked to the services, including encouragement to self-harm.
And lawsuits have been filed against OpenAI after ChatGPT users died by suicide or were hospitalized. In most of those cases, the plaintiffs allege they began using the apps for one purpose — like schoolwork — before confiding in them. These cases are being litigated.
Google and the startup Character.ai — which has been funded by Google and has created “avatars” that adopt specific personas, like athletes, celebrities, study buddies, or therapists — are settling other wrongful-death lawsuits.
OpenAI’s CEO, Sam Altman, has said a substantial number of users may talk about suicide on ChatGPT.
“We have seen a problem where people that are in fragile psychiatric situations using a model like 4o can get into a worse one,” Altman said in a public question-and-answer session, referring to a particular model of ChatGPT introduced in 2024. “I don’t think this is the last time we’ll face challenges like this with a model.”
An OpenAI spokesperson did not respond to requests for comment.
The company has said it is working on safeguards, such as referring users to 988, the national suicide hotline. However, the lawsuits against OpenAI argue existing safeguards aren’t good enough, and some research suggests the problems persist. OpenAI has released its own data suggesting the opposite.
OpenAI is contesting the lawsuits, offering, early in one case, a variety of defenses ranging from denying that its product caused self-harm to alleging that the user misused the product by inducing it to discuss suicide. It has also said it’s working to improve its safeguards.
Smaller apps also rely on OpenAI or other AI models to power their products, executives told ºÚÁϳԹÏÍø News. In interviews, startup founders and other experts said they worry that if a company simply imports those models into its own service, it might duplicate whatever safety flaws exist in the original product.
Data Risks
ºÚÁϳԹÏÍø News’ review of the App Store found listed age protections are minimal: Fifteen of the nearly four dozen apps say they could be downloaded by 4-year-old users; an additional 11 say they could be downloaded by those 12 and up.
Privacy standards are opaque. On the App Store, several apps are described as neither tracking personally identifiable data nor sharing it with advertisers — but on their company websites, privacy policies contained contrary descriptions, discussing the use of such data and their disclosure of information to advertisers, like AdMob.
In response to a request for comment, Apple spokesperson Adam Dema pointed to the company’s App Store policies, which bar apps from using health data for advertising and require them to display information about how they use data in general. Dema did not respond to a request for further comment about how Apple enforces these policies.
Researchers and policy advocates said that sharing psychiatric data with social media firms means patients could be profiled. They could be targeted by dodgy treatment firms or charged different prices for goods based on their health.
ºÚÁϳԹÏÍø News contacted several app makers about these discrepancies; two that responded said their privacy policies had been put together in error and pledged to change them to reflect their stances against advertising. (A third, the team at OhSofia!, said simply that they don’t do advertising, though their app’s privacy policy notes users “may opt out of marketing communications.”)
One executive told ºÚÁϳԹÏÍø News there’s business pressure to maintain access to the data.
“My general feeling is a subscription model is much, much better than any sort of advertising,” said Tim Rubin, the founder of Wellness AI, adding that he’d change the description in his app’s privacy policy.
One investor advised him not to swear off advertising, he said. “They’re like, essentially, that’s the most valuable thing about having an app like this, that data.”
“I think we’re still at the beginning of what’s going to be a revolution in how people seek psychological support and, even in some cases, therapy,” Insel said. “And my concern is that there’s just no framework for any of this.”
This year, executives from nearly every major health insurance company made the same declaration in calls with Wall Street analysts: Using artificial intelligence to make coverage decisions would help save them money.
Even the Trump administration is testing AI’s usefulness in managing the prior authorization process for the Medicare program, as well as seeking to override AI regulation by states.
But class action lawsuits have accused insurers of using AI to wrongfully withhold treatment. And a recent study outlines the risks of training AI on a current system rife with wrongful denials.
“There is a world in which using AI could make that worse, or at least replicate a bad human system, because the data that it would be training on is from that bad human system,” said Michelle Mello, a co-author of the study.
Although, Mello said, the research team found “real positives alongside the risks.”
In this video produced by ºÚÁϳԹÏÍø News’ Hannah Norman, Darius Tahir, a correspondent covering health technology, explains.
You can read Tahir’s recent coverage of AI’s use by health insurers below:
LISTEN: AI scribes are changing medical care. Here’s what to know if the technology shows up at your next doctor’s appointment.
Family physician Eric Boose has been using an artificial intelligence tool to get back to what he calls “old-fashioned medicine” — talking with patients face-to-face, without having to type into a computer at the same time.
“I can really just sit there and engage and just focus on them and listen,” said Boose.
Roughly two years ago, he started using an AI notetaker app during patient visits. The tool listens while he talks with patients and then automatically generates a visit summary based on the conversation. The summary is usually ready within seconds after the appointment ends.
“It’s taking care of all that tedious work of charting and taking notes during the visit,” he said. “It’s just freeing up a lot more time to get that done, and I can get home to my family earlier.”
Nearly a third of physician practices are using AI scribes, and others are working to add the tool in an effort to cut down on administrative work.
If your practitioner suggests using an AI scribe at your next appointment, here are three things to keep in mind:
1. Clinicians should ask for your permission.
At the start of an appointment, your doctor might ask something like, “Are you OK if I use an AI scribe to help me take notes during this appointment?” A common practice is to accept verbal, not written, consent from patients before turning the tool on. However, the legal requirements for getting permission to record a patient conversation vary by state.
Boose said you can ask to pause the AI scribe at any point, especially to discuss something sensitive. And if you decline altogether, your practitioner will likely return to taking manual notes on a computer.
2. AI scribes make mistakes too, so check their work.
Like other AI tools, medical scribes can “hallucinate,” or spontaneously add errors into a record. AI scribes can also omit important information or miss context clues within a conversation.
Clinicians are supposed to review and edit the AI-generated visit summaries before adding them to a patient’s record. As a patient, it’s a good practice to carefully review your visit summary and contact your health provider if you notice errors.
3. Yes, the AI company could use your data, with limitations.
Companies and health systems that offer AI scribe tools have access to medical data and are subject to federal standards about how they use and store patient data, under the Health Insurance Portability and Accountability Act, more commonly known as HIPAA.
They may use data from your appointment to help improve their software without informing you, said Darius Tahir, who reports on health technology for ºÚÁϳԹÏÍø News. “If information is ‘de-identified,’ which can mean stripping it of identifiers [and] making sure it’s not personally traceable back to people, then it is more free to be used in more ways,” he said. “There are way fewer regulatory requirements.”
If you want to know how your data is being used, ask either your practitioner or medical system for more information. But you might not get a clear answer, Tahir said.
People and Policy
The U.S. health care system will likely continue to integrate AI technology into patient care. The Trump administration strongly supports the development and use of AI, especially in health care. In early 2025, President Donald Trump issued an executive order reducing existing regulations on AI to help the U.S. “retain global leadership of artificial intelligence.” In December, the U.S. Department of Health and Human Services released an AI strategy stating that the department supports “integrating AI to modernize care and public health infrastructure to improve health at the individual and population levels.”
Emily Siner at Nashville Public Radio contributed to this report.
HealthQ is a health series from reporters Cara Anthony and Blake Farmer, approachable guides to an unapproachable health care system. It’s a collaboration between Nashville Public Radio and ºÚÁϳԹÏÍø News.
The CDC withheld the data for months as a team hit hard by mass layoffs and resignations sorted through the information. But now that scientists at the agency have posted their first batch of whole measles genomes — the genetic blueprint of the viruses — the rest should “start flowing more smoothly at a more rapid cadence,” said Kristian Andersen, an evolutionary virologist at the Scripps Research Institute who isn’t involved with the CDC’s effort but is following it.
The CDC did not answer queries from ºÚÁϳԹÏÍø News on its timeline for publishing measles data or analyses. However, once all the data is public, researchers can run analyses that will signal whether outbreaks across the U.S. last year resulted from the continuous spread of the disease between states, rather than separate introductions from abroad. If there was continuous transmission for a year, that means the U.S. has lost its status as a country that has eliminated measles. That status, which the U.S. has held since 2000, reflects a country’s vaccination rates: Two doses of the measles-mumps-rubella vaccine prevent most infections and so stop outbreaks from growing.
More careful analyses take weeks.
“We should see a report in April,” Andersen said, “assuming no political interference.”
This is the first time that the U.S. has applied sophisticated genomic techniques to measles, which largely disappeared from the country a quarter-century ago because of broad vaccine uptake.
Declining vaccination rates, misinformation, and the Trump administration’s muted response to outbreaks have fueled a resurgence of the disease. With at least 2,285 cases in 44 states, 2025 was the worst year for measles in more than three decades. This year is on track to surpass that, with 1,575 cases as of late March.
While welcoming the science, researchers say the government’s top priority should be to stop the virus from spreading.
“I think it’s incredibly important to do whole genome sequencing for outbreaks,” Andersen said, “but we shouldn’t need to do this for measles in the first place, because we have an extremely effective and safe vaccine.”
“That we’re even talking about this is nuts,” he added.
Health and Human Services Secretary Robert F. Kennedy Jr. and other government officials should sound an alarm about measles’ comeback and launch nationwide vaccine campaigns, said Rekha Lakshmanan, executive director of a Houston nonprofit that advocates for vaccine access.
“I applaud the science,” she said, “but the more urgent need is to get measles under control as quickly as possible.”

Top officials have done neither, and false notions about vaccines have been granted new life in Kennedy’s CDC. This includes abrupt changes to vaccine information on CDC websites that researchers say aren’t based on evidence and endanger lives.
Kennedy continues to promote unproven remedies that could mislead parents into believing that they can avoid vaccines without consequence. On a podcast in late February, Kennedy spoke at length about measures to improve America’s health but didn’t mention vaccines. He said preventive measures could entail “holistic medicine, or take vitamins, or take vitamin D, which is, as you know, it’s kind of miraculous.”
“The risk of measles remains low for most of the United States,” HHS spokesperson Emily Hilliard wrote. “CDC has made $8.5 million available to address measles response activities in 7 jurisdictions experiencing outbreaks,” she wrote. “The CDC, HHS principals, and the Secretary have been vocal that the MMR vaccine is the best way to protect yourself against measles.”
1,000 Genomes
In December, the CDC enlisted the help of one of the country’s leading centers for virus sequencing, the Broad Institute in Cambridge, Massachusetts. Major outbreaks in Texas, Utah, and South Carolina had been fueled by the same type of measles virus, labeled D8-9171. But since that type also circulates in Canada and Mexico, researchers need more data to discern whether it spread among states or entered the U.S. multiple times.
Whole genome sequencing provides that information because viruses evolve over time. The measles virus acquires a mutation every two to four transmissions between people, said Bronwyn MacInnis, director of pathogen surveillance at the Broad.
“There is enough signal in this data to tease apart questions at hand,” MacInnis said, “the main one being sustained transmission within this country.”
MacInnis’ team worked overtime to sequence the entire genomes of inactivated measles viruses that had been collected from states in 2025 and 2026.
“We’ve done about 1,000 samples and delivered the genome data back to the CDC,” sending it on a rolling basis since December, MacInnis said. “This is the CDC’s data to publish.”
The CDC didn’t post a single one of those genomes until late March, when eight appeared on a public database hosted by the National Center for Biotechnology Information. By April 1, an additional 154 had gone online.
“It should be on NCBI within a couple of weeks of being produced,” Andersen said, “and certainly not take longer than a month when you have an active outbreak.”
Genomic data holds clues about how outbreaks start and spread. It allows researchers to develop tests, treatments, and vaccines — and detect variants that might evade them.
Such data was critical in the covid pandemic. Chinese and Australian scientists posted the first coronavirus genome online on Jan. 10, 2020, within days of sequencing it. “It definitely shouldn’t take the CDC months,” said Eddie Holmes, the Australian virologist who helped publish the first coronavirus sequence.
One reason for the delay is that the CDC’s measles lab has been sorely understaffed amid mass layoffs and other turmoil at the agency over the past year, a CDC scientist told ºÚÁϳԹÏÍø News. Another reason, the researcher added, is a learning curve: The CDC and health departments haven’t needed to sequence hundreds of whole measles genomes before now. (ºÚÁϳԹÏÍø News agreed not to identify the scientist, who feared retaliation.)
In contrast with the CDC, the Utah Public Health Lab has shared measles genomes rapidly. Most of the roughly 970 measles genomes posted online since Jan. 1, 2025, were sequenced by the Utah lab, using samples from Utah, Arizona, South Carolina, and other states willing to share them.
“We’ve only got a handful of samples from Texas that were collected kind of in the middle of their outbreak,” said Kelly Oakeson, a genomics researcher at the Utah Department of Health and Human Services. The genomes of the Texas and Utah measles viruses are similar but distinct, Oakeson said, meaning that intermediate versions of the virus are missing.
If the genetic codes of viruses collected late in the Texas outbreak closely match those from Utah, that will suggest the spread was continuous and the country has lost its measles-free status. The hundreds of genome sequences still sitting at the CDC probably hold the answer.
Waiting on the CDC
The CDC expected to finish its analysis before April, said Daniel Salas, executive manager of the immunization program at the Pan American Health Organization, which works with the World Health Organization. That’s when PAHO was slated to evaluate the United States’ measles status.
He said PAHO delayed its evaluation until the organization’s annual meeting in November, partly because the CDC needed more time to do the genomic analysis and partly because the measles status of Mexico, Bolivia, and other countries is also under review, and holding staggered meetings for each country is inefficient.
The U.S. is the only country using whole genome sequencing to answer the elimination question, Salas said. Typically, countries classify measles viruses according to a tiny snippet of genes, then assume that large outbreaks caused by the same type are linked. Whole genomes provide a more accurate view.
“If the U.S. can fill in the blanks with genomic data, that’s a sort of breakthrough,” Salas said. “That doesn’t mean other countries are going to be able to pull off this kind of analysis,” he added. “It takes a lot of specialized knowledge and resources.”
Equipment to sequence and analyze genomes costs upward of $100,000, and processing each sample, including paying the researchers involved, typically costs $100 to $500 per sequence.
“I’m pro-science, but we shouldn’t have to do this,” said Theresa McCarthy Flynn, president of the North Carolina Pediatrics Society. “We don’t have to have a measles epidemic.”

Flynn said she regularly fields questions from parents concerned by misinformation spread by Kennedy and anti-vaccine groups, including the one he founded before joining the Trump administration. Parents have also pointed to changes in the CDC’s recommendations and to its websites that are at odds with the scientific consensus.
Before Kennedy took the helm, a CDC webpage said “Vaccines do not cause autism” in prominent type and listed studies in premier scientific journals that refuted a link between vaccines and developmental disorders.
Last year, the page shifted to saying, “Studies supporting a link have been ignored by health authorities.” The high-quality studies were replaced with a report from a single investigator who has ties to anti-vaccine groups. In an email to ºÚÁϳԹÏÍø News, HHS spokesperson Hilliard echoed the altered website’s claims about vaccines, disregarding extensive studies on the topic.
Flynn, of the pediatrics association, said, “The CDC itself is spreading misinformation about vaccines. I cannot overstate the seriousness of this.”
Although the acting director of the CDC, Jay Bhattacharya, says vaccines are the best way to prevent measles, he too has undermined vaccine policy. He said the controversial move to reduce the number of vaccines recommended to children was based on “gold standard science.” In fact, the new schedule makes the U.S. an outlier among peer nations. Hilliard wrote that the updated schedule was “aligning U.S. guidance with international norms.”
A federal court temporarily invalidated the change last month in a lawsuit brought by the American Academy of Pediatrics and other groups.
Bhattacharya hasn’t held briefings with the public or the press on the surge of measles this year or activated the CDC’s emergency capabilities.
“Normally, we’d have a big push to get vaccination rates up in areas where it’s low. We’d do a big social media push, put out ads on getting vaccinated,” said another CDC scientist whom ºÚÁϳԹÏÍø News agreed not to identify, because of fears of retaliation. “People at the CDC want to do this, but political leadership at the agency has not allowed the CDC to do it.”
Further, the Trump administration’s cuts to public health funds have made it hard for local health officials to protect communities. Philip Huang, director at Dallas County Health and Human Services in Texas, said the department lost over $4 million when the administration clawed back about $11 billion from health departments early last year as a measles outbreak surged in the state.
“We lost 27 staff and had to cancel over 20 of our community vaccination efforts, including to schools identified as having low vaccination rates,” he said. “There are simultaneous attacks on immunizations that are making our jobs harder.”
This <a target="_blank" href="/public-health/measles-genome-cdc-data-elimination-status-outbreaks-rfk/">article</a> first appeared on <a target="_blank" href="">KFF Health News</a> and is republished here under a <a target="_blank" href="https://creativecommons.org/licenses/by-nc-nd/4.0/">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.
ºÚÁϳԹÏÍø News senior correspondent Renuka Rayasam discussed the ºÚÁϳԹÏÍø News series “Priced Out,” which focuses on the health insurance crisis, on An Arm and a Leg on March 19.
ºÚÁϳԹÏÍø News rural health reporter Andrew Jones discussed the spread of measles across the Carolinas on WUNC’s Due South on March 17.
Céline Gounder, ºÚÁϳԹÏÍø News’ editor-at-large for public health, discussed on CBS News 24/7’s The Daily Report on March 16 how U.S. hospitals and insurers are turning to AI to settle disputes over medical claims and payments. On March 17, she outlined the court ruling blocking the Trump administration’s vaccine policy changes for children on CBS News’ CBS Mornings. Gounder also discussed Susie Wiles’ decision to stay on as White House chief of staff amid breast cancer treatment on CBS News 24/7’s The Takeout on March 16.
Regulating artificial intelligence, especially its use by health insurers, is becoming a politically divisive topic, and it’s scrambling traditional partisan lines.
Boosters, led by Trump, are not only pushing its integration into government, as in prior authorization, but also trying to stop others from building curbs and guardrails. A December executive order seeks to preempt most state efforts to govern AI, describing “a race with adversaries for supremacy” in a new “technological revolution.”
“To win, United States AI companies must be free to innovate without cumbersome regulation,” Trump’s order said. “But excessive State regulation thwarts this imperative.”
Across the nation, states are in revolt. At least four — Arizona, Maryland, Nebraska, and Texas — enacted legislation last year reining in the use of AI in health insurance. Two others, Illinois and California, enacted bills the year before.
Legislators in Rhode Island plan to try again this year after a bill requiring regulators to collect data on technology use failed to clear both chambers last year. A bill in North Carolina requiring insurers not to use AI as the sole basis of a coverage decision attracted significant interest from Republican legislators last year.
DeSantis, a former GOP presidential candidate, has rolled out an “AI Bill of Rights,” which includes restrictions on AI’s use in processing insurance claims and a requirement allowing a state regulatory body to inspect algorithms.
“We have a responsibility to ensure that new technologies develop in ways that are moral and ethical, in ways that reinforce our American values, not in ways that erode them,” DeSantis said during his State of the State address in January.
Ripe for Regulation
Polling shows Americans are skeptical of AI. A poll from Fox News found 63% of voters describe themselves as “very” or “extremely” concerned about artificial intelligence, including majorities across the political spectrum. Nearly two-thirds of Democrats and just over 3 in 5 Republicans said they had qualms about AI.
Health insurers’ tactics to hold down costs also trouble the public; polling from KFF found widespread discontent over issues like prior authorization. (KFF is a health information nonprofit that includes ºÚÁϳԹÏÍø News.) Reporting and litigation in recent years have highlighted the use of algorithms to rapidly deny insurance claims or prior authorization requests, apparently with little review by a doctor.
Last month, the House Ways and Means Committee hauled in executives from Cigna, UnitedHealth Group, and other major health insurers to address concerns about affordability. When pressed, the executives either denied or avoided talking about using the most advanced technology to reject authorization requests or toss out claims.
AI is “never used for a denial,” Cigna CEO David Cordani told lawmakers. Like others in the health insurance industry, the company is being sued for its methods of denying claims, as spotlighted by ProPublica. Cigna spokesperson Justine Sessions said the company’s claims-denial process “is not powered by AI.”
Indeed, companies are at pains to frame AI as a loyal servant. Optum, part of health giant UnitedHealth Group, announced Feb. 4 that it was rolling out tech-powered prior authorization, with plenty of mentions of speedier approvals.
“We’re transforming the prior authorization process to address the friction it causes,” said John Kontor, a senior vice president at Optum.
Still, AI is a natural field to regulate, said Alex Bores, a computer scientist and New York Assembly member prominent in the state’s legislative debate over AI, which culminated in a comprehensive bill governing the technology.
“So many people already find the answers that they’re getting from their insurance companies to be inscrutable,” said Bores, a Democrat who is running for Congress. “Adding in a layer that cannot by its nature explain itself doesn’t seem like it’ll be helpful there.”
At least some people in medicine — doctors, for example — are cheering legislators and regulators on. The American Medical Association “supports state regulations seeking greater accountability and transparency from commercial health insurers that use AI and machine learning tools to review prior authorization requests,” said John Whyte, the organization’s CEO.
Whyte said insurers already use AI and “doctors still face delayed patient care, opaque insurer decisions, inconsistent authorization rules, and crushing administrative work.”
Insurers Push Back
With legislation approved or pending in at least nine states, it’s unclear how much of an effect the state laws will have, said University of Minnesota law professor Daniel Schwarcz. States can’t regulate “self-insured” plans, which are used by many employers; only the federal government has that power.
But there are deeper issues, Schwarcz said: Most of the state legislation he’s seen would require a human to sign off on any decision proposed by AI but doesn’t specify what that means.
The laws don’t offer a clear framework for understanding how much review is enough, and over time humans tend to become a little lazy and simply sign off on any suggestions by a computer, he said.
Still, insurers view the spate of bills as a problem. “Broadly speaking, regulatory burden is real,” said Dan Jones, senior vice president for federal affairs at the Alliance of Community Health Plans, a trade group for some nonprofit health insurers. If insurers spend more time working through a patchwork of state and federal laws, he continued, that means “less time that can be spent and invested into what we’re intended to be doing, which is focusing on making sure that patients are getting the right access to care.”
Linda Ujifusa, a Democratic state senator in Rhode Island, said insurers came out last year against the bill she sponsored to restrict AI use in coverage denials. It passed in one chamber, though not the other.
“There’s tremendous opposition” to anything that regulates AI, she said, and “tremendous opposition” to identifying intermediaries such as private insurers or pharmacy benefit managers “as a problem.”
In a statement, AHIP, an insurer trade group, advocated for “balanced policies that promote innovation while protecting patients.”
“Health plans recognize that AI has the potential to drive better health care outcomes — enhancing patient experience, closing gaps in care, accelerating innovation, and reducing administrative burden and costs to improve the focus on patient care,” Chris Bond, an AHIP spokesperson, told ºÚÁϳԹÏÍø News. And, he continued, they need a “consistent, national approach anchored in a comprehensive federal AI policy framework.”
Seeking Balance
In California, Newsom has signed some laws regulating AI, including one requiring health insurers to ensure their algorithms are fairly and equitably applied. But the Democratic governor has vetoed others with a broader approach, such as a bill including more mandates about how the technology must work and requirements to disclose its use to regulators, clinicians, and patients upon request.
Chris Micheli, a Sacramento-based lobbyist, said the governor likely wants to ensure the state budget — consistently powered by outsize stock market gains, especially from tech companies — stays flush. That necessitates balance.
Newsom is trying to “ensure that financial spigot continues, and at the same time ensure that there are some protections for California consumers,” he said. He added insurers believe they’re subject to a welter of regulations already.
The Trump administration seems persuaded. The president’s recent executive order proposed to sue and restrict certain federal funding for any state that enacts what it characterized as “excessive” state regulation — with some exceptions, including for policies that protect children.
That order is possibly unconstitutional, said Carmel Shachar, a health policy scholar at Harvard Law School. The source of preemption authority is generally Congress, she said, and federal lawmakers twice took up, but ultimately declined to pass, a provision barring states from regulating AI.
“Based on our previous understanding of federalism and the balance of powers between Congress and the executive, a challenge here would be very likely to succeed,” Shachar said.
Some lawmakers view Trump’s order skeptically at best, noting the administration has been removing guardrails, and preventing others from erecting them, to an extreme degree.
“There isn’t really a question of, should it be federal or should it be state right now?” Bores said. “The question is, should it be state or not at all?”
Do you have an experience navigating prior authorization to get medical treatment that you’d like to share with us for our reporting?
ºÚÁϳԹÏÍø News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.
During a January White House roundtable touting the first grants to states under a new $50 billion rural health fund, Centers for Medicare & Medicaid Services Administrator Mehmet Oz called the idea “pretty cool.” Later that day, Sen. Bernie Sanders, the independent from Vermont, said it is decidedly not. And obstetricians and others chimed in on social media to express alarm, with one political activist condemning it outright.
The disparate responses highlight how excitement over the tech-heavy ideas states pitched in their applications for the federal Rural Health Transformation Program conflicts with the reality that there simply aren’t enough health workers to serve patients in many rural communities. Now, as states prepare to spend their first-year awards, tension is mounting, and nowhere is that strain more visible than in Alabama.
Oz has lauded the state’s proposal to invest in the relatively new technology of robotic ultrasounds.
“Alabama has no OB-GYNs in many of their counties,” Oz said, sitting with President Donald Trump and Cabinet members. The dearth of care, he said, prompted the proposal to use robots for ultrasounds on pregnant women.
Britta Cedergren, who works with Alabama’s maternal and fetal health programs, has a firm grip on reality: “No one is using autonomous robots.”
While robotic ultrasounds are a “really neat technology,” she said, they are not yet being used in the state. Instead, clinicians providing obstetric care lean on phone consultations and — when equipment and internet are available — telehealth.
The goal, she said, is to “support places where there is no care.”
Cedergren is part of multiple state maternal and fetal health groups and works daily with doctors, hospitals, and first responders. While enhanced technology is vital for patient care, it’s not a replacement for a well-trained workforce and a coordinated care and data system, she said.
In 2024, the most recent year for which data is available, Alabama’s infant mortality rate was well above the nationwide rate of 5.5 per 1,000 live births, according to data released by the Centers for Disease Control and Prevention.
Hospital-based obstetric unit closures, which often lead to a loss of health care providers who can care for expectant mothers and their babies, are a long-standing, ongoing trend in rural America. But Alabama’s loss of services has been particularly profound.
In 1980, 45 of the state’s 55 rural counties had hospital-based obstetric services. By 2025, far fewer did, according to state data. And the losses aren’t slowing. Five hospital obstetric units closed in 2023 and 2024, including in three rural counties: Monroe, Marengo, and Clarke.

Research by a professor at the University of Minnesota School of Public Health found that obstetric unit closures in remote areas led to an increase in preterm births, a leading cause of infant mortality.
“People will be pregnant and give birth in communities all over the place,” she said. “You have to be able to get to a place where you can be cared for.”
Nearly all 50 states’ applications for the Rural Health Transformation Program declared workforce shortages and maternal health needs as priorities, but only Alabama proposed using robots to fill the gap. The rural fund, which Congress created as a last-minute sweetener in Trump’s One Big Beautiful Bill Act last summer, encouraged states to be creative, be innovative, and pitch tech solutions.
Alabama was awarded $203 million for the first of the program’s five years. Among nearly a dozen initiatives, the state’s application included bolstering its rural workforce as well as improving maternal and fetal health.
Mike Presley, a spokesperson for the state agency overseeing the plan, said no one was available for an interview about telerobotic ultrasounds.
LoRissia Autery, an obstetrics and gynecology specialist in rural Alabama northwest of Birmingham, said the robots won’t decrease maternal and infant mortality. There are nuances, she said, to doing ultrasounds.
Many of her patients have high-risk pregnancies with diabetes, high blood pressure, and hepatitis C, she said. She said she worries about the kind of care that will be given to her patients, many of whom drive an hour or more to get to her, if robots are used instead of a trained specialist.
“It takes away just the care that we need to have for women,” said Autery, who co-founded her clinic. The practice includes three doctors, draws patients from five counties, and could use an additional physician to meet the demand, Autery said.
“Probably for the past six or seven years, we’ve been putting out feelers trying to find a fourth partner,” Autery said. “It’s difficult for a variety of reasons.”
In his social media remarks to Oz, Vermont’s Sanders called the lack of rural health care providers in the U.S. an “international embarrassment.”
“In the richest country on earth, we need more doctors, nurses, dentists and mental health counselors, not more robots,” Sanders wrote on the social platform X.
At least one country is using robots paired with trained workers to decrease deaths.
In the remote Canadian village of La Loche, Julie Fontaine operates an ultrasound robot at a clinic with two on-site nurse practitioners and rotating doctors. She said patients like the robot because it saves them the time and expense of traveling to a bigger regional health care facility six to seven hours away.
“When people come in, they’re like, ‘Wow, like, technology these days,’” said Fontaine, a member of a First Nations community in northern Saskatchewan. “It’s something they’ve never seen before or even used.”

When working with patients, Fontaine connects the robotic ultrasound machine to a tele-sonographer at a control station in Saskatoon. The sonographer then remotely operates a robotic arm on the machine. A radiologist, who can be anywhere, reads the scan’s report and sends it back to the family doctor in La Loche, said Ivar Mendez, a neurosurgeon who directs remote medicine initiatives in Canada. Most babies in Canada, he said, are delivered by family doctors or midwives, not specialists.
“The most important thing is the identification of a high-risk pregnancy early enough so you can intervene,” said Mendez, who added that the robotic ultrasound is “as good as the in-person ultrasound” but can’t be used when a patient needs a more invasive vaginal ultrasound. The mortality rate for mothers and newborns in the north, site of the La Loche clinic, is 20 to 25 times greater than in the rest of the nation, he said.
“One of the reasons is that there’s no availability of prenatal ultrasonography in those communities, so pregnant women have to travel to cities and they’re put up at hotels,” he said.
In a study, Mendez and his team at the University of Saskatchewan examined 87 telerobotic ultrasounds and found that 70% of the time, the robotic ultrasound made travel for care unnecessary. Nearly all the patients said they would use the robot again.
The same robotic ultrasound technology has yet to be widely deployed in the U.S.
Nicolas Lefebvre, chairman and chief executive of the robot’s creator and manufacturer, AdEchoTech, said the company has “U.S. maternity-specific projects that are currently under preparation.” The average price of a robot will be $250,000 to $350,000, according to AdEchoTech’s U.S.-based business development consultant.
Using robotic ultrasounds is one part of Alabama’s proposed maternal and fetal health initiative, according to the state’s application. Acknowledging the loss of hospital obstetric units, officials said they planned to connect smaller rural providers and health care facilities that lack “high-quality maternal and fetal health services” to regional care hubs that can provide the services digitally, including through telerobotic ultrasound.
For their workforce initiative, state officials proposed training programs for doctors, emergency services, and nurse-midwives.
Alabama’s application lays out estimated funding for the maternal and fetal health initiative, along with a five-year budget for the workforce initiative.
MacDonald wanted to find a new doctor right away. She needed refills for her blood pressure medications and wanted to book a follow-up appointment after a breast cancer scare.
She called 10 primary care practices near her home in Westwood, Massachusetts. None of the doctors, nurse practitioners, or physician assistants was taking new patients. A few offices told her that a doctor could see her in a year and a half or two years.
“I was just shocked by that, because we live in Boston and we’re supposed to have this great medical care,” said MacDonald, who is in her late 40s and has private health insurance. “I couldn’t get my mind around the fact that we didn’t have any doctors.”
The shortage of primary care providers is a national problem, but it’s particularly acute in Massachusetts. The state’s primary care workforce is shrinking faster than in most states, according to one analysis.
Some health networks, including the state’s largest hospital chain, Mass General Brigham, are turning to artificial intelligence for solutions.
In September, right when MacDonald was running out of blood pressure medications, MGB launched a new AI-supported program, Care Connect. MacDonald had received a letter from MGB, telling her no primary care providers in the network were taking new patients for in-person care. At the bottom of the letter was a link to Care Connect.
MacDonald downloaded the app and requested a telehealth appointment with a doctor. She then spent about 10 minutes chatting with an AI agent about why she wanted to see a physician. Afterward, the AI tool sent a summary of the chat to a primary care doctor who could see MacDonald by video.
“I think I got an appointment the next day or two days later,” she said. “It was just such a difference from being told I had to wait two years.”
Round-the-Clock Convenience
MGB says the AI tool can handle patients seeking care for colds, nausea, rashes, sprains, and other common urgent care requests, as well as mild to moderate mental health concerns and issues related to chronic diseases. After the patient types in a description of the symptoms or problem, the AI tool sends a doctor a suggested diagnosis and treatment plan.
Care Connect employs 12 physicians to work with the AI. They log in remotely from around the U.S., and patients can get help round-the-clock, seven days a week.
Care Connect is one of many AI-based tools that hospitals, doctors, and administrative staff are testing for a range of routine medical tasks, including note-taking, reviewing diagnostic results, billing, and ordering supplies.
Proponents argue that these AI programs can help relieve staff burnout and worker shortages by reducing time spent on medical records, referrals, and other administrative tasks. But there’s debate about whether and how to use AI to improve diagnoses. Critics worry that AI agents miss important details about overlapping medical conditions.
Critics also point out that AI tools can’t assess whether patients can afford follow-up care or get to that appointment. They have no insight into family dynamics or caretaking needs, things that primary physicians come to understand through long-term personal relationships.
Since her first foray on the app in September, MacDonald has used Care Connect at least three more times. Two of those interactions led to an eventual conversation with a remote doctor, but when she went online to book an appointment for travel-related shots, she interacted only with the AI chatbot before visiting the travel clinic.
MacDonald likes the convenience.
“I don’t have to leave work,” she said. “And I gained some peace of mind, knowing that I have a plan between now and me finding another in-person doctor.”
So while she hunted for that person, MacDonald planned to stay with Care Connect.
“This is a logical solution in the short term,” MacDonald said. “At the end of the day, it’s the patient who’s feeling the aftermath of all of the bigger things going on in health care.”
Scarcity and Burnout
Many factors contribute to the shortage of providers. Primary care doctors, such as pediatricians, internists, and family medicine physicians, are often dissatisfied with their pay. They earn substantially less, on average, than specialists such as surgeons, cardiologists, and anesthesiologists.
At the same time, their workload has been increasing. Primary care doctors describe days packed with complex patient visits, followed by evenings spent updating medical records and responding to patient messages.
When MacDonald signed onto Care Connect, she was one of 15,000 patients in the Mass General Brigham system without a primary care provider. That number has grown as primary care doctors have left MGB for rival hospital networks.
Rao, a primary care physician at an MGB health center in Chelsea, Massachusetts, said she’s staying at MGB for now, but she’s grown frustrated with the system’s leaders.
“They don’t make any effort to ease the shortage,” said Rao, who is also part of an effort to unionize MGB’s primary care doctors. “They put their money into specialties. Primary care feels like a peripheral part of the system, when it really should be a central part.”
Last year, MGB pledged to spend $400 million over five years on primary care services — though that includes the multiyear contract with Care Connect.
“Care Connect is just one solution among many in this broader strategy to alleviate the primary care capacity crisis,” Walls, MGB’s chief operating officer, said in an emailed statement. “Our investment supports retaining our current physicians as well as recruiting new ones.”
Walls said MGB has increased staffing support for primary care physicians, implemented other AI tools, and hired a new executive for primary care. Some of these changes are based on recommendations from their own primary care doctors.
But some of those doctors say they would like other changes, and salary increases in particular.
Walls would not disclose the exact amount MGB is spending on Care Connect.
Bridge to Better Care or a ‘Band-Aid’?
MGB has rolled out other AI tools, including one that can transcribe a doctor’s in-person conversations with patients. Rao isn’t using that tool. She worries that patient information could be leaked and medical privacy violated, and she doesn’t want her conversations with patients to be used to help develop the next generation of AI medical tools.
“What if they’re just using my interactions with patients to train their AI and boot me out of my job?” she said.
That’s not the goal, said Ireland, a primary care physician who manages the program for MGB. All decisions about patient care are still made by real doctors, she said.
“We are not replacing our in-person primary care,” she said. “It’s still important, and the majority of patients still have in-person primary care.”
But the fear among some primary care doctors at MGB is that Care Connect will gradually erode access to in-person primary care visits. Of the $400 million pledged by MGB for primary care, they want less spent on AI and more used to attract and increase pay for primary care staffers.
An MGB internist who is also involved in the unionizing effort said the use of Care Connect can only fill a gap. “That sounds like a band-aid for a broken system to me,” he said.
Expanding AI Tools
As of mid-December, the Care Connect doctors were each seeing 40 to 50 patients a day. By February, the MGB network plans to make Care Connect available to all Massachusetts and New Hampshire residents who have health insurance, and to hire more doctors to staff the program as needed.
Patients can use the program like an urgent care service, Ireland said. They can also decide to make one of the remote doctors their permanent primary care provider.
“Some patients want in-person care,” Ireland said. “But I do believe there’s a subset of patients who will appreciate the 24-hour, seven-day-a-week model and choose to be a part of this.”
Care Connect isn’t for patients who need emergency care or a physical exam, she said. And patients who need tests or imaging are referred to the network’s clinics or labs.
But the remote doctors can manage some of the same routine issues that all primary care doctors do, Ireland said, including moderate respiratory infections, allergies, and chronic conditions such as diabetes, high cholesterol, and depression.
Lin, chief of primary care at the Stanford University School of Medicine and founder of Stanford’s Healthcare AI Applied Research Team, says only immediate, not ongoing, health problems should be on that list.
“In its current state, the safest use of this tool is for more urgent care issues,” Lin said. “Your upper respiratory tract infections. Your urinary tract infections. Your musculoskeletal injuries. Your rashes.”
For patients with multiple chronic conditions such as high blood pressure and diabetes — or for patients with especially serious conditions like heart disease or cancer — Lin said nothing beats a human who sees you regularly.
Still, Lin agrees that the chat summary generated after an AI encounter can help a physician be more efficient. For patients, Lin understands the practical appeal of a virtual option.
“I would rather these patients get care, if that care can be safe,” he said, “than not get care at all.”
The company that developed the AI platform for Care Connect, K Health, contends the program is delivering safe, effective care to patients with complex, chronic ailments — many of whom have no other option besides a hospital emergency room.
“America’s got a big problem with health care, issues with cost, quality, and access,” the company’s CEO said. “To solve it, you need to start with primary care, and you have to use technology and AI.”
In addition to Mass General Brigham, K Health partners with five other health networks, including the highly ranked, Los Angeles-based Cedars-Sinai.
In a study funded by K Health, Cedars-Sinai researchers compared several hundred diagnosis and treatment recommendations made by AI with those made by physicians.
The researchers found the AI to be slightly better at identifying “critical red flags” and recommending care based on clinical guidelines, though the physicians were better at adjusting their treatment recommendations as they spoke more with the patient.
This article is from a partnership that includes ºÚÁϳԹÏÍø News.
ºÚÁϳԹÏÍø News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF — an independent source of health policy research, polling, and journalism. This article first appeared on KFF Health News and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The Office of Personnel Management has asked 65 insurance companies to provide monthly reports with detailed medical and pharmaceutical claims data of more than 8 million people enrolled in federal health plans, ºÚÁϳԹÏÍø News reported earlier this month. The request, which could dramatically expand the personally identifiable medical information OPM can access, alarmed health ethicists, insurance company executives, and privacy advocates.
Now, OPM Director Scott Kupor has two letters on his desk — one from 16 U.S. senators and another led by Rep. Robert Garcia, the top Democrat on the House Oversight Committee — asking him to drop the agency’s proposal.
“The collection of broad, personally identifiable data regarding medical care and treatment raises concerns that OPM could target certain federal employees seeking vital health care services that the Administration disagrees with on political grounds,” the Democratic House members wrote, citing ºÚÁϳԹÏÍø News.
The letters from congressional Democrats alone are unlikely to reverse OPM’s plans. Republicans — who control Congress and, ultimately, any oversight activities — have not weighed in on OPM’s notice.
OPM did not immediately respond to a request for comment on the letters. The agency, which said in its notice that it will use the data for oversight and to manage the federal health plans, has not publicly addressed written concerns about its proposal.
The notice, posted and sent to insurers in December, states that insurers are legally permitted to disclose “protected health information” to OPM and does not provide instructions to redact identifying information, such as names or diagnoses, from the claims.
That data could be used to implement cost-saving measures, health policy experts told ºÚÁϳԹÏÍø News earlier this month. But it would also give the Trump administration — which has laid off or fired tens of thousands of federal workers — access to a vast trove of personal information.
In the letters, Democratic lawmakers lay out a number of concerns about potential consequences of OPM’s obtaining detailed medical claims for millions of federal workers.
The senators’ letter — led by Adam Schiff of California and Mark Warner of Virginia — argues that OPM is not equipped to safeguard such sensitive data and that the administration could share the records across government agencies, as it has done with personal information on millions of Medicaid enrollees.
They also assert that the agency does not have a legal right to the data and that insurers’ sharing the information with OPM would “violate the core principles of the Health Insurance Portability and Accountability Act.” HIPAA requires certain organizations that maintain identifiable health information — such as hospitals and insurers — to protect it from being disclosed without patient consent. The proposal, the senators warn, threatens patients’ relationships with their clinicians, especially “sensitive disclosures regarding mental health, chronic illness, or other deeply personal conditions.”
“For these reasons, we strongly urge you to cease any further consideration of this proposal,” states the letter, which was sent to Kupor on April 19.
The American Federation of Government Employees, the largest union for federal employees, reacted to ºÚÁϳԹÏÍø News’ reporting. The union noted in a statement from its national president, Everett Kelley, that OPM’s proposal “comes in the context of coordinated attacks on federal employees and repeated stretching of the legal boundaries for sharing sensitive personal data across government agencies.
“The question of what this administration intends to do with eight million Americans’ most private health information is not academic,” the AFGE statement read. “It is urgent.”
In an emailed statement, Kelley applauded the congressional letters.
“We are pleased that Democratic lawmakers on the Hill are just as outraged as we are over this administration’s blatant attempt to breach the privacy of millions of Americans across the country,” Kelley wrote. “We share their concerns regarding potential misuse of the information to continue illegally targeting workers and their demand for OPM to withdraw this proposal.”
LISTEN: Quashing innovation or risking a patient’s health? Lauren Sausser told WAMU’s Health Hub on April 15 why the White House and some states are at odds over how to regulate AI in health care.
Speed, efficiency, and lower costs. Those are the traits artificial intelligence supporters celebrate. But the same qualities worry physicians who fear the technology could lead to insurance denials with humans left out of the loop.
With scant federal regulation, states are left to shape the rules on AI in health care. For residents in the Washington, D.C., metropolitan area, a divide is playing out on opposite sides of the Potomac River. Maryland and Virginia have taken very different approaches to regulating AI in health insurance.
ºÚÁϳԹÏÍø News correspondent Lauren Sausser joined WAMU’s Health Hub on April 15 to explain why where you live may determine how much of a role AI plays in your coverage.
If you or someone you know may be experiencing a mental health crisis, contact the 988 Suicide & Crisis Lifeline by dialing or texting “988.”
Vince Lahey of Carefree, Arizona, embraces chatbots. From Big Tech products to “shady” ones, they offer “someone that I could share more secrets with than my therapist.”
He especially likes the apps for feedback and support, even though sometimes they berate him or lead him to fight with his ex-wife. “I feel more inclined to share more,” Lahey said. “I don’t care about their perception of me.”
There are a lot of people like Lahey.
Demand for mental health care has grown. Self-reported poor mental health days have risen by 25% since the 1990s, according to research analyzing survey data. According to the Centers for Disease Control and Prevention, suicide rates in 2022 reached levels that hadn’t been seen in nearly 80 years.
Many patients find a nonhuman therapist, powered by artificial intelligence, highly appealing — more appealing than a human with a reclining couch and stern manner. People beg online for a therapist who’s “not on the clock,” who’s less judgmental, or who’s just less expensive.
Most people who need care don’t get it, said Tom Insel, former head of the National Institute of Mental Health, citing his former agency’s research. Of those who do, 40% receive “minimally acceptable care.”
“There’s a massive need for high-quality therapy,” he said. “We’re in a world in which the status quo is really crappy, to use a scientific term.”
Insel said engineers from OpenAI told him last fall that about 5% to 10% of the company’s then-roughly 800 million-strong user base rely on ChatGPT for mental health support.
Polling suggests these AI chatbots may be even more popular among young adults. A KFF poll found about 3 in 10 respondents ages 18 to 29 had turned to an AI chatbot for mental or emotional health advice in the past year. Uninsured adults were about twice as likely as insured adults to report using AI tools. And nearly 60% of adult respondents who used a chatbot for mental health didn’t follow up with a flesh-and-blood professional.
The App Will Put You on the Couch
A burgeoning industry of apps offers AI therapists with human-like, often unrealistically attractive avatars serving as a sounding board for those experiencing anxiety, depression, and other conditions.
ºÚÁϳԹÏÍø News identified some 45 AI therapy apps in Apple’s App Store in March. While many charge steep prices for their services — one listed an annual plan for $690 — they’re still generally cheaper than talk therapy, which can cost hundreds of dollars an hour without insurance coverage.
On the App Store, “therapy” is often used as a marketing term, with small print noting the apps cannot diagnose or treat disease. One app, branded as OhSofia! AI Therapy Chat, had downloads in the six figures, said OhSofia! founder Anton Ilin in December.
“People are looking for therapy,” Ilin said. On one hand, the product’s name promises therapy; on the other, it warns in its disclaimers that it “does not provide medical advice, diagnosis, treatment, or crisis intervention and is not a substitute for professional healthcare services.” Executives don’t think that’s confusing, since there are disclaimers in the app.
The apps promise big results without evidence to back them up. One promises its users “immediate help during panic attacks.” Another says it was “proven effective by researchers” and that it offers 2.3 times faster relief for anxiety and stress. (It doesn’t say what it’s faster than.)
There are few legislative or regulatory guardrails around how developers refer to their products — or even whether the products are safe or effective, said Vaile Wright, senior director of the office of health care innovation at the American Psychological Association. Even federal patient privacy protections don’t apply, she said.
“Therapy is not a legally protected term,” Wright said. “So, basically, anybody can say that they give therapy.”
Many of the apps “overrepresent themselves,” said John Torous, a psychiatrist and clinical informaticist at Beth Israel Deaconess Medical Center. “Deceiving people that they have received treatment when they really have not has many negative consequences,” including delaying actual care, he said.
States such as Nevada, Illinois, and California are trying to sort out the regulatory disarray, enacting laws forbidding apps from describing their chatbots as AI therapists.
“It’s a profession. People go to school. They get licensed to do it,” said Jovan Jackson, a Nevada legislator, who co-authored an enacted bill banning apps from referring to themselves as mental health professionals.
Beneath the hype, outside researchers and company representatives themselves have told the FDA and Congress that there’s little evidence supporting the efficacy of these products. What studies exist suggest some companion-focused chatbots are “consistently poor” at managing crises.
“When it comes to chatbots, we don’t have any good evidence it works,” said Charlotte Blease, a professor at Sweden’s Uppsala University who specializes in trial design for digital health products.
The lack of “good quality” clinical trials stems from the FDA’s failure to provide recommendations about how to test the products, she said. “FDA is offering no rigorous advice on what the standards should be.”
Department of Health and Human Services spokesperson Emily Hilliard said, in response, that “patient safety is the FDA’s highest priority” and that AI-based products are subject to agency regulations requiring the demonstration of “reasonable assurance of safety and effectiveness before they can be marketed in the U.S.”
The Silver-Tongued Apps
Preston Roche, a psychiatry resident, gets lots of questions about whether AI is a good therapist. After trying ChatGPT himself, he said he was initially “impressed” that it was able to use techniques to help him put negative thoughts “on trial.”
But Roche said after seeing posts on social media discussing people developing psychosis or being encouraged to make harmful decisions, he became disillusioned. The bots, he concluded, are sycophantic.
“When I look globally at the responsibilities of a therapist, it just completely fell on its face,” he said.
This sycophancy — the tendency of apps based on large language models to empathize, flatter, or delude their human conversation partner — is inherent to the app design, experts in digital health say.
“The models were developed to answer a question or prompt that you ask and to give you what you’re looking for,” said Insel, the former NIMH director, “and they’re really good at basically affirming what you feel and providing psychological support, like a good friend.”
That’s not what a good therapist does, though. “The point of psychotherapy is mostly to make you address the things that you have been avoiding,” he said.
While polling suggests many users are satisfied with what they’re getting out of ChatGPT and other apps, there have been complaints about the service, including encouragement to self-harm.
And lawsuits have been filed against OpenAI after ChatGPT users died by suicide or became hospitalized. In most of those cases, the plaintiffs allege they began using the apps for one purpose — like schoolwork — before confiding in them. These cases are ongoing.
Google and the startup Character.ai — which has been funded by Google and has created “avatars” that adopt specific personas, like athletes, celebrities, study buddies, or therapists — are settling other wrongful-death lawsuits.
OpenAI’s CEO, Sam Altman, has said a significant share of users may talk about suicide on ChatGPT.
“We have seen a problem where people that are in fragile psychiatric situations using a model like 4o can get into a worse one,” Altman said in a public question-and-answer session reported by , referring to a particular model of ChatGPT introduced in 2024. “I don’t think this is the last time we’ll face challenges like this with a model.”
An OpenAI spokesperson did not respond to requests for comment.
The company has said it has worked on safeguards, such as referring users to 988, the national suicide hotline. However, the lawsuits against OpenAI argue existing safeguards aren’t good enough, and some research suggests the problems persist. OpenAI has cited its own data suggesting the opposite.
OpenAI is fighting the lawsuits, offering, early in one case, a variety of defenses ranging from denying that its product caused self-harm to alleging that the user misused the product by inducing it to discuss suicide. It has also said it’s working to improve its safeguards.
Smaller apps also rely on OpenAI or other AI models to power their products, executives told ºÚÁϳԹÏÍø News. In interviews, startup founders and other experts said they worry that if a company simply imports those models into its own service, it might duplicate whatever safety flaws exist in the original product.
Data Risks
ºÚÁϳԹÏÍø News’ review of the App Store found listed age protections are minimal: Fifteen of the nearly four dozen apps say they could be downloaded by 4-year-old users; an additional 11 say they could be downloaded by those 12 and up.
Privacy standards are opaque. On the App Store, several apps are described as neither tracking personally identifiable data nor sharing it with advertisers — but on their company websites, privacy policies contained contrary descriptions, discussing the use of such data and their disclosure of information to advertisers, like AdMob.
In response to a request for comment, Apple spokesperson Adam Dema pointed to the company’s App Store policies, which bar apps from using health data for advertising and require them to display information about how they use data in general. Dema did not respond to a request for further comment about how Apple enforces these policies.
Researchers and policy advocates said that sharing psychiatric data with social media firms means patients could be profiled. They could be targeted by dodgy treatment firms or charged different prices for goods based on their health.
ºÚÁϳԹÏÍø News contacted several app makers about these discrepancies; two that responded said their privacy policies had been put together in error and pledged to change them to reflect their stances against advertising. (A third, the team at OhSofia!, said simply that they don’t do advertising, though their app’s privacy policy notes users “may opt out of marketing communications.”)
One executive told ºÚÁϳԹÏÍø News there’s business pressure to maintain access to the data.
“My general feeling is a subscription model is much, much better than any sort of advertising,” said Tim Rubin, the founder of Wellness AI, adding that he’d change the description in his app’s privacy policy.
One investor advised him not to swear off advertising, he said. “They’re like, essentially, that’s the most valuable thing about having an app like this, that data.”
“I think we’re still at the beginning of what’s going to be a revolution in how people seek psychological support and, even in some cases, therapy,” Insel said. “And my concern is that there’s just no framework for any of this.”
This year, executives from nearly every major health insurance company made the same declaration in calls with Wall Street analysts: Using artificial intelligence to make coverage decisions would help save them money.
Even the Trump administration is testing AI’s usefulness in managing the prior authorization process for the Medicare program, as well as seeking to override AI regulation by states.
But class action lawsuits have accused insurers of using AI to wrongfully withhold treatment. And a study outlines the risks of training AI on a current system rife with wrongful denials.
“There is a world in which using AI could make that worse, or at least replicate a bad human system, because the data that it would be training on is from that bad human system,” said Michelle Mello, a co-author of the study.
Still, Mello said, the research team found “real positives alongside the risks.”
In this video produced by ºÚÁϳԹÏÍø News’ Hannah Norman, Darius Tahir, a correspondent covering health technology, explains.
You can read Tahir’s recent coverage of AI’s use by health insurers below:
LISTEN: AI scribes are changing medical care. Here’s what to know if the technology shows up at your next doctor’s appointment.
Family physician Eric Boose has been using an artificial intelligence tool to get back to what he calls “old-fashioned medicine” — talking with patients face-to-face, without having to type into a computer at the same time.
“I can really just sit there and engage and just focus on them and listen,” said Boose.
Roughly two years ago, he started using an AI notetaker app during patient visits. The tool listens while he talks with patients and then automatically generates a visit summary based on the conversation. The summary is usually ready within seconds after the appointment ends.
“It’s taking care of all that tedious work of charting and taking notes during the visit,” he said. “It’s just freeing up a lot more time to get that done, and I can get home to my family earlier.”
Nearly a third of physician practices are using AI scribes and others are working to add the tool, in an effort to cut down on administrative work.
If your practitioner suggests using an AI scribe at your next appointment, here are three things to keep in mind:
1. Clinicians should ask for your permission.
At the start of an appointment, your doctor might ask something like, “Are you OK if I use an AI scribe to help me take notes during this appointment?” A common practice is to accept verbal, not written, consent from patients before turning the tool on. However, the legal requirements for getting permission to record a patient conversation vary by state.
Boose said you can ask to pause the AI scribe at any point, especially to discuss something sensitive. And if you decline altogether, your practitioner will likely return to taking manual notes on a computer.
2. AI scribes make mistakes too, so check their work.
Like other AI tools, medical scribes can “hallucinate,” or spontaneously add errors into a record. AI scribes can also omit important information or miss context clues within a conversation.
Clinicians are supposed to review and edit the AI-generated visit summaries before adding them to a patient’s record. As a patient, it’s a good practice to carefully review your visit summary and contact your health provider if you notice errors.
3. Yes, the AI company could use your data, with limitations.
Companies and health systems that offer AI scribe tools have access to medical data and are subject to federal standards about how they use and store patient data, under the Health Insurance Portability and Accountability Act, more commonly known as HIPAA.
They may use data from your appointment to help improve their software without informing you, said Darius Tahir, who reports on health technology for ºÚÁϳԹÏÍø News. “ If information is ‘de-identified,’ which can mean stripping it of identifiers [and] making sure it’s not personally traceable back to people, then it is more free to be used in more ways,” he said. “There are way fewer regulatory requirements.”
If you want to know how your data is being used, ask either your practitioner or medical system for more information. But you might not get a clear answer, Tahir said.
People and Policy
The U.S. health care system will likely continue to integrate AI technology into patient care. The Trump administration strongly supports the development and use of AI, especially in health care. In early 2025, President Donald Trump issued an executive order reducing existing regulations on AI to help the U.S. “retain global leadership of artificial intelligence.” In December, the U.S. Department of Health and Human Services released a document stating that the department supports “integrating AI to modernize care and public health infrastructure to improve health at the individual and population levels.”
Emily Siner at Nashville Public Radio contributed to this report.
HealthQ is a health series from reporters Cara Anthony and Blake Farmer, approachable guides to an unapproachable health care system. It’s a collaboration between Nashville Public Radio and ºÚÁϳԹÏÍø News.
The CDC withheld the data for months as a team hit hard by mass layoffs and resignations sorted through the information. But now that scientists at the agency have posted their first batch of whole measles genomes — the genetic blueprint of the viruses — the rest should “start flowing more smoothly at a more rapid cadence,” said Kristian Andersen, an evolutionary virologist at the Scripps Research Institute who isn’t involved with the CDC’s effort but is following it.
The CDC did not answer queries from ºÚÁϳԹÏÍø News on its timeline for publishing measles data or analyses. However, once all the data is public, researchers can run analyses that will signal whether outbreaks across the U.S. last year resulted from the continuous spread of the disease between states, rather than separate introductions from abroad. If there was continuous transmission for a year, that means the U.S. has lost its status as a country that has eliminated measles. That status, which the U.S. has held since 2000, reflects a country’s vaccination rates: Two doses of the measles-mumps-rubella vaccine prevent most infections and so stop outbreaks from growing.
More careful analyses take weeks.
“We should see a report in April,” Andersen said, “assuming no political interference.”
This is the first time that the U.S. has applied sophisticated genomic techniques to measles, which largely disappeared from the country a quarter-century ago because of broad vaccine uptake.
Declining vaccination rates, misinformation, and the Trump administration’s response to outbreaks have fueled a resurgence of the disease. With at least 2,285 cases in 44 states, 2025 was the worst year for measles in more than three decades. This year is on track to surpass that, with 1,575 cases as of late March.
While welcoming the science, researchers say the government’s top priority should be to stop the virus from spreading.
“I think it’s incredibly important to do whole genome sequencing for outbreaks,” Andersen said, “but we shouldn’t need to do this for measles in the first place, because we have an extremely effective and safe vaccine.”
“That we’re even talking about this is nuts,” he added.
Health and Human Services Secretary Robert F. Kennedy Jr. and other government officials should sound an alarm about measles’ comeback and launch nationwide vaccine campaigns, said Rekha Lakshmanan, executive director of a Houston nonprofit that advocates for vaccine access.
“I applaud the science,” she said, “but the more urgent need is to get measles under control as quickly as possible.”

Top officials have instead played down the outbreaks, and false notions about vaccines have been granted new life in Kennedy’s CDC. This includes abrupt changes to vaccine information on CDC websites that scientists say aren’t based on evidence and endanger lives.
Kennedy continues to promote unproven remedies that could mislead parents into believing that they can avoid vaccines without consequence. On a podcast in late February, Kennedy spoke at length about measures to improve America’s health but didn’t mention vaccines. He said preventive measures could entail “holistic medicine, or take vitamins, or take vitamin D, which is, as you know, it’s kind of miraculous.”
“The risk of measles remains low for most of the United States,” HHS spokesperson Emily Hilliard wrote. “CDC has made $8.5 million available to address measles response activities in 7 jurisdictions experiencing outbreaks,” she wrote. “The CDC, HHS principals, and the Secretary have been vocal that the MMR vaccine is the best way to protect yourself against measles.”
1,000 Genomes
In December, the CDC enlisted the help of one of the country’s leading centers for virus sequencing, the Broad Institute in Cambridge, Massachusetts. Major outbreaks in Texas, Utah, and South Carolina had been fueled by the same type of measles virus, labeled D8-9171. But since that type also circulates in Canada and Mexico, researchers need more data to discern whether it spread among states or entered the U.S. multiple times.
Whole genome sequencing provides that information because viruses evolve over time. The measles virus acquires a mutation every two to four transmissions between people, said Bronwyn MacInnis, director of pathogen surveillance at the Broad.
“There is enough signal in this data to tease apart questions at hand,” MacInnis said, “the main one being sustained transmission within this country.”
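The "signal" MacInnis describes can be illustrated with a back-of-the-envelope sketch (a hypothetical illustration, not the Broad's actual method): if the virus gains roughly one mutation every two to four transmissions, the number of mutations separating two genomes roughly bounds the length of the transmission chain between them.

```python
# Hypothetical sketch: bound the number of person-to-person transmissions
# implied by the mutations separating two measles genomes, assuming the
# rate quoted above of ~1 mutation per 2-4 transmissions.

def chain_length_bounds(mutations, per_mutation=(2, 4)):
    """Return (min, max) transmissions consistent with a mutation count."""
    low, high = per_mutation
    return mutations * low, mutations * high

# Two genomes differing by 5 mutations imply roughly 10-20 transmissions,
# far more than a single short introduction from abroad would produce.
print(chain_length_bounds(5))  # -> (10, 20)
```

Many mutations between samples from the same outbreak type would therefore point toward a long, sustained domestic chain rather than repeated fresh importations.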
MacInnis’ team worked overtime to sequence the entire genomes of inactivated measles viruses that had been collected from states in 2025 and 2026.
“We’ve done about 1,000 samples and delivered the genome data back to the CDC,” sending it on a rolling basis since December, MacInnis said. “This is the CDC’s data to publish.”
The CDC didn’t post a single one of those genomes until late March, when eight appeared on a public database hosted by the National Center for Biotechnology Information. By April 1, an additional 154 had gone online.
“It should be on NCBI within a couple of weeks of being produced,” Andersen said, “and certainly not take longer than a month when you have an active outbreak.”
Genomic data holds clues about how outbreaks start and spread. It allows researchers to develop tests, treatments, and vaccines — and detect variants that might evade them.
Such data was critical in the covid pandemic. Chinese and Australian scientists posted the first coronavirus genome online on Jan. 10, 2020, within days of sequencing it. “It definitely shouldn’t take the CDC months,” said Eddie Holmes, the Australian virologist who helped publish the first coronavirus sequence.
One reason for the delay is that the CDC’s measles lab has been sorely understaffed amid mass layoffs and other turmoil at the agency over the past year, a CDC scientist told ºÚÁϳԹÏÍø News. Another reason, the researcher added, is a learning curve: The CDC and health departments haven’t needed to sequence hundreds of whole measles genomes before now. (ºÚÁϳԹÏÍø News agreed not to identify the scientist, who feared retaliation.)
In contrast with the CDC, the Utah Public Health Lab has shared measles genomes rapidly. Most of the roughly 970 measles genomes posted online since Jan. 1, 2025, were sequenced by the state lab, using samples from Utah, Arizona, South Carolina, and other states willing to share them.
“We’ve only got a handful of samples from Texas that were collected kind of in the middle of their outbreak,” said Kelly Oakeson, a genomics researcher at the Utah Department of Health and Human Services. The genomes of the Texas and Utah measles viruses are similar but distinct, Oakeson said, meaning that intermediate versions of the virus are missing.
If the genetic code of viruses collected late in the Texas outbreak is a close match to that of Utah’s, it will suggest that spread was continuous and the country has lost its measles-free status. The hundreds of genome sequences still sitting at the CDC probably hold the answer.
Waiting on the CDC
The CDC expected to finish its analysis before April, said Daniel Salas, executive manager of the immunization program at the Pan American Health Organization, which works with the World Health Organization. That’s when PAHO was slated to evaluate the United States’ measles status.
He said PAHO delayed its evaluation until the organization’s annual meeting in November, partly because the CDC needed more time to do the genomic analysis and partly because the measles status of Mexico, Bolivia, and other countries is also under review, and holding staggered meetings for each country is inefficient.
The U.S. is the only country using whole genome sequencing to answer the elimination question, Salas said. Typically, countries classify measles viruses according to a tiny snippet of genes, then assume that large outbreaks caused by the same type are linked. Whole genomes provide a more accurate view.
“If the U.S. can fill in the blanks with genomic data, that’s a sort of breakthrough,” Salas said. “That doesn’t mean other countries are going to be able to pull off this kind of analysis,” he added. “It takes a lot of specialized knowledge and resources.”
Equipment to sequence and analyze genomes costs upward of $100,000, and the cost to process each sample, including paying the researchers involved, typically ranges from $100 to $500 per sequence.
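Those figures imply a substantial bill for an effort the size of the Broad's roughly 1,000-sample campaign. A hedged sketch of the arithmetic, using only the ranges quoted above (illustrative bounds, not an actual budget):

```python
# Rough cost sketch using the ranges quoted in the article; these are
# illustrative bounds, not an actual sequencing budget.

EQUIPMENT_MIN = 100_000      # "upward of $100,000" for sequencing equipment
PER_SAMPLE = (100, 500)      # typical per-sample processing cost, in dollars

def campaign_cost_bounds(samples):
    """Min/max total cost, in dollars, for sequencing `samples` genomes."""
    low, high = PER_SAMPLE
    return EQUIPMENT_MIN + samples * low, EQUIPMENT_MIN + samples * high

# ~1,000 samples lands between $200,000 and $600,000 all-in.
print(campaign_cost_bounds(1000))  # -> (200000, 600000)
```

Costs on that order help explain why few countries attempt whole-genome analysis at this scale.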
“I’m pro-science, but we shouldn’t have to do this,” said Theresa McCarthy Flynn, president of the North Carolina Pediatrics Society. “We don’t have to have a measles epidemic.”

Flynn said she regularly fields questions from parents concerned by misinformation spread by Kennedy and anti-vaccine groups, including the one he founded before joining the Trump administration. Parents have also pointed to changes in the CDC’s recommendations and to its websites that are at odds with the scientific consensus.
Before Kennedy took the helm, a CDC webpage said “Vaccines do not cause autism” in prominent type and listed studies in premier scientific journals that refuted a link between vaccines and developmental disorders.
Last year, the page shifted to saying, “Studies supporting a link have been ignored by health authorities.” The high-quality studies were replaced with a report from a single investigator who has ties to anti-vaccine groups. In an email to ºÚÁϳԹÏÍø News, HHS spokesperson Hilliard echoed the altered website’s claims about vaccines, disregarding extensive studies on the topic.
Flynn, of the pediatrics association, said, “The CDC itself is spreading misinformation about vaccines. I cannot overstate the seriousness of this.”
Although the acting director of the CDC, Jay Bhattacharya, says vaccines are the best way to prevent measles, he too has undermined vaccine policy. He said the controversial decision to reduce the number of vaccines recommended to children was based on “gold standard science.” In fact, the new schedule makes the U.S. an outlier among peer nations. Hilliard wrote that the updated schedule was “aligning U.S. guidance with international norms.”
A federal court temporarily invalidated the change last month in a lawsuit brought by the American Academy of Pediatrics and other groups.
Bhattacharya hasn’t held briefings with the public or the press on the surge of measles this year or activated the CDC’s emergency capabilities.
“Normally, we’d have a big push to get vaccination rates up in areas where it’s low. We’d do a big social media push, put out ads on getting vaccinated,” said another CDC scientist whom ºÚÁϳԹÏÍø News agreed not to identify, because of fears of retaliation. “People at the CDC want to do this, but political leadership at the agency has not allowed the CDC to do it.”
Further, the Trump administration’s cuts to public health funds have made it hard for local health officials to protect communities. Philip Huang, director at Dallas County Health and Human Services in Texas, said the department lost over $4 million when the administration clawed back about $11 billion from health departments early last year as a measles outbreak surged in the state.
“We lost 27 staff and had to cancel over 20 of our community vaccination efforts, including to schools identified as having low vaccination rates,” he said. “There are simultaneous attacks on immunizations that are making our jobs harder.”
This <a target="_blank" href="/public-health/measles-genome-cdc-data-elimination-status-outbreaks-rfk/">article</a> first appeared on <a target="_blank" href="">KFF Health News</a> and is republished here under a <a target="_blank" href="">Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License</a>.<img src="/wp-content/uploads/sites/8/2023/04/kffhealthnews-icon.png?w=150" style="width:1em;height:1em;margin-left:10px;">
ºÚÁϳԹÏÍø News senior correspondent Renuka Rayasam discussed the ºÚÁϳԹÏÍø News series “Priced Out,” which focuses on the health insurance crisis, on An Arm and a Leg on March 19.
ºÚÁϳԹÏÍø News rural health reporter Andrew Jones discussed the spread of measles across the Carolinas on WUNC’s Due South on March 17.
Céline Gounder, ºÚÁϳԹÏÍø News’ editor-at-large for public health, discussed on CBS News 24/7’s The Daily Report on March 16 how U.S. hospitals and insurers are turning to AI to settle disputes over medical claims and payments. On March 17, she outlined the court ruling blocking the Trump administration’s vaccine policy changes for children on CBS News’ CBS Mornings. Gounder also discussed Susie Wiles’ decision to stay on as White House chief of staff amid breast cancer treatment on CBS News 24/7’s The Takeout on March 16.
Regulating artificial intelligence, especially its use by health insurers, is becoming a politically divisive topic, and it’s scrambling traditional partisan lines.
Boosters, led by Trump, are not only pushing its integration into government, as with prior authorization, but also trying to stop others from building curbs and guardrails. A December executive order seeks to preempt most state efforts to govern AI, describing “a race with adversaries for supremacy” in a new “technological revolution.”
“To win, United States AI companies must be free to innovate without cumbersome regulation,” Trump’s order said. “But excessive State regulation thwarts this imperative.”
Across the nation, states are in revolt. At least four — Arizona, Maryland, Nebraska, and Texas — enacted legislation last year reining in the use of AI in health insurance. Two others, Illinois and California, enacted bills the year before.
Legislators in Rhode Island plan to try again this year after a bill requiring regulators to collect data on technology use failed to clear both chambers last year. A bill in North Carolina requiring insurers not to use AI as the sole basis of a coverage decision attracted significant interest from Republican legislators last year.
DeSantis, a former GOP presidential candidate, has rolled out an “AI Bill of Rights,” which includes restrictions on AI’s use in processing insurance claims and a requirement allowing a state regulatory body to inspect algorithms.
“We have a responsibility to ensure that new technologies develop in ways that are moral and ethical, in ways that reinforce our American values, not in ways that erode them,” DeSantis said during his State of the State address in January.
Ripe for Regulation
Polling shows Americans are skeptical of AI. A poll from Fox News found 63% of voters describe themselves as “very” or “extremely” concerned about artificial intelligence, including majorities across the political spectrum. Nearly two-thirds of Democrats and just over 3 in 5 Republicans said they had qualms about AI.
Health insurers’ tactics to hold down costs also trouble the public; polling from KFF found widespread discontent over issues like prior authorization. (KFF is a health information nonprofit that includes ºÚÁϳԹÏÍø News.) Reporting and litigation in recent years have highlighted the use of algorithms to rapidly deny insurance claims or prior authorization requests, apparently with little review by a doctor.
Last month, the House Ways and Means Committee hauled in executives from Cigna, UnitedHealth Group, and other major health insurers to address concerns about affordability. When pressed, the executives either denied or avoided talking about using the most advanced technology to reject authorization requests or toss out claims.
AI is “never used for a denial,” Cigna CEO David Cordani told lawmakers. Like others in the health insurance industry, the company is being sued for its methods of denying claims, as spotlighted by ProPublica. Cigna spokesperson Justine Sessions said the company’s claims-denial process “is not powered by AI.”
Indeed, companies are at pains to frame AI as a loyal servant. Optum, part of health giant UnitedHealth Group, announced Feb. 4 that it was rolling out tech-powered prior authorization, with plenty of mentions of speedier approvals.
“We’re transforming the prior authorization process to address the friction it causes,” said John Kontor, a senior vice president at Optum.
Still, Alex Bores, a computer scientist and New York Assembly member prominent in the state’s legislative debate over AI, which culminated in a comprehensive bill governing the technology, said AI is a natural field to regulate.
“So many people already find the answers that they’re getting from their insurance companies to be inscrutable,” said Bores, a Democrat who is running for Congress. “Adding in a layer that cannot by its nature explain itself doesn’t seem like it’ll be helpful there.”
At least some people in medicine — doctors, for example — are cheering legislators and regulators on. The American Medical Association “supports state regulations seeking greater accountability and transparency from commercial health insurers that use AI and machine learning tools to review prior authorization requests,” said John Whyte, the organization’s CEO.
Whyte said insurers already use AI and “doctors still face delayed patient care, opaque insurer decisions, inconsistent authorization rules, and crushing administrative work.”
Insurers Push Back
With legislation approved or pending in at least nine states, it’s unclear how much of an effect the state laws will have, said University of Minnesota law professor Daniel Schwarcz. States can’t regulate “self-insured” plans, which are used by many employers; only the federal government has that power.
But there are deeper issues, Schwarcz said: Most of the state legislation he’s seen would require a human to sign off on any decision proposed by AI but doesn’t specify what that means.
The laws don’t offer a clear framework for understanding how much review is enough, and over time humans tend to become a little lazy and simply sign off on any suggestions by a computer, he said.
Still, insurers view the spate of bills as a problem. “Broadly speaking, regulatory burden is real,” said Dan Jones, senior vice president for federal affairs at the Alliance of Community Health Plans, a trade group for some nonprofit health insurers. If insurers spend more time working through a patchwork of state and federal laws, he continued, that means “less time that can be spent and invested into what we’re intended to be doing, which is focusing on making sure that patients are getting the right access to care.”
Linda Ujifusa, a Democratic state senator in Rhode Island, said insurers came out last year against the bill she sponsored to restrict AI use in coverage denials. It passed in one chamber, though not the other.
“There’s tremendous opposition” to anything that regulates the technology, she said, and “tremendous opposition” to identifying intermediaries such as private insurers or pharmacy benefit managers “as a problem.”
In a statement, AHIP, an insurer trade group, advocated for “balanced policies that promote innovation while protecting patients.”
“Health plans recognize that AI has the potential to drive better health care outcomes — enhancing patient experience, closing gaps in care, accelerating innovation, and reducing administrative burden and costs to improve the focus on patient care,” Chris Bond, an AHIP spokesperson, told ºÚÁϳԹÏÍø News. And, he continued, they need a “consistent, national approach anchored in a comprehensive federal AI policy framework.”
Seeking Balance
In California, Newsom has signed some laws regulating AI, including one requiring health insurers to ensure their algorithms are fairly and equitably applied. But the Democratic governor has vetoed others with a broader approach, such as a bill including more mandates about how the technology must work and requirements to disclose its use to regulators, clinicians, and patients upon request.
Chris Micheli, a Sacramento-based lobbyist, said the governor likely wants to ensure the state budget — consistently powered by outsize stock market gains, especially from tech companies — stays flush. That necessitates balance.
Newsom is trying to “ensure that financial spigot continues, and at the same time ensure that there are some protections for California consumers,” he said. He added insurers believe they’re subject to a welter of regulations already.
The Trump administration seems persuaded. The president’s recent executive order proposed suing, and restricting certain federal funding for, any state that enacts what it characterized as “excessive” state regulation, with some exceptions, including for policies that protect children.
That order is possibly unconstitutional, said Carmel Shachar, a health policy scholar at Harvard Law School. The source of preemption authority is generally Congress, she said, and federal lawmakers twice took up, but ultimately declined to pass, a provision barring states from regulating AI.
“Based on our previous understanding of federalism and the balance of powers between Congress and the executive, a challenge here would be very likely to succeed,” Shachar said.
Some lawmakers view Trump’s order skeptically at best, noting the administration has been removing guardrails, and preventing others from erecting them, to an extreme degree.
“There isn’t really a question of, should it be federal or should it be state right now?” Bores said. “The question is, should it be state or not at all?”
Do you have an experience navigating prior authorization to get medical treatment that you’d like to share with us for our reporting?
ºÚÁϳԹÏÍø News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF, an independent source of health policy research, polling, and journalism.
During a January White House roundtable touting the first grants to states under a new $50 billion rural health fund, Centers for Medicare & Medicaid Services Administrator Mehmet Oz called the idea “pretty cool.” Later that day, Sen. Bernie Sanders, the independent from Vermont, made clear he disagreed. And obstetricians and others chimed in on social media to express alarm, with one political activist voicing similar criticism.
The disparate responses highlight how excitement over the tech-heavy ideas states pitched in their applications for the federal Rural Health Transformation Program conflicts with the reality that there simply aren’t enough health workers to serve patients in many rural communities. Now, as states prepare to spend their first-year awards, tension is mounting, and nowhere is that strain more visible than in Alabama.
Oz has lauded the state’s proposal to invest in the relatively new technology of robotic ultrasounds.
“Alabama has no OB-GYNs in many of their counties,” Oz said, sitting with President Donald Trump and Cabinet members. The dearth of care, he said, prompted the proposal to use robots for ultrasounds on pregnant women.
Britta Cedergren, who works with the state’s maternal and fetal health groups, has a firm grip on reality: “No one is using autonomous robots.”
While robotic ultrasounds are a “really neat technology,” she said, they are not yet being used in the state. Instead, clinicians providing obstetric care lean on phone consultations and — when equipment and internet are available — telehealth.
The goal, she said, is to “support places where there is no care.”
Cedergren is part of multiple state maternal and fetal health groups and works daily with doctors, hospitals, and first responders. While enhanced technology is vital for patient care, it’s not a replacement for a well-trained workforce and a coordinated care and data system, she said.
In 2024, the most recent year for which data is available, Alabama’s infant mortality rate was well above the nationwide rate of 5.5 per 1,000 live births, according to data released by the Centers for Disease Control and Prevention.
Hospital-based obstetric unit closures, which often lead to a loss of health care providers who can care for expectant mothers and their babies, are a long-standing, ongoing trend in rural America. But Alabama’s loss of services has been particularly profound.
In 1980, 45 of the state’s 55 rural counties had hospital-based obstetric services. By 2025, far fewer did, according to state data. And the losses aren’t slowing. Five hospital obstetric units closed in 2023 and 2024, including in three rural counties: Monroe, Marengo, and Clarke.

Research by a professor at the University of Minnesota School of Public Health found that obstetric closures in remote areas were followed by increases in preterm births, a leading cause of infant mortality.
“People will be pregnant and give birth in communities all over the place,” she said. “You have to be able to get to a place where you can be cared for.”
Nearly all 50 states’ applications for the Rural Health Transformation Program declared workforce shortages and maternal health needs as priorities, but only Alabama proposed using robots to fill the gap. The rural fund, which Congress created as a last-minute sweetener in Trump’s One Big Beautiful Bill Act last summer, encouraged states to be creative, be innovative, and pitch tech solutions.
Alabama was awarded $203 million for the first of the program’s five years. Among nearly a dozen proposed initiatives, the state’s application included bolstering its rural workforce as well as improving maternal and fetal health.
Mike Presley, a spokesperson for the state agency overseeing the plan, said no one was available for an interview about telerobotic ultrasounds.
LoRissia Autery, an obstetrics and gynecology specialist in rural Alabama northwest of Birmingham, said the robots won’t decrease maternal and infant mortality. There are nuances, she said, to doing ultrasounds.
Many of her patients have high-risk pregnancies with diabetes, high blood pressure, and hepatitis C, she said. She said she worries about the kind of care that will be given to her patients, many of whom drive an hour or more to get to her, if robots are used instead of a trained specialist.
“It takes away just the care that we need to have for women,” said Autery, who co-founded her clinic. The practice includes three doctors, draws patients from five counties, and could use an additional physician to meet the demand, Autery said.
“Probably for the past six or seven years, we’ve been putting out feelers trying to find a fourth partner,” Autery said. “It’s difficult for a variety of reasons.”
In his social media remarks to Oz, Vermont’s Sanders called the lack of rural health care providers in the U.S. an “international embarrassment.”
“In the richest country on earth, we need more doctors, nurses, dentists and mental health counselors, not more robots,” Sanders wrote on the social platform X.
At least one country is using robots paired with trained workers to decrease deaths.
In the remote Canadian village of La Loche, Julie Fontaine operates an ultrasound robot at a clinic with two on-site nurse practitioners and rotating doctors. She said patients like the robot because it saves them the time and expense of traveling to a bigger regional health care facility six to seven hours away.
“When people come in, they’re like, ‘Wow, like, technology these days,’” said Fontaine, whose village sits in northern Saskatchewan. “It’s something they’ve never seen before or even used.”

When working with patients, Fontaine connects the robotic ultrasound machine to a tele-sonographer at a control station in Saskatoon. The sonographer then remotely operates a robotic arm on the machine. A radiologist, who can be anywhere, reads the scan’s report and sends it back to the family doctor in La Loche, said Ivar Mendez, a neurosurgeon who directs the Canadian remote-care program behind the effort. Most babies in Canada, he said, are delivered by family doctors or midwives, not specialists.
“The most important thing is the identification of a high-risk pregnancy early enough so you can intervene,” said Mendez, who added that the robotic ultrasound is “as good as the in-person ultrasound” but can’t be used when a patient needs a more invasive vaginal ultrasound. The mortality rate for mothers and newborns in the north, site of the La Loche clinic, is 20 to 25 times greater than in the rest of the nation, he said.
“One of the reasons is that there’s no availability of prenatal ultrasonography in those communities, so pregnant women have to travel to cities and they’re put up at hotels,” he said.
In a study, Mendez and his team at the University of Saskatchewan examined 87 telerobotic ultrasounds and found that 70% of the time, the robotic ultrasound made travel for care unnecessary. Nearly all the patients said they would use the robot again.
The same robotic ultrasound technology is now being eyed for use in the U.S.
Nicolas Lefebvre, chairman and chief executive of the robot’s creator and manufacturer, AdEchoTech, said the company has “U.S. maternity-specific projects that are currently under preparation.” The average price of a robot will be $250,000 to $350,000, according to AdEchoTech’s U.S.-based business development consultant.
Using robotic ultrasounds is one part of Alabama’s proposed maternal and fetal health initiative, according to the state’s application. Acknowledging the loss of hospital obstetric units, officials said they planned to connect smaller rural providers and health care facilities that lack “high-quality maternal and fetal health services” to regional care hubs that can provide the services digitally, including through telerobotic ultrasound.
For their workforce initiative, state officials proposed training programs for doctors, emergency services, and nurse-midwives.
The state’s application lays out estimated funding for the maternal and fetal health initiative, along with a five-year budget for its workforce initiative.
MacDonald wanted to find a new doctor right away. She needed refills for her blood pressure medications and wanted to book a follow-up appointment after a breast cancer scare.
She called 10 primary care practices near her home in Westwood, Massachusetts. None of the doctors, nurse practitioners, or physician assistants was taking new patients. A few offices told her that a doctor could see her in a year and a half or two years.
“I was just shocked by that, because we live in Boston and we’re supposed to have this great medical care,” said MacDonald, who is in her late 40s and has private health insurance. “I couldn’t get my mind around the fact that we didn’t have any doctors.”
The shortage of primary care providers is a national problem, but it’s particularly acute in Massachusetts. The state’s primary care workforce is shrinking faster than in most states, according to one analysis.
Some health networks, including Mass General Brigham, the state’s largest hospital chain, are turning to artificial intelligence for solutions.
In September, right when MacDonald was running out of blood pressure medications, MGB launched a new AI-supported program, Care Connect. MacDonald had received a letter from MGB, telling her no primary care providers in the network were taking new patients for in-person care. At the bottom of the letter was a link to Care Connect.
MacDonald downloaded the app and requested a telehealth appointment with a doctor. She then spent about 10 minutes chatting with an AI agent about why she wanted to see a physician. Afterward, the AI tool sent a summary of the chat to a primary care doctor who could see MacDonald by video.
“I think I got an appointment the next day or two days later,” she said. “It was just such a difference from being told I had to wait two years.”
Round-the-Clock Convenience
MGB says the AI tool can handle patients seeking care for colds, nausea, rashes, sprains, and other common urgent care requests, as well as mild to moderate mental health concerns and issues related to chronic diseases. After the patient types in a description of the symptoms or problem, the AI tool sends a doctor a suggested diagnosis and treatment plan.
Care Connect employs 12 physicians to work with the AI. They log in remotely from around the U.S., and patients can get help round-the-clock, seven days a week.
Care Connect is one of many AI-based tools that hospitals, doctors, and administrative staff are testing for a range of routine medical tasks, including note-taking, reviewing diagnostic results, billing, and ordering supplies.
Proponents argue that these AI programs can help relieve staff burnout and worker shortages by reducing time spent on medical records, referrals, and other administrative tasks. But there’s debate about whether and how to use AI to improve diagnoses. Critics worry that AI agents miss important details about overlapping medical conditions.
Critics also point out that AI tools can’t assess whether patients can afford follow-up care or get to that appointment. They have no insight into family dynamics or caretaking needs, things that primary physicians come to understand through long-term personal relationships.
Since her first foray on the app in September, MacDonald has used Care Connect at least three more times. Two of those interactions led to an eventual conversation with a remote doctor, but when she went online to book an appointment for travel-related shots, she interacted only with the AI chatbot before visiting the travel clinic.
MacDonald likes the convenience.
“I don’t have to leave work,” she said. “And I gained some peace of mind, knowing that I have a plan between now and me finding another in-person doctor.”
So while she hunted for that person, MacDonald planned to stay with Care Connect.
“This is a logical solution in the short term,” MacDonald said. “At the end of the day, it’s the patient who’s feeling the aftermath of all of the bigger things going on in health care.”
Scarcity and Burnout
Many factors contribute to the shortage. Primary care doctors, such as pediatricians, internists, and family medicine physicians, are often dissatisfied with their pay. They earn less, on average, than specialists such as surgeons, cardiologists, and anesthesiologists.
At the same time, their workload has been increasing. Primary care doctors describe days packed with complex patient visits, followed by evenings spent updating medical records and responding to patient messages.
When MacDonald signed onto Care Connect, she was one of 15,000 patients in the Mass General Brigham system without a primary care provider. That number has grown as primary care doctors have left MGB for rival hospital networks.
Rao, a primary care physician at an MGB health center in Chelsea, Massachusetts, said she's staying at MGB for now, but she's grown frustrated with the system's leaders.
“They don’t make any effort to ease the shortage,” said Rao, who is also part of an effort to unionize MGB’s primary care doctors. “They put their money into specialties. Primary care feels like a peripheral part of the system, when it really should be a central part.”
Last year, MGB pledged to spend $400 million over five years on primary care services — though that includes the multiyear contract with Care Connect.
“Care Connect is just one solution among many in this broader strategy to alleviate the primary care capacity crisis,” Walls, MGB’s chief operating officer, said in an emailed statement. “Our investment supports retaining our current physicians as well as recruiting new ones.”
Walls said MGB has increased staffing support for primary care physicians, implemented other AI tools, and hired a new executive for primary care. Some of these changes are based on recommendations from the system’s own primary care doctors.
But some of those doctors say they would like other changes, particularly salary increases.
Walls would not disclose the exact amount MGB is spending on Care Connect.
Bridge to Better Care or a ‘Band-Aid’?
MGB has rolled out other AI tools, including one that can transcribe a doctor’s in-person conversations with patients. Rao isn’t using that tool. She worries that patient information could be leaked and medical privacy violated, and she doesn’t want her conversations with patients to be used to help develop the next generation of AI medical tools.
“What if they’re just using my interactions with patients to train their AI and boot me out of my job?” she said.
That’s not the goal, said Ireland, a primary care physician who manages the program for MGB. All decisions about patient care are still made by real doctors, she said.
“We are not replacing our in-person primary care,” she said. “It’s still important, and the majority of patients still have in-person primary care.”
But the fear among some primary care doctors at MGB is that Care Connect will gradually erode access to in-person primary care visits. Of the $400 million pledged by MGB for primary care, they want less spent on AI and more used to attract and increase pay for primary care staffers.
An MGB internist who is also involved in the unionizing effort said the use of Care Connect can only fill a gap. “That sounds like a band-aid for a broken system to me,” he said.
Expanding AI Tools
As of mid-December, the Care Connect doctors were each seeing 40 to 50 patients a day. By February, the MGB network plans to make Care Connect available to all Massachusetts and New Hampshire residents who have health insurance, and to hire more doctors to staff the program as needed.
Patients can use the program like an urgent care service, Ireland said. They can also decide to make one of the remote doctors their permanent primary care provider.
“Some patients want in-person care,” Ireland said. “But I do believe there’s a subset of patients who will appreciate the 24-hour, seven-day-a-week model and choose to be a part of this.”
Care Connect isn’t for patients who need emergency care or a physical exam, she said. And patients who need tests or imaging are referred to the network’s clinics or labs.
But the remote doctors can manage some of the same routine issues that all primary care doctors do, Ireland said, including moderate respiratory infections, allergies, and chronic conditions such as diabetes, high cholesterol, and depression.
Lin, chief of primary care at the Stanford University School of Medicine and founder of Stanford’s Healthcare AI Applied Research Team, says only immediate, not ongoing, health problems should be on that list.
“In its current state, the safest use of this tool is for more urgent care issues,” Lin said. “Your upper respiratory tract infections. Your urinary tract infections. Your musculoskeletal injuries. Your rashes.”
For patients with multiple chronic conditions such as high blood pressure and diabetes — or for patients with especially serious conditions like heart disease or cancer — Lin said nothing beats a human who sees you regularly.
Still, Lin agrees that the chat summary generated after an AI encounter can help a physician be more efficient. For patients, Lin understands the practical appeal of a virtual option.
“I would rather these patients get care, if that care can be safe,” he said, “than not get care at all.”
The company that developed the AI platform for Care Connect, K Health, contends the program is delivering safe, effective care to patients with complex, chronic ailments, many of whom have no other option besides a hospital emergency room.
“America’s got a big problem with health care, issues with cost, quality, and access,” the company’s CEO said. “To solve it, you need to start with primary care, and you have to use technology and AI.”
In addition to Mass General Brigham, K Health partners with five other health networks, including the highly ranked, Los Angeles-based Cedars-Sinai.
In a study funded by K Health, Cedars-Sinai researchers compared several hundred diagnosis and treatment recommendations made by AI with those made by physicians.
The researchers found the AI to be slightly better at identifying “critical red flags” and recommending care based on clinical guidelines, though the physicians were better at adjusting their treatment recommendations as they spoke more with the patient.
This article is from a partnership that includes , , and ºÚÁϳԹÏÍø News.
ºÚÁϳԹÏÍø News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF, an independent source of health policy research, polling, and journalism. This <a target="_blank" href="/news/ai-primary-care-doctors-shortages-massachusetts-mass-general-brigham/">article</a> first appeared on KFF Health News and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.