If you or someone you know may be experiencing a mental health crisis, contact the 988 Suicide & Crisis Lifeline by dialing or texting “988.”
Vince Lahey of Carefree, Arizona, embraces chatbots. From Big Tech products to “shady” ones, they offer “someone that I could share more secrets with than my therapist.”
He especially likes the apps for feedback and support, even though sometimes they berate him or lead him to fight with his ex-wife. “I feel more inclined to share more,” Lahey said. “I don’t care about their perception of me.”
There are a lot of people like Lahey.
Demand for mental health care has grown. Self-reported poor mental health days have risen by 25% since the 1990s, according to researchers analyzing survey data. And according to the Centers for Disease Control and Prevention, suicide rates in 2022 reached levels that hadn’t been seen in nearly 80 years.
Many patients find a nonhuman therapist, powered by artificial intelligence, highly appealing — more appealing than a human with a reclining couch and a stern manner. Users describe wanting a therapist who’s “not on the clock,” who’s less judgmental, or who’s just less expensive.
Most people who need care don’t get it, said Tom Insel, former head of the National Institute of Mental Health, citing his former agency’s research. Of those who do, 40% receive “minimally acceptable care.”
“There’s a massive need for high-quality therapy,” he said. “We’re in a world in which the status quo is really crappy, to use a scientific term.”
Insel said engineers from OpenAI told him last fall that about 5% to 10% of the company’s then-roughly 800 million-strong user base rely on ChatGPT for mental health support.
Polling suggests these AI chatbots may be even more popular among young adults. A KFF poll found about 3 in 10 respondents ages 18 to 29 had used an AI chatbot for mental or emotional health advice in the past year. Uninsured adults were about twice as likely as insured adults to report using AI tools. And nearly 60% of adult respondents who used a chatbot for mental health didn’t follow up with a flesh-and-blood professional.
The App Will Put You on the Couch
A burgeoning industry of apps offers AI therapists with human-like, often unrealistically attractive avatars serving as a sounding board for those experiencing anxiety, depression, and other conditions.
ºÚÁϳԹÏÍø News identified some 45 AI therapy apps in Apple’s App Store in March. While many charge steep prices for their services — one listed an annual plan for $690 — they’re still generally cheaper than talk therapy, which can cost hundreds of dollars an hour without insurance coverage.
On the App Store, “therapy” is often used as a marketing term, with small print noting the apps cannot diagnose or treat disease. One app, branded as OhSofia! AI Therapy Chat, had downloads in the six figures, said OhSofia! founder Anton Ilin in December.
“People are looking for therapy,” Ilin said. On one hand, the product’s name promises it; on the other, the app warns that it “does not provide medical advice, diagnosis, treatment, or crisis intervention and is not a substitute for professional healthcare services.” Executives don’t think that’s confusing, since there are disclaimers in the app.
The apps promise big results without backup. One promises its users “immediate help during panic attacks.” Another says it was “proven effective by researchers” and that it offers 2.3 times faster relief for anxiety and stress. (It doesn’t say what it’s faster than.)
There are few legislative or regulatory guardrails around how developers refer to their products — or even whether the products are safe or effective, said Vaile Wright, senior director of the office of health care innovation at the American Psychological Association. Even federal patient privacy protections don’t apply, she said.
“Therapy is not a legally protected term,” Wright said. “So, basically, anybody can say that they give therapy.”
Many of the apps “overrepresent themselves,” said John Torous, a psychiatrist and clinical informaticist at Beth Israel Deaconess Medical Center. “Deceiving people that they have received treatment when they really have not has many negative consequences,” including delaying actual care, he said.
States such as Nevada, Illinois, and California are trying to sort out the regulatory disarray, enacting laws forbidding apps from describing their chatbots as AI therapists.
“It’s a profession. People go to school. They get licensed to do it,” said Jovan Jackson, a Nevada legislator, who co-authored an enacted bill banning apps from referring to themselves as mental health professionals.
Beneath the hype, outside researchers and company representatives themselves have told the FDA and Congress that there’s little evidence supporting the efficacy of these products. The studies that do exist are limited, and some have found that companion-focused chatbots are “consistently poor” at managing crises.
“When it comes to chatbots, we don’t have any good evidence it works,” said Charlotte Blease, a professor at Sweden’s Uppsala University who specializes in trial design for digital health products.
The lack of “good quality” clinical trials stems from the FDA’s failure to provide recommendations about how to test the products, she said. “FDA is offering no rigorous advice on what the standards should be.”
Department of Health and Human Services spokesperson Emily Hilliard said, in response, that “patient safety is the FDA’s highest priority” and that AI-based products are subject to agency regulations requiring the demonstration of “reasonable assurance of safety and effectiveness before they can be marketed in the U.S.”
The Silver-Tongued Apps
Preston Roche, a psychiatry resident, gets lots of questions about whether AI is a good therapist. After trying ChatGPT himself, he said he was initially “impressed” that it was able to use techniques to help him put negative thoughts “on trial.”
But Roche said after seeing posts on social media discussing people developing psychosis or being encouraged to make harmful decisions, he became disillusioned. The bots, he concluded, are sycophantic.
“When I look globally at the responsibilities of a therapist, it just completely fell on its face,” he said.
This sycophancy — the tendency of apps based on large language models to empathize, flatter, or delude their human conversation partner — is inherent to the app design, experts in digital health say.
“The models were developed to answer a question or prompt that you ask and to give you what you’re looking for,” said Insel, the former NIMH director, “and they’re really good at basically affirming what you feel and providing psychological support, like a good friend.”
That’s not what a good therapist does, though. “The point of psychotherapy is mostly to make you address the things that you have been avoiding,” he said.
While polling suggests many users are satisfied with what they’re getting out of ChatGPT and other apps, there have been reports of troubling interactions with the service, including encouragement to self-harm.
And lawsuits have been filed against OpenAI after ChatGPT users died by suicide or were hospitalized. In most of those cases, the plaintiffs allege they began using the apps for one purpose — like schoolwork — before confiding in them. These cases are being litigated.
Google and the startup Character.ai — which has been funded by Google and has created “avatars” that adopt specific personas, like athletes, celebrities, study buddies, or therapists — are settling other wrongful-death lawsuits.
OpenAI’s CEO, Sam Altman, has said a significant number of users may talk about suicide on ChatGPT.
“We have seen a problem where people that are in fragile psychiatric situations using a model like 4o can get into a worse one,” Altman said in a public question-and-answer session, referring to a particular model of ChatGPT introduced in 2024. “I don’t think this is the last time we’ll face challenges like this with a model.”
An OpenAI spokesperson did not respond to requests for comment.
The company has said it has worked on safeguards, such as referring users to 988, the national suicide hotline. However, the lawsuits against OpenAI argue existing safeguards aren’t good enough, and some research shows the problems persist. OpenAI has pointed to its own data suggesting the opposite.
OpenAI is contesting the lawsuits, offering, early in one case, a variety of defenses ranging from denying that its product caused self-harm to alleging that the user misused the product by inducing it to discuss suicide. It has also said it’s working to strengthen its safeguards.
Smaller apps also rely on OpenAI or other AI models to power their products, executives told ºÚÁϳԹÏÍø News. In interviews, startup founders and other experts said they worry that if a company simply imports those models into its own service, it might duplicate whatever safety flaws exist in the original product.
Data Risks
ºÚÁϳԹÏÍø News’ review of the App Store found listed age protections are minimal: Fifteen of the nearly four dozen apps say they could be downloaded by 4-year-old users; an additional 11 say they could be downloaded by those 12 and up.
Privacy standards are opaque. On the App Store, several apps are described as neither tracking personally identifiable data nor sharing it with advertisers — but on their company websites, privacy policies contained contrary descriptions, discussing the use of such data and their disclosure of information to advertisers, like AdMob.
In response to a request for comment, Apple spokesperson Adam Dema pointed to the company’s App Store policies, which bar apps from using health data for advertising and require them to display information about how they use data in general. Dema did not respond to a request for further comment about how Apple enforces these policies.
Researchers and policy advocates said that sharing psychiatric data with social media firms means patients could be profiled. They could be targeted by dodgy treatment firms or charged different prices for goods based on their health.
ºÚÁϳԹÏÍø News contacted several app makers about these discrepancies; two that responded said their privacy policies had been put together in error and pledged to change them to reflect their stances against advertising. (A third, the team at OhSofia!, said simply that they don’t do advertising, though their app’s privacy policy notes users “may opt out of marketing communications.”)
One executive told ºÚÁϳԹÏÍø News there’s business pressure to maintain access to the data.
“My general feeling is a subscription model is much, much better than any sort of advertising,” said Tim Rubin, the founder of Wellness AI, adding that he’d change the description in his app’s privacy policy.
One investor advised him not to swear off advertising, he said. “They’re like, essentially, that’s the most valuable thing about having an app like this, that data.”
“I think we’re still at the beginning of what’s going to be a revolution in how people seek psychological support and, even in some cases, therapy,” Insel said. “And my concern is that there’s just no framework for any of this.”
ºÚÁϳԹÏÍø News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. This article first appeared on KFF Health News and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
This year, executives from nearly every major health insurance company made the same declaration in calls with Wall Street analysts: Using artificial intelligence to make coverage decisions would help save them money.
Even the Trump administration is embracing AI in managing the prior authorization process for the Medicare program, as well as seeking to override AI regulation by states.
But class action lawsuits have accused insurers of using AI to wrongfully withhold treatment. And a recent study outlines the risks of training AI on a current system rife with wrongful denials.
“There is a world in which using AI could make that worse, or at least replicate a bad human system, because the data that it would be training on is from that bad human system,” said Michelle Mello, a co-author of the study.
Although, Mello said, the research team found “real positives alongside the risks.”
In this video produced by ºÚÁϳԹÏÍø News’ Hannah Norman, Darius Tahir, a correspondent covering health technology, explains.
You can read Tahir’s recent coverage of AI’s use by health insurers below:
It’s a bold-sounding promise, and a familiar one; politicians from both parties have been repeating it for years now. Both Trump administrations — and the Biden administration in between — have taken whacks at making medical prices more accessible, with the goal of empowering patients to shop for better deals.
The idea makes intuitive sense. Why shouldn’t you be able to compare the prices of MRI scans, for instance?
The feds have made some strides. Prices are available, albeit in confusing or fragmentary form. But there’s one big problem: “There’s no evidence that patients use this information,” said Zack Cooper, a health economist at Yale University.
Health care is an inherently complicated marketplace. For one thing, it’s not as simple as one price for one medical stay. Two babies might be delivered by the same obstetrician, for example, but the mothers could be charged very different amounts. One patient might be given medications to speed up contractions; another might not. Or one might need an emergency cesarean section — one of many cases in medicine in which obtaining the service simply isn’t a choice. Plus, the same hospital typically has different contract terms with each insurer, making comparing prices even more difficult for patients.
Instead of helping consumers sort things out, this federally mandated price data largely has become a tool for providers and insurers, looking for intel about their competitors — so they can use it at the negotiating table in a quest for more advantageous rates.
“We use the transparency data,” said Eric Hoag, an executive at Blue Cross Blue Shield of Minnesota, noting that the insurer wants to make sure health care providers aren’t being paid substantially different rates. It’s “to make sure that we are competitive, or, you know, more than competitive against other health plans.”
For all those tugs-of-war, it’s not clear these policies have had much of an effect overall. Research shows that transparency policies can have mixed effects on prices, with one study of a New York initiative finding a marginal increase in billed charges.
Price isn’t the only piece of information negotiations hinge on. Hoag said Blue Cross Blue Shield of Minnesota also considers quality of care, rates of unnecessary treatments, and other factors. And sometimes negotiators feel as if they have to keep up with their peers — claiming a need for more revenue to match competitors’ salaries, for example.
Hoag said doctors and other care providers often look at the data from comparable health systems and say, “‘I need to be paid more.’”
Regulating artificial intelligence, especially its use by health insurers, is becoming a politically divisive topic, and it’s scrambling traditional partisan lines.
Boosters, led by Trump, are not only pushing its integration into government, as in prior authorization, but also trying to stop others from building curbs and guardrails. A December executive order seeks to preempt most state efforts to govern AI, describing “a race with adversaries for supremacy” in a new “technological revolution.”
“To win, United States AI companies must be free to innovate without cumbersome regulation,” Trump’s order said. “But excessive State regulation thwarts this imperative.”
Across the nation, states are in revolt. At least four — Arizona, Maryland, Nebraska, and Texas — enacted legislation last year reining in the use of AI in health insurance. Two others, Illinois and California, enacted bills the year before.
Legislators in Rhode Island plan to try again this year after a bill requiring regulators to collect data on technology use failed to clear both chambers last year. A bill in North Carolina requiring insurers not to use AI as the sole basis of a coverage decision attracted significant interest from Republican legislators last year.
Florida Gov. Ron DeSantis, a former GOP presidential candidate, has rolled out an “AI Bill of Rights,” which includes restrictions on AI’s use in processing insurance claims and a requirement allowing a state regulatory body to inspect algorithms.
“We have a responsibility to ensure that new technologies develop in ways that are moral and ethical, in ways that reinforce our American values, not in ways that erode them,” DeSantis said during his State of the State address in January.
Ripe for Regulation
Polling shows Americans are skeptical of AI. A poll from Fox News found 63% of voters describe themselves as “very” or “extremely” concerned about artificial intelligence, including majorities across the political spectrum. Nearly two-thirds of Democrats and just over 3 in 5 Republicans said they had qualms about AI.
Health insurers’ tactics to hold down costs also trouble the public; polling from KFF found widespread discontent over issues like prior authorization. (KFF is a health information nonprofit that includes ºÚÁϳԹÏÍø News.) Reporting and litigation in recent years have highlighted the use of algorithms to rapidly deny insurance claims or prior authorization requests, apparently with little review by a doctor.
Last month, the House Ways and Means Committee hauled in executives from Cigna, UnitedHealth Group, and other major health insurers to address concerns about affordability. When pressed, the executives either denied or avoided talking about using the most advanced technology to reject authorization requests or toss out claims.
AI is “never used for a denial,” Cigna CEO David Cordani told lawmakers. Like others in the health insurance industry, the company is being sued for its methods of denying claims, as spotlighted by ProPublica. Cigna spokesperson Justine Sessions said the company’s claims-denial process “is not powered by AI.”
Indeed, companies are at pains to frame AI as a loyal servant. Optum, part of health giant UnitedHealth Group, announced Feb. 4 that it was rolling out tech-powered prior authorization, with plenty of mentions of speedier approvals.
“We’re transforming the prior authorization process to address the friction it causes,” said John Kontor, a senior vice president at Optum.
Still, Alex Bores, a computer scientist and New York Assembly member prominent in the state’s legislative debate over AI, which culminated in a comprehensive bill governing the technology, said AI is a natural field to regulate.
“So many people already find the answers that they’re getting from their insurance companies to be inscrutable,” said Bores, a Democrat who is running for Congress. “Adding in a layer that cannot by its nature explain itself doesn’t seem like it’ll be helpful there.”
At least some people in medicine — doctors, for example — are cheering legislators and regulators on. The American Medical Association “supports state regulations seeking greater accountability and transparency from commercial health insurers that use AI and machine learning tools to review prior authorization requests,” said John Whyte, the organization’s CEO.
Whyte said insurers already use AI and “doctors still face delayed patient care, opaque insurer decisions, inconsistent authorization rules, and crushing administrative work.”
Insurers Push Back
With legislation approved or pending in at least nine states, it’s unclear how much of an effect the state laws will have, said University of Minnesota law professor Daniel Schwarcz. States can’t regulate “self-insured” plans, which are used by many employers; only the federal government has that power.
But there are deeper issues, Schwarcz said: Most of the state legislation he’s seen would require a human to sign off on any decision proposed by AI but doesn’t specify what that means.
The laws don’t offer a clear framework for understanding how much review is enough, and over time humans tend to become a little lazy and simply sign off on any suggestions by a computer, he said.
Still, insurers view the spate of bills as a problem. “Broadly speaking, regulatory burden is real,” said Dan Jones, senior vice president for federal affairs at the Alliance of Community Health Plans, a trade group for some nonprofit health insurers. If insurers spend more time working through a patchwork of state and federal laws, he continued, that means “less time that can be spent and invested into what we’re intended to be doing, which is focusing on making sure that patients are getting the right access to care.”
Linda Ujifusa, a Democratic state senator in Rhode Island, said insurers came out last year against the bill she sponsored to restrict AI use in coverage denials. It passed in one chamber, though not the other.
“There’s tremendous opposition” to anything that regulates AI, she said, and “tremendous opposition” to identifying intermediaries such as private insurers or pharmacy benefit managers “as a problem.”
In a statement, AHIP, an insurer trade group, advocated for “balanced policies that promote innovation while protecting patients.”
“Health plans recognize that AI has the potential to drive better health care outcomes — enhancing patient experience, closing gaps in care, accelerating innovation, and reducing administrative burden and costs to improve the focus on patient care,” Chris Bond, an AHIP spokesperson, told ºÚÁϳԹÏÍø News. And, he continued, they need a “consistent, national approach anchored in a comprehensive federal AI policy framework.”
Seeking Balance
In California, Gov. Gavin Newsom has signed some laws regulating AI, including one requiring health insurers to ensure their algorithms are fairly and equitably applied. But the Democratic governor has vetoed others with a broader approach, such as a bill including more mandates about how the technology must work and requirements to disclose its use to regulators, clinicians, and patients upon request.
Chris Micheli, a Sacramento-based lobbyist, said the governor likely wants to ensure the state budget — consistently powered by outsize stock market gains, especially from tech companies — stays flush. That necessitates balance.
Newsom is trying to “ensure that financial spigot continues, and at the same time ensure that there are some protections for California consumers,” he said. He added insurers believe they’re subject to a welter of regulations already.
The Trump administration seems persuaded. The president’s recent executive order proposed to sue and restrict certain federal funding for any state that enacts what it characterized as “excessive” state regulation — with some exceptions, including for policies that protect children.
That order is possibly unconstitutional, said Carmel Shachar, a health policy scholar at Harvard Law School. The source of preemption authority is generally Congress, she said, and federal lawmakers twice took up, but ultimately declined to pass, a provision barring states from regulating AI.
“Based on our previous understanding of federalism and the balance of powers between Congress and the executive, a challenge here would be very likely to succeed,” Shachar said.
Some lawmakers view Trump’s order skeptically at best, noting the administration has been removing guardrails, and preventing others from erecting them, to an extreme degree.
“There isn’t really a question of, should it be federal or should it be state right now?” Bores said. “The question is, should it be state or not at all?”
Do you have an experience navigating prior authorization to get medical treatment that you’d like to share with us for our reporting?
The idea echoes a policy implemented during his first term, when Trump suggested that requiring hospitals to post their charges online could ease one of the most common gripes about the health care system — the lack of upfront prices. To anyone who’s received a bill three months after treatment only to find mysterious charges, the idea seemed intuitive.
“You’re able to go online and compare all of the hospitals and the doctors and the prices,” Trump said in 2019 at an event unveiling the price transparency policy.
But amid low compliance and other struggles in implementing the policy since it took effect in 2021, the available price data is sparse and often confusing. And instead of patients shopping for medical services, it’s mostly health systems and insurers using the little data there is, turning it into fodder for negotiations that determine what medical professionals and facilities get paid for what services.
“We use the transparency data,” said Eric Hoag, an executive at Blue Cross Blue Shield of Minnesota, noting that the insurer wants to make sure providers aren’t being paid substantially different rates. It’s “to make sure that we are competitive, or, you know, more than competitive against other health plans.”
Not all hospitals have fallen in line with the price transparency rules, and many were slow to do so. One analysis conducted in the policy’s first 10 months found only about a third of facilities had complied with the regulations. The federal Centers for Medicare & Medicaid Services warned hospitals from June 2022 to May 2025 that they would be fined for lack of compliance with the rules.
The struggles to make health care prices available have prompted more federal action since Trump’s first effort. President Joe Biden took his own thwack at the dilemma, toughening compliance criteria. And in early 2025, working to fulfill his promises to lower health costs, Trump tried again, signing a new executive order urging his administration to fine hospitals and doctors for failing to post their prices. CMS followed up with a regulation intended to increase the fines and the level of detail required within the pricing data.
So far, “there’s no evidence that patients use this information,” said Zack Cooper, a health economist at Yale University.
In 2021, Cooper co-authored based on data from a large commercial insurer. The researchers found that, on average, patients who need an MRI pass six lower-priced imaging providers on the way from their homes to an appointment for a scan. That’s because they follow their physician’s advice about where to receive care, the study showed.
Executives and researchers interviewed by ºÚÁϳԹÏÍø News also didn’t think opening the data would change prices in a big way. Research shows that transparency policies can have mixed effects on prices, with one study of a New York initiative finding a marginal increase in billed charges.
The policy results thus far seem to put a damper on long-held hopes, particularly from the GOP, that providing more price transparency would incentivize patients to find the best deal on their imaging or knee replacements.
These aspirations have been unfulfilled for a few reasons, researchers and industry insiders say. Some patients simply don’t compare services. But unlike with apples — a Honeycrisp and a Red Delicious are easy to line up side by side — medical services are hard to compare.
For one thing, it’s not as simple as one price for one medical stay. Two babies might be delivered by the same obstetrician, for example, but the mothers could be charged very different amounts. One patient might be given medications to speed up contractions; another might not. Or one might need an emergency cesarean section — one of many cases in medicine in which obtaining the service simply isn’t a choice.
And the data often is presented in a way that’s not useful for patients, sometimes buried in spreadsheets and requiring a deep knowledge of billing codes. In computing these costs, hospitals make “detailed assumptions about how to apply complex contracting terms and assess historic data to create a reasonable value for an expected allowed amount,” the American Hospital Association said in July 2025 amid efforts to boost transparency.
Costs vary because hospitals’ contracts with insurers vary, said Jamie Cleverley, president of Cleverley and Associates, which works with health care providers to help them understand the financial impacts of changing contract terms. The cost for a patient with one health plan may be very different than the cost for the next patient with another plan.
The fact that hospital prices might be confusing for patients is a consequence of the lack of standardization in contracts and presentation, Cleverley said. “They’re not being nefarious.”
“Until we kind of align as an industry, there’s going to continue to be this variation in terms of how people look at the data and the utility of it,” he said.
Instead of aiding shoppers, the federally mandated data has become the foundation for negotiations between providers and insurers over the proper level of compensation.
The top use for the pricing data for health care providers and payers, such as insurers, is “to use that in their contract negotiations,” said Marcus Dorstel, an executive at price transparency startup Turquoise Health.
Turquoise Health assembles price data by grouping codes for services together using machine learning, a type of artificial intelligence. It is just one example in a cottage industry of startups offering insights into prices. And, online, the startups’ advertisements hawking their wares often focus on hospitals and their periodic jousts with insurers. Turquoise has payers and providers as clients, Dorstel said.
“I think nine times out of 10 you will hear them say that the price transparency data is a vital piece of the contract negotiation now,” he said.
Of course, prices aren’t the only variable that negotiations hinge on. Hoag said Blue Cross Blue Shield of Minnesota also considers quality of care, rates of unnecessary treatments, and other factors. And sometimes negotiators feel as if they have to keep up with their peers — claiming a need for more revenue to match competitors’ salaries, for example.
Hoag said doctors and other providers often look at the data from comparable health systems and say, “‘I need to be paid more.’”
ºÚÁϳԹÏÍø News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF — an independent source of health policy research, polling, and journalism. This article first appeared on KFF Health News and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
“I don’t think people should be taking medical advice from me,” Kennedy told a Democratic congressman in May.
Kennedy once expressed different views — for example, about the need to proselytize about exercise. As he has put it, he wants to use the “bully pulpit” to “obliterate the delicacy” with which Americans discuss fitness and explain that “suffering” is virtuous.
“We need to establish an ethic that you’re not a good parent unless your kids are doing some kind of physical activity,” Kennedy told the podcaster in September 2024.
The Department of Health and Human Services is tasked with communicating information to protect and improve the health and well-being of every American. It provides reminders about vaccinations and screenings; alerts about which food is unsafe; and useful, everyday tips about subjects such as sunscreen and, yes, exercise.
Under Kennedy’s watch, though, HHS has compromised once-fruitful campaigns promoting immunizations and other preventive health measures. On Instagram, the agency often emphasizes Kennedy’s personal causes, his pet projects, or even the secretary himself. Former agency employees say communications have a more political edge, with “Make America Healthy Again” frequently featured in press releases.
Interviews with over 20 former and current agency employees provide a look inside a health department where personality and politics steer what is said to the public. ºÚÁϳԹÏÍø News granted many of these people anonymity because they fear retribution.
One sign of change is what is no longer, or soon will not be, amplified — for instance, acclaimed anti-smoking campaigns making a dent in one of Kennedy’s priorities, chronic disease.
Another sign is what gets celebrated. On the official HHS Instagram account this year, out were posts saluting Juneteenth and Father’s Day. In, under Kennedy, were posts of a different kind.
Commenting on such changes, HHS spokesperson Andrew Nixon said in an email that “DEI is gone, thanks to the Trump administration.”
Some elected officials are pointedly not promoting Kennedy as a source of health care information. Regarding the secretary’s announcement citing unproven links between Tylenol and autism, Senate Majority Leader John Thune told MSNBC that, “if I were a woman, I’d be talking to my doctor and not taking, you know, advice from RFK or any other government bureaucrat, for that matter.” (Thune’s office did not respond to a request for comment.)
Polls taken since January show trust in Kennedy as a medical adviser is low. In one, from The Economist and YouGov, barely over a quarter of respondents said they trusted Kennedy “a lot” or “somewhat.”
The department’s online messaging looks “a lot more like propaganda than it does public health,” said Kevin Griffis, who worked in communications at the CDC under President Joe Biden.
Transition to a New Administration
The new administration inaugurated dramatic changes. Upon arrival, political appointees froze the health agency’s outside communications on a broader scale than in previous changeovers, halting everything from routine webpage updates to meetings with grant recipients. The pause created logistical snafus: For example, one CDC employee described being forced to cancel, and later rebook, advertising campaigns — at greater cost to taxpayers.
Even before the gag order was lifted in the spring, the tone and direction of HHS’ public communications had shifted.
According to data shared by iSpot.tv, a market research firm that tracks television advertising, at least four HHS ads about vaccines ended within two weeks of Trump’s inauguration.
“Flu campaigns were halted,” during a season in which children died from influenza, Deb Houry, who had resigned as the CDC’s chief medical officer, said in a Sept. 17 congressional hearing.
Instead of urging people to get vaccinated, HHS officials contemplated more-ambivalent messaging, said Griffis, then the CDC’s director of communications. According to Griffis, other former agency employees, and communications reviewed by ºÚÁϳԹÏÍø News, Nixon contemplated a campaign that would put more emphasis on vaccine risks. It would “be promoting, quote-unquote, ‘informed choice,’” Griffis said.
Nixon called the claim “categorically false.” Still, the department continues to push anti-vaccine messaging. In November, the CDC updated its website to assert the false claim that vaccines may cause autism.
Messaging related to tobacco control has been pulled back, according to Brian King, an executive at the Campaign for Tobacco-Free Kids, as well as multiple current and former CDC employees. Layoffs, administrative leaves, and funding turmoil have drained offices at the CDC and the FDA focused on educating people about the risks of smoking and vaping, King said.
Four current and former CDC employees told ºÚÁϳԹÏÍø News that “Tips From Former Smokers,” a campaign credited with helping approximately a million people quit smoking, is in danger. Ordinarily, a contract for the next year’s campaign would have been signed by now. But, as of Nov. 21, there was no contractor, the current and former employees said.
Nixon did not respond to a question from ºÚÁϳԹÏÍø News regarding plans for the program.
“We’re currently in an apocalypse for national tobacco education campaigns in this country,” King said.
Kennedy’s HHS has a different focus for its education campaigns, including the “Take Back Your Health” campaign, for which the department issued a solicitation this year to produce “viral” and “edgy” content to urge Americans to exercise.
An earlier version of the campaign’s solicitation asked for partners to boost wearables, such as gadgets that track steps or glucose levels — reflecting a stated goal for every American to be wearing such a device within four years.
The source of funds for the exercise campaign? In the spring, leadership of multiple agencies discussed using funding for the CDC’s Tips From Former Smokers campaign, employees from those agencies said. By the fall, the smoking program hadn’t spent all its funds, the current and former CDC employees said.
Nixon did not respond to questions about the source of funding for the exercise campaign.
Food Fight
At the FDA, former employees said they noticed new types of political interference as Trump officials took the reins, sometimes making subtle tweaks to public communications, sometimes changing wholesale what messages went out. The interventions into messaging — what was said, but also what went unsaid — proved problematic, they said.
Early this year, multiple employees told ºÚÁϳԹÏÍø News, Nixon gave agency employees a quick deadline to gather a list of all policy initiatives underway on infant formula. That was then branded “Operation Stork Speed,” as if it were a new push by a new administration.
Marianna Naum, a former acting director of external communications and consumer education at the FDA, said she supports parts of the Trump administration’s agenda. But she said she disagreed with how it handled Operation Stork Speed. “It felt like they were trying to put out information so they can say: ‘Look at the great work. Look how fast we did it,’” she said.
Nixon called the account “false” without elaborating. ºÚÁϳԹÏÍø News spoke with three other employees with the same recollections of the origins of Operation Stork Speed.
“Things that didn’t fit within their agenda, they were downplayed,” Naum said.
For example, she said, Trump political appointees resisted a proposed press release noting agency approval of cell-cultured pork — that is, pork grown in a lab. Similar products have raised the ire of ranchers and farmers working in typically GOP-friendly industries. States such as Florida have banned them.
The agency ultimately issued the release. But a review of the agency’s archives showed it hasn’t put out press releases about two later approvals of cell-cultured meat.
Wide-ranging layoffs have also hit the FDA’s food office hard, leaving fewer people to make sure news gets distributed properly and promptly. Former employees say notices about recalled foods aren’t circulated as widely as they used to be, meaning fewer eyeballs on alerts about contaminated foods.
Nixon did not respond to questions about changes in food recalls. Overall, Nixon answered nine of 53 questions posed by ºÚÁϳԹÏÍø News.
Pushing Politics
Televised HHS public service campaigns earned nearly 7.3 billion fewer impressions in the first half of 2025 versus the same period in 2022, according to iSpot data, with the drop being concentrated in pro-vaccine messaging. Other types of ads, such as those covering substance use and mental health, also fell. Data from the marketing intelligence firm Sensor Tower shows similar drops in HHS ad spending online.
With many of the longtime professionals laid off and new political appointees in place atop the hierarchy, a new communications strategy — bearing the hallmarks of Kennedy’s personality — is being built, said the current and former HHS employees, plus public health officials interviewed by ºÚÁϳԹÏÍø News.
Whereas in 2024, the agency would mostly post public health resources such as the 988 suicide hotline on its Instagram page, its feed in 2025 features more of the health secretary himself. Through the end of August, according to a ºÚÁϳԹÏÍø News review, 77 of its 101 posts featured Kennedy — often fishing, biking, or doing pullups, as well as pitching his policies.
By contrast, only 146 of the agency’s 754 posts last year, or about 20%, featured Xavier Becerra, Kennedy’s predecessor.
In 2024, on Instagram, the agency promoted Medicare and individual insurance open enrollment; in 2025, the agency has not.
In 2024, the agency’s Instagram feed included some politicking as Biden ran for reelection, but the posts were less frequent and often indirect — for instance, touting a policy enacted under Biden’s signature legislation, the Inflation Reduction Act, but without mentioning the name of the bill or its connection to the president.
In 2025, sloganeering is a frequent feature of the agency’s Kennedy-era Instagram. Through the end of August, “Make America Healthy Again” or variants of the catchphrase featured in at least 48% of posts.
Amid the layoffs, the agency made a notable addition to its team. It hired a state legislative spokesperson as a “rapid response” coordinator, a role that employees from previous administrations could not recall existing before at HHS.
“Like other Trump administration agencies, HHS is continuously rebutting fake news for the benefit of the public,” Nixon said when asked about the role.
On the day Houry and Susan Monarez, the CDC leader ousted in late August, testified before senators about Kennedy’s leadership, the agency’s X feed posted clips belittling the former officials. The department also derisively rebuts unfavorable news coverage.
“It’s very interesting to watch the memeification of the United States and critical global health infrastructure,” said McKenzie Wilson, an HHS spokesperson under Biden. “The entire purpose of this agency is to inform the public about safety, emergencies as they happen.”
‘Clear, Powerful Messages From Bobby’
Kennedy’s strategy report, released in September, proposes public awareness campaigns on subjects such as illegal vaping and fluoride levels in water, while reassuring Americans that the regulatory system for pesticides is “robust.”
Those priorities reflect — and are amplified by — cadres of activists outside government. Since the summer, HHS officials have appeared on Zoom calls with aligned advocacy groups, trying to drum up support for Kennedy’s agenda.
On one call — on which, according to host Tony Lyons, activists “representing over 250 million followers on social media” were registered — famous names such as motivational speaker Tony Robbins gave pep talks about how to influence elected officials and the public.
“Each week, you’re gonna get clear, powerful messages from Bobby, from HHS, from their team,” Robbins said. “And your mission is to amplify it, to make it your own, to speak from your soul, to be bold, to be relentless, to be loving, to be loud, you know, because this is how we make the change.”
The communications strategy captivates the public, but it also confuses it.
Anne Zink, formerly the chief medical officer for Alaska, said she thought Kennedy’s messaging was some of the catchiest of any HHS secretary.
But, she said, in her work as an emergency physician, she’s seen the consequences of his health department’s policies on her puzzled patients. Patients question vaccines. Children show up with gastrointestinal symptoms Zink says she suspects are related to raw milk consumption.
“I increasingly see people say, ‘I just don’t know what to trust, because I just hear all sorts of things out there,’” she said.
The pilot program, designed to weed out wasteful, “low-value” services, amounts to a federal expansion of an unpopular process called prior authorization, which requires patients or someone on their medical team to seek insurance approval before proceeding with certain procedures, tests, and prescriptions. It will affect Medicare patients, and the doctors and hospitals who care for them, in Arizona, Ohio, Oklahoma, New Jersey, Texas, and Washington, starting Jan. 1 and running through 2031.
The move has raised eyebrows among politicians and policy experts. The traditional version of Medicare, which covers adults 65 and older and some people with disabilities, has mostly eschewed prior authorization. Still, it is widely used by private insurers, especially in the Medicare Advantage market.
And the timing was surprising: The pilot was announced just days after the Trump administration unveiled a voluntary effort by private health insurers to revamp and reduce their own use of prior authorization, which causes care to be “significantly delayed,” said Mehmet Oz, administrator of the Centers for Medicare & Medicaid Services.
“It erodes public trust in the health care system,” Oz told the media. “It’s something that we can’t tolerate in this administration.”
But some critics, like Vinay Rathi, an Ohio State University doctor and policy researcher, have accused the Trump administration of sending mixed messages.
On one hand, the federal government wants to borrow cost-cutting measures used by private insurance, he said. “On the other, it slaps them on the wrist.”
Administration officials are “talking out of both sides of their mouth,” said Rep. Suzan DelBene, a Washington Democrat. “It’s hugely concerning.”
Patients, doctors, and other lawmakers have also been critical of what they see as delay-or-deny tactics, which can slow down or block access to care, causing irreparable harm and even death.
“Insurance companies have put it in their mantra that they will take patients’ money and then do their damnedest to deny giving it to the people who deliver care,” said Rep. Greg Murphy, a North Carolina Republican and a urologist. “That goes on in every insurance company boardroom.”
Insurers have long argued that prior authorization reduces fraud and wasteful spending, as well as prevents potential harm. Public displeasure with insurance denials dominated the news in December, when the shooting death of UnitedHealthcare’s CEO led many to anoint his alleged killer as a folk hero.
And the public broadly dislikes the practice: Nearly three-quarters of respondents thought prior authorization was a “major” problem in a poll by KFF, a health information nonprofit that includes ºÚÁϳԹÏÍø News.
Indeed, Oz said during his June press conference that “violence in the streets” prompted the Trump administration to take on the issue of prior authorization reform in the private insurance industry.
Still, the administration is expanding the use of prior authorization in Medicare. CMS spokesperson Alexx Pons said both initiatives “serve the same goal of protecting patients and Medicare dollars.”
Unanswered Questions
The pilot, WISeR — short for “Wasteful and Inappropriate Service Reduction” — will test the use of an AI algorithm in making prior authorization decisions for some Medicare services, including skin and tissue substitutes, electrical nerve stimulator implants, and knee arthroscopy.
The federal government says such procedures are particularly vulnerable to “fraud, waste, and abuse” and could be held in check by prior authorization.
Other procedures may be added to the list. But services that are inpatient-only, emergency, or “would pose a substantial risk to patients if significantly delayed” would not be subject to the AI model’s assessment, according to the federal announcement.
While the use of AI in health insurance isn’t new, Medicare has been slow to adopt the private-sector tools. Medicare has historically used prior authorization in a limited way, with contractors who aren’t incentivized to deny services. But experts who have studied the plan believe the federal pilot could change that.
Pons told ºÚÁϳԹÏÍø News that no Medicare request will be denied before being reviewed by a “qualified human clinician,” and that vendors “are prohibited from compensation arrangements tied to denial rates.” While the government says vendors will be rewarded for savings, Pons said multiple safeguards will “remove any incentive to deny medically appropriate care.”
“Shared savings arrangements mean that vendors financially benefit when less care is delivered,” a structure that can create a powerful incentive for companies to deny medically necessary care, said Jennifer Brackeen, senior director of government affairs for the Washington State Hospital Association.
And doctors and policy experts say that’s only one concern.
Rathi said the plan “is not fully fleshed out” and relies on “messy and subjective” measures. The model, he said, ultimately depends on contractors to assess their own results, a choice that makes the results potentially suspect.
“I’m not sure they know, even, how they’re going to figure out whether this is helping or hurting patients,” he said.
Pons said the use of AI in the Medicare pilot will be “subject to strict oversight to ensure transparency, accountability, and alignment with Medicare rules and patient protection.”
“CMS remains committed to ensuring that automated tools support, not replace, clinically sound decision-making,” he said.
Experts agree that AI is theoretically capable of expediting what has been a cumbersome process marked by delays and denials that can harm patients’ health. Health insurers have argued that AI eliminates human error and bias and will save the health care system money. These companies have also insisted that humans, not computers, are ultimately reviewing coverage decisions.
But some scholars are doubtful that’s routinely happening.
“I think that there’s also probably a little bit of ambiguity over what constitutes ‘meaningful human review,’” said Amy Killelea, an assistant research professor at the Center on Health Insurance Reforms at Georgetown University.
A 2023 ProPublica investigation found that, over a two-month period, doctors at Cigna who reviewed requests for payment spent an average of only 1.2 seconds on each case.
Cigna spokesperson Justine Sessions told ºÚÁϳԹÏÍø News that the company does not use AI to deny care or claims. The ProPublica investigation referenced a “simple software-driven process that helped accelerate payments to clinicians for common, relatively low-cost tests and treatments, and it is not powered by AI,” Sessions said. “It was not used for prior authorizations.”
And yet class-action lawsuits filed against major health insurers have alleged that flawed AI models undermine doctor recommendations and fail to take patients’ unique needs into account, forcing some people to shoulder the financial burden of their care.
Meanwhile, a survey of physicians released by the American Medical Association in February found that 61% think AI is “increasing prior authorization denials, exacerbating avoidable patient harms and escalating unnecessary waste now and into the future.”
Chris Bond, a spokesperson for the insurers’ trade group AHIP, told ºÚÁϳԹÏÍø News that the organization is “zeroed in” on implementing the commitments made to the government. Those include reducing the scope of prior authorization and making sure that communications with patients about denials and appeals are easy to understand.
‘This Is a Pilot’
The Medicare pilot program underscores ongoing concerns about prior authorization and raises new ones.
While private health insurers have been opaque about how they use AI and the extent to which they use prior authorization, policy researchers believe these algorithms are often programmed to automatically deny high-cost care.
“The more expensive it is, the more likely it is to be denied,” said Jennifer Oliva, a professor at the Maurer School of Law at Indiana University-Bloomington, whose work focuses on AI regulation and health coverage.
Oliva explained in a recent article that when a patient is expected to die within a few years, health insurers are “motivated to rely on the algorithm.” As time passes and the patient or their provider is forced to appeal a denial, the chance of the patient dying during that process increases. The longer an appeal, the less likely the health insurer is to pay the claim, Oliva said.
“The No. 1 thing to do is make it very, very difficult for people to get high-cost services,” she said.
As the use of AI by health insurers is poised to grow, insurance company algorithms amount to a “regulatory blind spot” and demand more scrutiny, said Carmel Shachar, a faculty director at Harvard Law School’s Center for Health Law and Policy Innovation.
The WISeR pilot is “an interesting step” toward using AI to ensure that Medicare dollars are purchasing high-quality health care, she said. But the lack of details makes it difficult to determine whether it will work.
Politicians are grappling with some of the same questions.
“How is this being tested in the first place? How are you going to make sure that it is working and not denying care or producing higher rates of care denial?” asked DelBene, who wrote to Oz with other Democrats demanding answers about the AI program. But Democrats aren’t the only ones worried.
Murphy, who co-chairs the House GOP Doctors Caucus, acknowledged that many physicians are concerned the WISeR pilot could overreach into their practice of medicine if the AI algorithm denies doctor-recommended care.
Meanwhile, House members of both parties recently supported an amendment offered by a Florida Democrat to block funding for the pilot in the fiscal 2026 budget of the Department of Health and Human Services.
AI in health care is here to stay, Murphy said, but it remains to be seen whether the WISeR pilot will save Medicare money or contribute to the problems already posed by prior authorization.
“This is a pilot, and I’m open to see what’s going to happen with this,” Murphy said, “but I will always, always err on the side that doctors know what’s best for their patients.”
McGing, calling on behalf of his son, had an in-the-weeds question: how to prevent overpayments that the federal government might later claw back. His call was intercepted by an artificial intelligence-powered chatbot.
No matter what he said, the bot parroted canned answers to generic questions, not McGing’s obscure query. “If you do a key press, it didn’t do anything,” he said. Eventually, the bot “glitched or whatever” and got him to an agent.
It was a small but revealing incident. Unbeknownst to McGing, a former Social Security employee in Maryland, he had encountered a technological tool recently introduced by the agency. Former officials and longtime observers of the agency say the Trump administration rolled out a product that was tested but deemed not yet ready during the Biden administration.
“With the new administration, they’re just kind of like, let’s go fast and fix it later, which I don’t agree with, because you are going to generate a lot of confusion,” said Marcela Escobar-Alava, who served as Social Security’s chief information officer under President Joe Biden.
Some 74 million people receive Social Security benefits; 11 million of those receive disability payments. In a recent survey, more than a third of recipients said they wouldn’t be able to afford such necessities as food, clothing, or housing without it. And yet the agency has been shedding the employees who serve them: Some 6,200 have left the agency, its commissioner has said, and critics in Congress and elsewhere say that’s led to worse customer service, despite the agency’s efforts to build up new technology.
Take the new phone bot. At least some beneficiaries don’t like it: Social Security’s social media presence is, from time to time, pockmarked with negative reviews of the uncooperative bot, even as the agency said in July that a growing share of calls are handled by the bot.
Lawmakers and former agency employees worry it foreshadows a less human Social Security, in which rushed-out AI takes the place of pushed-out, experienced employees.
Anxieties Across Party Lines
Concern over the direction of the agency is bipartisan. In May, a group of House Republicans wrote a letter expressing support for government efficiency, but cautioning that their constituents had criticized the agency for “inadequate customer service” and suggesting that some measures may be “overly burdensome.”
The agency’s commissioner, Frank Bisignano, a former Wall Street executive, is a tech enthusiast. He has a laundry list of initiatives on which to spend the $600 million in new tech money in the Trump administration’s fiscal 2026 budget request. He’s gotten testy when asked whether his plans mean he’ll be replacing human staff with AI.
“You referred to SSA being on an all-time staffing low; it’s also at an all-time technological high,” he snapped at one Democrat in a House hearing in late June.
But former Social Security officials are more ambivalent. In interviews with ºÚÁϳԹÏÍø News, people who left the agency — some speaking on the condition of anonymity for fear of retribution from the Trump administration and its supporters — said they believe the new administration simply rushed out technologies developed, but deemed not yet ready, by the Biden administration. They also said the agency’s firing of thousands of employees resulted in the loss of experienced technologists who are best equipped to roll out these initiatives and address their weaknesses.
“Social Security’s new AI phone tool is making it even harder for people to get help over the phone — and near impossible if someone needs an American Sign Language interpreter or translator,” Sen. Elizabeth Warren (D-Mass.) told ºÚÁϳԹÏÍø News. “We should be making it as easy as possible for people to get the Social Security they’ve earned.”
Spokespeople for the agency did not reply to questions from ºÚÁϳԹÏÍø News.
Using AI to automate customer service is one of the buzziest businesses in Silicon Valley. In theory, the new breed of artificial intelligence technologies can smoothly respond, in a human-like voice, to just about any question. That’s not how the Social Security Administration’s bot seems to work, with users reporting canned, unrelated responses.
The Trump administration has eliminated some online statistics, obscuring the agency’s true performance, said Kathleen Romig, a former agency official who is now director of Social Security and disability policy at the left-leaning Center on Budget and Policy Priorities. The old website showed that most callers waited two hours for an answer. Now, the website doesn’t show waiting times, either for phone inquiries (once callback wait time is accounted for) or appointment scheduling.
While statistics are being posted that show beneficiaries receive help — that is, using the AI bot or the agency’s website to accomplish tasks like getting a replacement card — Romig said she thinks it’s a “very distorted view” overall. Reviews of the AI bot are often poor, she said.
Agency leaders and employees who first worked on the AI product during the Biden administration anticipated those types of difficulties. Escobar-Alava said they had worked on such a bot, but wanted to clean up the policy and regulation data it was relying on first.
“We wanted to ensure the automation produced consistent and accurate answers, which was going to take more time,” she said. Instead, it seems the Trump administration opted to introduce the bot first and troubleshoot later, Escobar-Alava said.
Romig said one former executive told her that the agency had used canned FAQs without modifications or nuances to accommodate individual situations and was monitoring the technology to see how well it performed. Escobar-Alava said she has heard similarly.
Could Automation Help?
To Bisignano, automation and web services are the most efficient ways to assist the program’s beneficiaries. In an opinion piece, he said that agency leaders “are transforming SSA into a digital-first agency that meets customers where they want to be met,” making changes that allow the vast majority of calls to be handled either in an automated fashion or by having a human return the customer’s call.
Using these methods also relieves burdens on otherwise beleaguered field offices, Bisignano wrote.
Altering the phone experience is not the end of Bisignano’s tech dreams. The agency asked Congress for additional funding for technology investments, which he intends to use for online scheduling, detecting fraud, and much more, according to a list submitted to the House in late June.
But outside experts and former employees said Bisignano overstated the novelty of the ideas he presented to Congress. The agency has been updating its technology for years, but that does not necessarily mean thousands of its workers are suddenly obsolete, Romig said. It’s not bad that the upgrades are continuing, she said, but progress has been more incremental than revolutionary.
Some changes focus on spiffing up the agency’s public face. Bisignano told House lawmakers that he oversaw a redesign of the agency’s performance-statistics page to emphasize the number of automated calls and deemphasize statistics about call wait times. He called the latter stats “discouraging” and suggested that displaying them online might dissuade beneficiaries from calling.
Warren said Bisignano has since told her privately that he would allow an “inspector general audit” of the agency’s customer-service quality data and pledged to make a list of performance information publicly available. The agency has since updated its performance statistics page.
Other changes would come at greater cost and effort. In April, the agency rolled out a security authentication program for direct deposit changes, requiring beneficiaries to verify their identity in person if what the agency described in regulatory documents as an “automated” analysis system detects anomalies.
According to the proposal, the agency estimated about 5.8 million beneficiaries would be affected — and that it would cost the federal government nearly $1.2 billion, mostly driven by staff time devoted to assisting claimants. The agency is asking for nearly $7.7 billion in the upcoming fiscal year for payroll overall.
Christopher Hensley, a financial adviser in Houston, said one of his clients called him in May after her bank changed its routing number and Social Security stopped paying her, forcing her to borrow money from her family.
It turned out that the agency had flagged her account for fraud. Hensley said she had to travel 30 minutes to the nearest Social Security office to verify her identity and correct the problem.
ºÚÁϳԹÏÍø News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF — an independent source of health policy research, polling, and journalism. This article first appeared on KFF Health News and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The initiative, dubbed the Million Veteran Program, is a “crown jewel of the country,” said David Shulkin, a physician who served as VA secretary during the first Trump administration. Data from the project has contributed to research on the genetics of anxiety and peripheral artery disease, for instance, and has resulted in hundreds of published papers. Researchers say the repository has the potential to help answer health questions not only specific to veterans — like who is most vulnerable to post-service mental health issues, or why they seem more prone to cancer — but also relevant to the nation as a whole.
“When the VA does research, it helps veterans, but it helps all Americans,” Shulkin said in an interview.
Researchers now say they fear the program is in limbo, jeopardizing the years of work it took to gather the veterans’ genetic data and other information, like surveys and blood samples.
“There’s sort of this cone of silence,” said Amy Justice, a Yale epidemiologist with a VA appointment as a staff physician. “We’ve got to make sure this survives.”
Genetic data is enormously complex, and analyzing it requires vast computing power that VA doesn’t possess. Instead, it has relied on a partnership with the Energy Department, which provides its supercomputers for research purposes.
In late April, VA Secretary Doug Collins disclosed to Sen. Richard Blumenthal, the top Democrat on the Senate Veterans’ Affairs Committee, that agreements authorizing use of the computers for the genomics project remained unsigned, with some expiring in September, according to materials shared with ºÚÁϳԹÏÍø News by congressional Democrats.
Spokespeople for the two agencies did not reply to multiple requests for comment. Other current and former employees within the agencies — who asked not to be identified, for fear of reprisal from the Trump administration — said they don’t know whether the critical agreements will be renewed.
One researcher called computing “a key ingredient” to major advances in health research, such as the discovery of new drugs.
The agreement with the Energy Department “should be extended for the next 10 years,” the researcher said.
The uncertainty has caused “incremental” damage, Justice said, pointing to some Million Veteran Program grants that have lapsed. As the year progresses, she predicted, “people are going to be feeling it a lot.”
Because of their military experience, maintaining veterans’ health poses different challenges compared with caring for civilians. The program’s examinations of genetic and clinical data allow researchers to investigate questions that have bedeviled veterans for years. As examples, Shulkin cited “how we might be able to better diagnose earlier and start thinking about effective treatments for these toxic exposures” — such as to burn pits used to dispose of trash at military outposts overseas — as well as predispositions to post-traumatic stress disorder.
“The rest of the research community isn’t likely to focus specifically” on veterans, he said. The VA community, however, has delivered discoveries of importance to the world: VA researchers have won Nobel Prizes, and the agency created the first pacemaker. Its efforts also helped ignite the boom in GLP-1 weight loss drugs.
Yet turbulence has been felt throughout VA’s research enterprise. Like other government scientific agencies, it’s been buffeted by layoffs, contract cuts, and canceled research.
“There are planned trials that have not started, there are ongoing trials that have been stopped, and there are trials that have fallen apart due to staff layoffs — yes or no?” said Sen. Patty Murray (D-Wash.), pressing Collins in a May hearing of the Senate Veterans’ Affairs Committee.
The agency, which has a budget of roughly $1 billion for its research arm this fiscal year, has slashed infrastructure that supports scientific inquiry, according to documents shared with ºÚÁϳԹÏÍø News by Senate Democrats on the Veterans’ Affairs Committee. It has canceled at least 37 research-related contracts, including for genomic sequencing and for library and biostatistics services. The department has separately canceled four contracts for cancer registries for veterans, creating potential gaps in the nation’s statistics.
Job worries also consume many scientists at the VA.
According to agency estimates in May, about 4,000 of its workers are on term limits, with contracts that expire after certain periods. Many of these individuals worked not only for the VA’s research groups but also with clinical teams or local medical centers.
When the new leaders first entered the agency, they instituted a hiring freeze, current and former VA researchers told ºÚÁϳԹÏÍø News. That prevented the agency’s research offices from renewing contracts for their scientists and support staff, which in previous years had frequently been a pro forma step. Some of those individuals who had been around for decades haven’t been rehired, one former researcher told ºÚÁϳԹÏÍø News.
The freeze and the uncertainty around it led to people simply departing the agency, a current VA researcher said.
The losses, the individual said, include some people who “had years of experience and expertise that can’t be replaced.”
Preserving jobs — or some jobs — has been a congressional focus. In May, after inquiries from Sen. Jerry Moran, the Republican who chairs the Veterans’ Affairs Committee, about staffing for agency research and the Million Veteran Program, Collins wrote in a letter that he was extending the terms of research employees for 90 days and developing exemptions to the hiring freeze for the genomics project and other research initiatives.
Holding jobs is one thing — doing them is another. In June, at the annual research meeting of AcademyHealth — an organization of researchers, policymakers, and others who study how U.S. health care is delivered — some VA researchers were unable to deliver a presentation touching on psychedelics and mental health disparities and another on discrimination against LGBTQ+ patients, Aaron Carroll, the organization’s president, told ºÚÁϳԹÏÍø News.
At that conference, reflecting a trend across the federal government, researchers from the Centers for Medicare & Medicaid Services and the Agency for Healthcare Research and Quality also dropped out of presenting. “This drop in federal participation is deeply concerning, not only for our community of researchers and practitioners but for the public, who rely on transparency, collaboration, and evidence-based policy grounded in rigorous science,” Carroll said.
We’d like to speak with current and former personnel from the Department of Health and Human Services or its component agencies who believe the public should understand the impact of what’s happening within the federal health bureaucracy. Please message ºÚÁϳԹÏÍø News on Signal at (415) 519-8778.
“That’s not part of the job of our employees or our tech supports,” said Ruth Elio, an occupational nurse who supervised the center’s workers when she spoke with ºÚÁϳԹÏÍø News last year. “Still, they’re doing that because it is important.”
Elio also helped workers with their own health problems, most frequently headaches or back pains, borne of a life of sitting for hours on end.
In a different call center, Kevin Asuncion transcribed medical visits taking place half a world away, in the United States. You can get used to the hours, he said in an interview last year: 8 p.m. to 5 a.m. His breaks were mostly spent sleeping; not much is open then.
Health risks and night shifts aside, call center workers have a new concern: artificial intelligence.
Startups are marketing AI products with lifelike voices to schedule or cancel medical visits, refill prescriptions, and help triage patients. Soon, many patients might initiate contact with the health system not by speaking with a call center worker or receptionist, but with AI. Zocdoc, the appointment-booking company, has introduced an automated assistant it says can schedule visits without human intervention 70% of the time.
The medically focused call center workforce in the Philippines is a vast one: 200,000 at the end of 2024, estimates industry trade group leader Jack Madrid. That figure is more than the number of paramedics in the United States at the end of 2023, according to the Bureau of Labor Statistics. And some employers are opening outposts in other countries, like India, while using AI to reshape or replace their workforces.
Still, it’s unclear whether AI’s digital manipulations could match the proverbial human touch. For example, a study in Nature Medicine found that while some models can diagnose maladies when presented with a canned anecdote, as prospective doctors do in training, AI struggles to elicit information from simulated patients.
“The rapport, or the trust that we give, or the emotions that we have as humans cannot be replaced,” Elio said.
Sachin Jain, president and CEO of Scan Health Plan, an insurer, said humans have context that AI doesn’t have — at least for now. A receptionist at a small practice may know the patients well enough to pick up on subtle cues and communicate to the doctor that a particular caller is “somebody that you should see, talk to, that day, that minute, or that week.”
The turn toward call centers, while creating more distance between a caller and a health provider, preserved the human touch. Yet some agents at call centers and their advocates say the ways they are monitored on the job undermine care. At one Kaiser Permanente location, it’s a “very micromanaging environment,” said one nurse who asked not to provide her name for fear of reprisal.
“From the beginning of the shift to your end, you’re expected to take call after call after call from an open queue,” she said. Even when giving advice for complex cases, “there’s an unwritten rule on how long a nurse should take per call: 12 minutes.”
Meanwhile, the job is getting tougher, she said. “We’re the backup to the health care system. We’re open 24/7,” she said. “They’re calling about their incision sites, which are bleeding. Their child has asthma, and the instructions for the medications are not clear.”
One nurses union is protesting a potential AI management tool in the call centers.
“AI tools don’t make medical decisions,” Kaiser Permanente spokesperson Vincent Staupe told ºÚÁϳԹÏÍø News. “Our physicians and care teams are always at the center of decision-making with our patients and in all our care settings, including call centers.”
Kaiser Permanente is not affiliated with KFF, a health information nonprofit that includes ºÚÁϳԹÏÍø News.
Some firms cite 30% to 50% turnover rates — stats that some say make a case for turning over the job to AI.
Call centers “can’t keep people, because it’s just a really, really challenging job,” said Adnan Iqbal, co-founder and CEO of Luma Health, which creates AI products to automate some call center work. No wonder, “if you’re getting yelled at every 90 seconds by a patient, insurance company, a staff member, what have you.”
To hear business leaders tell it, their customers are frustrated: Instead of the human touch, patients get nothing at all, stymied by long wait times and harried, disempowered workers.
One time, Marissa Moore — an investor at OMERS Ventures — got a taste of patients’ frustrations when trying to schedule a visit by phone at five doctors’ offices. “In every single one, I got a third party who had no intel on providers in the office, their availability, or anything.”
These types of gripes are increasingly common — and getting the attention of investors and businesses.
Customer complaints are hitting the bottom lines of businesses — like health insurers, which can be rewarded by the federal government’s Medicare Advantage policies for better customer service.
When Scan noticed a drop in patient ratings for some of the medical providers in its insurance network, it learned those providers had switched to using centralized call centers. Customer service suffered, and the lower ratings translated into lower payments from the federal government, Jain said.
“There’s a degree of dissatisfaction that’s bubbling up among our patients,” he said.
So, for some businesses, the notion of a computer receptionist seems a welcome solution to the problem of ineffectual call centers. AI voices, which can convincingly mimic human voices, are “beyond uncanny valley,” said Richie Cartwright, the founder of Fella, a weight loss startup that used one AI product to call pharmacies and ask if they had GLP-1s in stock.
Prices have dropped, too. Google AI’s per-use price has dropped by 97%, according to company CEO Sundar Pichai.
Some boosters are excited to put the vision of AI assistants into action. Since the second Trump administration took office, policy initiatives by the quasi-agency known as the Department of Government Efficiency, led by Elon Musk, have included using artificial intelligence bots for customer service at the Department of Education.
Most executives interviewed by ºÚÁϳԹÏÍø News — in the hospital, insurance, tech, and consultancy fields — were keen to emphasize that AI would complement humans, not replace them. Some resorted to jargon and claimed the technology might make call center nurses and employees more efficient and effective.
But some businesses are signaling that their AI models could replace human workers. Their websites hint at reducing reliance on staff. And they are developing pricing strategies based on reducing the need for labor, said Michael Yang, a venture capitalist at OMERS.
Yang described the prospect for businesses as a “we-share-in-the-upside kind of thing,” with startups pitching clients on paying them for the cost of 1½ hires and their AI doing the work of twice that number.
But providers are building narrow services at the moment. For example, the University of Arkansas for Medical Sciences started with a limited idea. The organization’s call center closes at 5 p.m., meaning patients who tried to cancel appointments after hours had to leave a phone message, creating a backlog for workers to address the next morning that took time from other scheduling tasks and left canceled appointments unfilled. So they started by using an AI system provided by Luma Health to allow after-hours cancellations and have since expanded it to allow patients to cancel appointments all day.
Michelle Winfeld-Hanrahan, the health system’s chief clinical access officer, who oversees its deployment, said UAMS has plenty of ideas for more automation, including allowing patients to check on prior authorizations and leading them through post-discharge follow-up.
Many executives claim AI tools can complement, rather than replace, humans. One company says its product can measure “vocal biomarkers” — subtle changes in tone or inflection — that correlate with disease and supply that information to human employees interacting with the patient. Some firms are using large language models to summarize complex documents: pulling out obscure insurance policies, or needed information, for employees. Others are interested in AI guiding a human through a conversation.
Even if the technology isn’t replacing people, it is reshaping their work. AI can be used to change humans’ behavior and presentation. Call center employees said in interviews that they knew of, had heard omnipresent rumors of, or feared a variety of AI tools.
At some Kaiser Permanente call centers, unionized employees protested — and successfully delayed — the implementation of an AI tool meant to measure “active listening,” a union flyer claimed.
And employees and executives associated with the call center workforce in the Philippines said they’d heard of other software tools, such as technology that changed Filipino accents to American ones. There’s “not a super huge need for that, given our relatively neutral accents, but we’ve seen that,” said Madrid, the trade group leader.
“Just because something can be automated doesn’t mean it should be,” he said.
If you or someone you know may be experiencing a mental health crisis, contact the 988 Suicide & Crisis Lifeline by dialing or texting “988.”
Vince Lahey of Carefree, Arizona, embraces chatbots. From Big Tech products to “shady” ones, they offer “someone that I could share more secrets with than my therapist.”
He especially likes the apps for feedback and support, even though sometimes they berate him or lead him to fight with his ex-wife. “I feel more inclined to share more,” Lahey said. “I don’t care about their perception of me.”
There are a lot of people like Lahey.
Demand for mental health care has grown. Self-reported poor mental health days have risen by 25% since the 1990s, according to an analysis of survey data. And according to the Centers for Disease Control and Prevention, suicide rates in 2022 reached levels that hadn’t been seen in nearly 80 years.
Many patients find a nonhuman therapist, powered by artificial intelligence, highly appealing — more appealing than a human with a reclining couch and stern manner. Some are eager for a therapist who’s “not on the clock,” who’s less judgmental, or who’s just less expensive.
Most people who need care don’t get it, said Tom Insel, former head of the National Institute of Mental Health, citing his former agency’s research. Of those who do, 40% receive “minimally acceptable care.”
“There’s a massive need for high-quality therapy,” he said. “We’re in a world in which the status quo is really crappy, to use a scientific term.”
Insel said engineers from OpenAI told him last fall that about 5% to 10% of the company’s then-roughly 800 million-strong user base rely on ChatGPT for mental health support.
Polling suggests these AI chatbots may be even more popular among young adults. A KFF poll found about 3 in 10 respondents ages 18 to 29 had used a chatbot for mental or emotional health advice in the past year. Uninsured adults were about twice as likely as insured adults to report using AI tools. And nearly 60% of adult respondents who used a chatbot for mental health didn’t follow up with a flesh-and-blood professional.
The App Will Put You on the Couch
A burgeoning industry of apps offers AI therapists with human-like, often unrealistically attractive avatars serving as a sounding board for those experiencing anxiety, depression, and other conditions.
ºÚÁϳԹÏÍø News identified some 45 AI therapy apps in Apple’s App Store in March. While many charge steep prices for their services — one listed an annual plan for $690 — they’re still generally cheaper than talk therapy, which can cost hundreds of dollars an hour without insurance coverage.
On the App Store, “therapy” is often used as a marketing term, with small print noting the apps cannot diagnose or treat disease. One app, branded as OhSofia! AI Therapy Chat, had downloads in the six figures, said OhSofia! founder Anton Ilin in December.
“People are looking for therapy,” Ilin said. On one hand, the product’s name promises it; on the other, the app warns that it “does not provide medical advice, diagnosis, treatment, or crisis intervention and is not a substitute for professional healthcare services.” Executives don’t think that’s confusing, since there are disclaimers in the app.
The apps promise big results without backup. One promises its users “immediate help during panic attacks.” Another says it was “proven effective by researchers” and that it offers 2.3 times faster relief for anxiety and stress. (It doesn’t say what it’s faster than.)
There are few legislative or regulatory guardrails around how developers refer to their products — or even whether the products are safe or effective, said Vaile Wright, senior director of the office of health care innovation at the American Psychological Association. Even federal patient privacy protections don’t apply, she said.
“Therapy is not a legally protected term,” Wright said. “So, basically, anybody can say that they give therapy.”
Many of the apps “overrepresent themselves,” said John Torous, a psychiatrist and clinical informaticist at Beth Israel Deaconess Medical Center. “Deceiving people that they have received treatment when they really have not has many negative consequences,” including delaying actual care, he said.
States such as Nevada, Illinois, and California are trying to sort out the regulatory disarray, enacting laws forbidding apps from describing their chatbots as AI therapists.
“It’s a profession. People go to school. They get licensed to do it,” said Jovan Jackson, a Nevada legislator, who co-authored an enacted bill banning apps from referring to themselves as mental health professionals.
Underlying the hype, outside researchers and company representatives themselves have told the FDA and Congress that there’s little evidence supporting the efficacy of these products. What studies there are suggest that some companion-focused chatbots are “consistently poor” at managing crises.
“When it comes to chatbots, we don’t have any good evidence it works,” said Charlotte Blease, a professor at Sweden’s Uppsala University who specializes in trial design for digital health products.
The lack of “good quality” clinical trials stems from the FDA’s failure to provide recommendations about how to test the products, she said. “FDA is offering no rigorous advice on what the standards should be.”
Department of Health and Human Services spokesperson Emily Hilliard said, in response, that “patient safety is the FDA’s highest priority” and that AI-based products are subject to agency regulations requiring the demonstration of “reasonable assurance of safety and effectiveness before they can be marketed in the U.S.”
The Silver-Tongued Apps
Preston Roche, a psychiatry resident, gets lots of questions about whether AI is a good therapist. After trying ChatGPT himself, he said he was “impressed” initially that it was able to use techniques to help him put negative thoughts “on trial.”
But Roche said after seeing posts on social media discussing people developing psychosis or being encouraged to make harmful decisions, he became disillusioned. The bots, he concluded, are sycophantic.
“When I look globally at the responsibilities of a therapist, it just completely fell on its face,” he said.
This sycophancy — the tendency of apps based on large language models to empathize, flatter, or delude their human conversation partner — is inherent to the app design, experts in digital health say.
“The models were developed to answer a question or prompt that you ask and to give you what you’re looking for,” said Insel, the former NIMH director, “and they’re really good at basically affirming what you feel and providing psychological support, like a good friend.”
That’s not what a good therapist does, though. “The point of psychotherapy is mostly to make you address the things that you have been avoiding,” he said.
While polling suggests many users are satisfied with what they’re getting out of ChatGPT and other apps, there have been complaints about the service, as well as reports of encouragement to self-harm.
And lawsuits have been filed against OpenAI after ChatGPT users died by suicide or were hospitalized. In most of those cases, the plaintiffs allege the users began using the apps for one purpose — like schoolwork — before confiding in them. These cases are being litigated.
Google and the startup Character.ai — which has been funded by Google and has created “avatars” that adopt specific personas, like athletes, celebrities, study buddies, or therapists — are settling other wrongful-death lawsuits.
OpenAI’s CEO, Sam Altman, has said a significant number of users may talk about suicide on ChatGPT.
“We have seen a problem where people that are in fragile psychiatric situations using a model like 4o can get into a worse one,” Altman said in a public question-and-answer session, referring to a particular model of ChatGPT introduced in 2024. “I don’t think this is the last time we’ll face challenges like this with a model.”
An OpenAI spokesperson did not respond to requests for comment.
The company has said it is working on safeguards, such as referring users to 988, the national suicide hotline. However, the lawsuits against OpenAI argue existing safeguards aren’t good enough, and some research shows the problems persist. OpenAI has released its own data suggesting the opposite.
OpenAI is fighting the suits, offering, early in one case, a variety of defenses ranging from denying that its product caused self-harm to alleging that the user misused the product by inducing it to discuss suicide. It has also said it’s working to improve its safeguards.
Smaller apps also rely on OpenAI or other AI models to power their products, executives told ºÚÁϳԹÏÍø News. In interviews, startup founders and other experts said they worry that if a company simply imports those models into its own service, it might duplicate whatever safety flaws exist in the original product.
Data Risks
ºÚÁϳԹÏÍø News’ review of the App Store found listed age protections are minimal: Fifteen of the nearly four dozen apps say they could be downloaded by 4-year-old users; an additional 11 say they could be downloaded by those 12 and up.
Privacy standards are opaque. On the App Store, several apps are described as neither tracking personally identifiable data nor sharing it with advertisers — but on their company websites, privacy policies contain contrary descriptions, discussing the use of such data and the disclosure of information to advertisers, like AdMob.
In response to a request for comment, Apple spokesperson Adam Dema pointed to the company’s App Store policies, which bar apps from using health data for advertising and require them to display information about how they use data in general. Dema did not respond to a request for further comment about how Apple enforces those policies.
Researchers and policy advocates said that sharing psychiatric data with social media firms means patients could be profiled. They could be targeted by dodgy treatment firms or charged different prices for goods based on their health.
ºÚÁϳԹÏÍø News contacted several app makers about these discrepancies; two that responded said their privacy policies had been put together in error and pledged to change them to reflect their stances against advertising. (A third, the team at OhSofia!, said simply that it doesn’t do advertising, though the app’s privacy policy notes users “may opt out of marketing communications.”)
One executive told ºÚÁϳԹÏÍø News there’s business pressure to maintain access to the data.
“My general feeling is a subscription model is much, much better than any sort of advertising,” said Tim Rubin, the founder of Wellness AI, adding that he’d change the description in his app’s privacy policy.
One investor advised him not to swear off advertising, he said. “They’re like, essentially, that’s the most valuable thing about having an app like this, that data.”
“I think we’re still at the beginning of what’s going to be a revolution in how people seek psychological support and, even in some cases, therapy,” Insel said. “And my concern is that there’s just no framework for any of this.”
This year, executives from nearly every major health insurance company made the same declaration in calls with Wall Street analysts: Using artificial intelligence to make coverage decisions would help save them money.
Even the Trump administration is embracing AI in managing the prior authorization process for the Medicare program, as well as seeking to override AI regulation by states.
But class action lawsuits have accused insurers of using AI to wrongfully withhold treatment. And a study outlines the risks of training AI on a current system rife with wrongful denials.
“There is a world in which using AI could make that worse, or at least replicate a bad human system, because the data that it would be training on is from that bad human system,” said Michelle Mello, a co-author of the study.
Still, Mello said, the research team found “real positives alongside the risks.”
In this video produced by ºÚÁϳԹÏÍø News’ Hannah Norman, Darius Tahir, a correspondent covering health technology, explains.
You can read Tahir’s recent coverage of AI’s use by health insurers below:
It’s a bold-sounding promise, and a familiar one; politicians from both parties have been repeating it for years now. Both Trump administrations — and the Biden administration in between — have taken whacks at making medical prices more accessible, with the goal of empowering patients to shop for better deals.
The idea makes intuitive sense. Why shouldn’t you be able to compare the prices of MRI scans, for instance?
The feds have made some strides. Prices are available, albeit in confusing or fragmentary form. But there’s one big problem: “There’s no evidence that patients use this information,” said Zack Cooper, a health economist at Yale University.
Health care is an inherently complicated marketplace. For one thing, it’s not as simple as one price for one medical stay. Two babies might be delivered by the same obstetrician, for example, but the mothers could be charged very different amounts. One patient might be given medications to speed up contractions; another might not. Or one might need an emergency cesarean section — one of many cases in medicine in which obtaining the service simply isn’t a choice. Plus, the same hospital typically has different contract terms with each insurer, making comparing prices even more difficult for patients.
Instead of helping consumers sort things out, this federally mandated price data largely has become a tool for providers and insurers, looking for intel about their competitors — so they can use it at the negotiating table in a quest for more advantageous rates.
“We use the transparency data,” said Eric Hoag, an executive at Blue Cross Blue Shield of Minnesota, noting that the insurer wants to make sure health care providers aren’t being paid substantially different rates. It’s “to make sure that we are competitive, or, you know, more than competitive against other health plans.”
For all those tugs-of-war, it’s not clear these policies have had much of an effect overall. Research shows that transparency policies can have mixed effects on prices, with one study of a New York initiative finding a marginal increase in billed charges.
Price isn’t the only piece of information negotiations hinge on. Hoag said Blue Cross Blue Shield of Minnesota also considers quality of care, rates of unnecessary treatments, and other factors. And sometimes negotiators feel they have to keep up with their peers — claiming a need for more revenue to match competitors’ salaries, for example.
Hoag said doctors and other care providers often look at the data from comparable health systems and say, “‘I need to be paid more.’”
Regulating artificial intelligence, especially its use by health insurers, is becoming a politically divisive topic, and it’s scrambling traditional partisan lines.
Boosters, led by Trump, are not only pushing its integration into government, as in Medicare prior authorization, but also trying to stop others from building curbs and guardrails. A December executive order seeks to preempt most state efforts to govern AI, describing “a race with adversaries for supremacy” in a new “technological revolution.”
“To win, United States AI companies must be free to innovate without cumbersome regulation,” Trump’s order said. “But excessive State regulation thwarts this imperative.”
Across the nation, states are in revolt. At least four — Arizona, Maryland, Nebraska, and Texas — enacted legislation last year reining in the use of AI in health insurance. Two others, Illinois and California, enacted bills the year before.
Legislators in Rhode Island plan to try again this year after a bill requiring regulators to collect data on technology use failed to clear both chambers last year. A bill in North Carolina requiring insurers not to use AI as the sole basis of a coverage decision attracted significant interest from Republican legislators last year.
Florida Gov. Ron DeSantis, a former GOP presidential candidate, has rolled out an “AI Bill of Rights,” including restrictions on its use in processing insurance claims and a requirement allowing a state regulatory body to inspect algorithms.
“We have a responsibility to ensure that new technologies develop in ways that are moral and ethical, in ways that reinforce our American values, not in ways that erode them,” DeSantis said during his State of the State address in January.
Ripe for Regulation
Polling shows Americans are skeptical of AI. A survey from Fox News found 63% of voters describe themselves as “very” or “extremely” concerned about artificial intelligence, including majorities across the political spectrum. Nearly two-thirds of Democrats and just over 3 in 5 Republicans said they had qualms about AI.
Health insurers’ tactics to hold down costs also trouble the public; polling from KFF found widespread discontent over issues like prior authorization. (KFF is a health information nonprofit that includes ºÚÁϳԹÏÍø News.) Reporting and litigation in recent years have highlighted the use of algorithms to rapidly deny insurance claims or prior authorization requests, apparently with little review by a doctor.
Last month, the House Ways and Means Committee hauled in executives from Cigna, UnitedHealth Group, and other major health insurers to address concerns about affordability. When pressed, the executives either denied or avoided talking about using the most advanced technology to reject authorization requests or toss out claims.
AI is “never used for a denial,” Cigna CEO David Cordani told lawmakers. Like others in the health insurance industry, the company is being sued for its methods of denying claims, as spotlighted by ProPublica. Cigna spokesperson Justine Sessions said the company’s claims-denial process “is not powered by AI.”
Indeed, companies are at pains to frame AI as a loyal servant. Optum, part of health giant UnitedHealth Group, announced Feb. 4 that it was rolling out tech-powered prior authorization, with plenty of mentions of speedier approvals.
“We’re transforming the prior authorization process to address the friction it causes,” said John Kontor, a senior vice president at Optum.
Still, AI is a natural field to regulate, said Alex Bores, a computer scientist and New York Assembly member who was prominent in the state’s legislative debate over AI, which culminated in a comprehensive bill governing the technology.
“So many people already find the answers that they’re getting from their insurance companies to be inscrutable,” said Bores, a Democrat who is running for Congress. “Adding in a layer that cannot by its nature explain itself doesn’t seem like it’ll be helpful there.”
At least some people in medicine — doctors, for example — are cheering legislators and regulators on. The American Medical Association “supports state regulations seeking greater accountability and transparency from commercial health insurers that use AI and machine learning tools to review prior authorization requests,” said John Whyte, the organization’s CEO.
Whyte said insurers already use AI and “doctors still face delayed patient care, opaque insurer decisions, inconsistent authorization rules, and crushing administrative work.”
Insurers Push Back
With legislation approved or pending in at least nine states, it’s unclear how much of an effect the state laws will have, said University of Minnesota law professor Daniel Schwarcz. States can’t regulate “self-insured” plans, which are used by many employers; only the federal government has that power.
But there are deeper issues, Schwarcz said: Most of the state legislation he’s seen would require a human to sign off on any decision proposed by AI but doesn’t specify what that means.
The laws don’t offer a clear framework for understanding how much review is enough, and over time humans tend to become a little lazy and simply sign off on any suggestions by a computer, he said.
Still, insurers view the spate of bills as a problem. “Broadly speaking, regulatory burden is real,” said Dan Jones, senior vice president for federal affairs at the Alliance of Community Health Plans, a trade group for some nonprofit health insurers. If insurers spend more time working through a patchwork of state and federal laws, he continued, that means “less time that can be spent and invested into what we’re intended to be doing, which is focusing on making sure that patients are getting the right access to care.”
Linda Ujifusa, a Democratic state senator in Rhode Island, said insurers came out last year against the bill she sponsored to restrict AI use in coverage denials. It passed in one chamber, though not the other.
“There’s tremendous opposition” to anything that regulates AI, she said, and “tremendous opposition” to identifying intermediaries such as private insurers or pharmacy benefit managers “as a problem.”
In a statement, AHIP, an insurer trade group, advocated for “balanced policies that promote innovation while protecting patients.”
“Health plans recognize that AI has the potential to drive better health care outcomes — enhancing patient experience, closing gaps in care, accelerating innovation, and reducing administrative burden and costs to improve the focus on patient care,” Chris Bond, an AHIP spokesperson, told ºÚÁϳԹÏÍø News. And, he continued, they need a “consistent, national approach anchored in a comprehensive federal AI policy framework.”
Seeking Balance
In California, Gov. Gavin Newsom has signed some laws regulating AI, including one requiring health insurers to ensure their algorithms are fairly and equitably applied. But the Democratic governor has vetoed others with a broader approach, such as a bill including more mandates about how the technology must work and requirements to disclose its use to regulators, clinicians, and patients upon request.
Chris Micheli, a Sacramento-based lobbyist, said the governor likely wants to ensure the state budget — consistently powered by outsize stock market gains, especially from tech companies — stays flush. That necessitates balance.
Newsom is trying to “ensure that financial spigot continues, and at the same time ensure that there are some protections for California consumers,” he said. He added insurers believe they’re subject to a welter of regulations already.
The Trump administration seems persuaded. The president’s recent executive order proposed to sue and restrict certain federal funding for any state that enacts what it characterized as “excessive” state regulation — with some exceptions, including for policies that protect children.
That order is possibly unconstitutional, said Carmel Shachar, a health policy scholar at Harvard Law School. The source of preemption authority is generally Congress, she said, and federal lawmakers twice took up, but ultimately declined to pass, a provision barring states from regulating AI.
“Based on our previous understanding of federalism and the balance of powers between Congress and the executive, a challenge here would be very likely to succeed,” Shachar said.
Some lawmakers view Trump’s order skeptically at best, noting the administration has been removing guardrails, and preventing others from erecting them, to an extreme degree.
“There isn’t really a question of, should it be federal or should it be state right now?” Bores said. “The question is, should it be state or not at all?”
Do you have an experience navigating prior authorization to get medical treatment that you’d like to share with us for our reporting?
The idea echoes a policy implemented during his first term, when Trump suggested that requiring hospitals to post their charges online could ease one of the most common gripes about the health care system — the lack of upfront prices. To anyone who’s received a bill three months after treatment only to find mysterious charges, the idea seemed intuitive.
“You’re able to go online and compare all of the hospitals and the doctors and the prices,” Trump said in 2019 at an event unveiling the price transparency policy.
But amid low compliance and other struggles in implementing the policy since it took effect in 2021, the available price data is sparse and often confusing. And instead of patients shopping for medical services, it’s mostly health systems and insurers using the little data there is, turning it into fodder for negotiations that determine what medical professionals and facilities get paid for what services.
“We use the transparency data,” said Eric Hoag, an executive at Blue Cross Blue Shield of Minnesota, noting that the insurer wants to make sure providers aren’t being paid substantially different rates. It’s “to make sure that we are competitive, or, you know, more than competitive against other health plans.”
Not all hospitals have fallen in line with the price transparency rules, and many were slow to do so. An analysis conducted in the policy’s first 10 months found only about a third of facilities had complied with the regulations. The federal Centers for Medicare & Medicaid Services warned hospitals from June 2022 to May 2025 that they would be fined for lack of compliance with the rules.
The struggles to make health care prices available have prompted more federal action since Trump’s first effort. President Joe Biden took his own thwack at the dilemma, adding requirements and toughening compliance criteria. And in early 2025, working to fulfill his promises to lower health costs, Trump tried again, signing a new executive order urging his administration to fine hospitals and doctors for failing to post their prices. CMS followed up with a regulation intended to increase the fines and the level of detail required within the pricing data.
So far, “there’s no evidence that patients use this information,” said Zack Cooper, a health economist at Yale University.
In 2021, Cooper co-authored a study based on data from a large commercial insurer. The researchers found that, on average, patients who need an MRI pass six lower-priced imaging providers on the way from their homes to an appointment for a scan. That’s because they follow their physician’s advice about where to receive care, the study showed.
Executives and researchers interviewed by ºÚÁϳԹÏÍø News also didn’t think opening the data would change prices in a big way. Research shows that transparency policies can have mixed effects on prices, with one study of a New York initiative finding a marginal increase in billed charges.
The policy results thus far seem to put a damper on long-held hopes, particularly from the GOP, that providing more price transparency would incentivize patients to find the best deal on their imaging or knee replacements.
These aspirations have been unfulfilled for a few reasons, researchers and industry insiders say. Some patients simply don’t compare services. But unlike with apples — a Honeycrisp and a Red Delicious are easy to line up side by side — medical services are hard to compare.
For one thing, it’s not as simple as one price for one medical stay. Two babies might be delivered by the same obstetrician, for example, but the mothers could be charged very different amounts. One patient might be given medications to speed up contractions; another might not. Or one might need an emergency cesarean section — one of many cases in medicine in which obtaining the service simply isn’t a choice.
And the data often is presented in a way that’s not useful for patients, sometimes buried in spreadsheets and requiring a deep knowledge of billing codes. In computing these costs, hospitals make “detailed assumptions about how to apply complex contracting terms and assess historic data to create a reasonable value for an expected allowed amount,” the American Hospital Association wrote in July 2025 amid efforts to boost transparency.
Costs vary because hospitals’ contracts with insurers vary, said Jamie Cleverley, president of Cleverley and Associates, which works with health care providers to help them understand the financial impacts of changing contract terms. The cost for a patient with one health plan may be very different than the cost for the next patient with another plan.
The fact that hospital prices might be confusing for patients is a consequence of the lack of standardization in contracts and presentation, Cleverley said. “They’re not being nefarious.”
“Until we kind of align as an industry, there’s going to continue to be this variation in terms of how people look at the data and the utility of it,” he said.
Instead of aiding shoppers, the federally mandated data has become the foundation for negotiations over the proper level of compensation.
The top use for the pricing data for health care providers and payers, such as insurers, is “to use that in their contract negotiations,” said Marcus Dorstel, an executive at price transparency startup Turquoise Health.
Turquoise Health assembles price data by grouping codes for services together using machine learning, a type of artificial intelligence. It is just one example in a cottage industry of startups offering insights into prices. And, online, the startups’ advertisements hawking their wares often focus on hospitals and their periodic jousts with insurers. Turquoise has payers and providers as clients, Dorstel said.
“I think nine times out of 10 you will hear them say that the price transparency data is a vital piece of the contract negotiation now,” he said.
Of course, prices aren’t the only variable that negotiations hinge on. Hoag said Blue Cross Blue Shield of Minnesota also considers quality of care, rates of unnecessary treatments, and other factors. And sometimes negotiators feel as if they have to keep up with their peers — claiming a need for more revenue to match competitors’ salaries, for example.
Hoag said doctors and other providers often look at the data from comparable health systems and say, “‘I need to be paid more.’”
“I don’t think people should be taking medical advice from me,” Kennedy told a Democratic congressman in May.
Kennedy once expressed different views — for example, about the need to proselytize about exercise. As he told a podcaster, he wants to use the “bully pulpit” to “obliterate the delicacy” with which Americans discuss fitness and explain that “suffering” is virtuous.
“We need to establish an ethic that you’re not a good parent unless your kids are doing some kind of physical activity,” Kennedy told the podcaster in September 2024.
The Department of Health and Human Services is tasked with communicating information to protect and improve the health and well-being of every American. It provides reminders about vaccinations and screenings; alerts about which food is unsafe; and useful, everyday tips about subjects such as sunscreen and, yes, exercise.
Under Kennedy’s watch, though, HHS has compromised once-fruitful campaigns promoting immunizations and other preventive health measures. On Instagram, the agency often emphasizes Kennedy’s personal causes, his pet projects, or even the secretary himself. Former agency employees say communications have a more political edge, with “Make America Healthy Again” frequently featured in press releases.
Interviews with over 20 former and current agency employees provide a look inside a health department where personality and politics steer what is said to the public. ºÚÁϳԹÏÍø News granted many of these people anonymity because they fear retribution.
One sign of change is what is no longer, or soon will not be, amplified — for instance, acclaimed anti-smoking campaigns that made a dent in one of Kennedy’s priorities, chronic disease.
Another sign is what gets celebrated. On the official HHS Instagram account this year, out were posts saluting Juneteenth and Father’s Day; in, under Kennedy, were posts spotlighting the secretary and his causes.
Commenting on such changes, HHS spokesperson Andrew Nixon said in an email that “DEI is gone, thanks to the Trump administration.”
Some elected officials are pointedly not promoting Kennedy as a source of health care information. Regarding the secretary’s announcement citing unproven links between Tylenol and autism, Senate Majority Leader John Thune told MSNBC that, “if I were a woman, I’d be talking to my doctor and not taking, you know, advice from RFK or any other government bureaucrat, for that matter.” (Thune’s office did not respond to a request for comment.)
Polls taken since January show trust in Kennedy as a medical adviser is low. In one, from The Economist and YouGov, barely over a quarter of respondents said they trusted Kennedy “a lot” or “somewhat.”
The department’s online messaging looks “a lot more like propaganda than it does public health,” said Kevin Griffis, who worked in communications at the CDC under President Joe Biden.
Transition to a New Administration
The new administration inaugurated dramatic changes. Upon arrival, political appointees froze the health agency’s outside communications on a broader scale than in previous changeovers, halting everything from routine webpage updates to meetings with grant recipients. The pause created logistical snafus: For example, one CDC employee described being forced to cancel, and later rebook, advertising campaigns — at greater cost to taxpayers.
Even before the gag order was lifted in the spring, the tone and direction of HHS’ public communications had shifted.
According to data shared by iSpot.tv, a market research firm that tracks television advertising, at least four HHS ads about vaccines ended within two weeks of Trump’s inauguration.
“Flu campaigns were halted,” during a season in which children died from influenza, Deb Houry, who had resigned as the CDC’s chief medical officer, said in a Sept. 17 congressional hearing.
Instead of urging people to get vaccinated, HHS officials contemplated more-ambivalent messaging, said Griffis, then the CDC’s director of communications. According to Griffis, other former agency employees, and communications reviewed by ºÚÁϳԹÏÍø News, Nixon contemplated a campaign that would put more emphasis on vaccine risks. It would “be promoting, quote-unquote, ‘informed choice,’” Griffis said.
Nixon called the claim “categorically false.” Still, the department continues to push anti-vaccine messaging. In November, the CDC updated its website to assert the false claim that vaccines may cause autism.
Messaging related to tobacco control has been pulled back, according to Brian King, an executive at the Campaign for Tobacco-Free Kids, as well as multiple current and former CDC employees. Layoffs, administrative leaves, and funding turmoil have drained offices at the CDC and the FDA focused on educating people about the risks of smoking and vaping, King said.
Four current and former CDC employees told ºÚÁϳԹÏÍø News that “Tips From Former Smokers,” a campaign credited with helping approximately a million people quit smoking, is in danger. Ordinarily, a contract for the next year’s campaign would have been signed by now. But, as of Nov. 21, there was no contractor, the current and former employees said.
Nixon did not respond to a question from ºÚÁϳԹÏÍø News regarding plans for the program.
“We’re currently in an apocalypse for national tobacco education campaigns in this country,” King said.
Kennedy’s HHS has a different focus for its education campaigns, including the “Take Back Your Health” campaign, for which the department issued a solicitation this year to produce “viral” and “edgy” content to urge Americans to exercise.
An earlier version of the campaign’s solicitation asked for partners to boost wearables, such as gadgets that track steps or glucose levels — reflecting a stated goal for every American to be wearing such a device within four years.
The source of funds for the exercise campaign? In the spring, leadership of multiple agencies discussed using funding for the CDC’s Tips From Former Smokers campaign, employees from those agencies said. By the fall, the smoking program hadn’t spent all its funds, the current and former CDC employees said.
Nixon did not respond to questions about the source of funding for the exercise campaign.
Food Fight
At the FDA, former employees said they noticed new types of political interference as Trump officials took the reins, sometimes making subtle tweaks to public communications, sometimes changing wholesale what messages went out. The interventions into messaging — what was said, but also what went unsaid — proved problematic, they said.
Early this year, multiple employees told ºÚÁϳԹÏÍø News, Nixon gave agency employees a quick deadline to gather a list of all policy initiatives underway on infant formula. That was then branded “Operation Stork Speed,” as if it were a new push by a new administration.
Marianna Naum, a former acting director of external communications and consumer education at the FDA, said she supports parts of the Trump administration’s agenda. But she said she disagreed with how it handled Operation Stork Speed. “It felt like they were trying to put out information so they can say: ‘Look at the great work. Look how fast we did it,’” she said.
Nixon called the account “false” without elaborating. ºÚÁϳԹÏÍø News spoke with three other employees with the same recollections of the origins of Operation Stork Speed.
“Things that didn’t fit within their agenda, they were downplayed,” Naum said.
For example, she said, Trump political appointees resisted a proposed press release noting agency approval of cell-cultured pork — that is, pork grown in a lab. Similar products have raised the ire of ranchers and farmers working in typically GOP-friendly industries. States such as Florida have banned such products.
The agency ultimately issued a release. But a review of the agency’s archives showed it hasn’t put out press releases about two later approvals of cell-cultured meat.
Wide-ranging layoffs have also hit the FDA’s food office hard, leaving fewer people to make sure news gets distributed properly and promptly. Former employees say notices about recalled foods aren’t circulated as widely as they used to be, meaning fewer eyeballs on alerts about contaminated foods.
Nixon did not respond to questions about changes in food recalls. Overall, Nixon answered nine of 53 questions posed by ºÚÁϳԹÏÍø News.
Pushing Politics
Televised HHS public service campaigns earned nearly 7.3 billion fewer impressions in the first half of 2025 versus the same period in 2022, according to iSpot data, with the drop being concentrated in pro-vaccine messaging. Other types of ads, such as those covering substance use and mental health, also fell. Data from the marketing intelligence firm Sensor Tower shows similar drops in HHS ad spending online.
With many of the longtime professionals laid off and new political appointees in place atop the hierarchy, a new communications strategy — bearing the hallmarks of Kennedy’s personality — is being built, said the current and former HHS employees, plus public health officials interviewed by ºÚÁϳԹÏÍø News.
Whereas in 2024, the agency would mostly post public health resources such as the 988 suicide hotline on its Instagram page, its feed in 2025 features more of the health secretary himself. Through the end of August, according to a ºÚÁϳԹÏÍø News review, 77 of its 101 posts featured Kennedy — often fishing, biking, or doing pullups, as well as pitching his policies.
By contrast, only 146 of the agency’s 754 posts last year, or about 20%, featured Xavier Becerra, Kennedy’s predecessor.
In 2024, on Instagram, the agency promoted Medicare and individual insurance open enrollment; in 2025, the agency has not.
In 2024, the agency’s Instagram feed included some politicking as Biden ran for reelection, but the posts were less frequent and often indirect — for instance, touting a policy enacted under Biden’s signature legislation, the Inflation Reduction Act, but without mentioning the name of the bill or its connection to the president.
In 2025, sloganeering is a frequent feature of the agency’s Kennedy-era Instagram. Through the end of August, “Make America Healthy Again” or variants of the catchphrase featured in at least 48% of posts.
Amid the layoffs, the agency made a notable addition to its team. It hired a state legislative spokesperson as a “rapid response” coordinator, a role that employees from previous administrations said had not existed before at HHS.
“Like other Trump administration agencies, HHS is continuously rebutting fake news for the benefit of the public,” Nixon said when asked about the role.
On the day Houry and Susan Monarez, the CDC leader ousted in late August, testified before senators about Kennedy’s leadership, the agency’s X feed posted clips belittling the former officials. The department also derisively rebuts unfavorable news coverage.
“It’s very interesting to watch the memeification of the United States and critical global health infrastructure,” said McKenzie Wilson, an HHS spokesperson under Biden. “The entire purpose of this agency is to inform the public about safety, emergencies as they happen.”
‘Clear, Powerful Messages From Bobby’
Kennedy’s “Make America Healthy Again” strategy, released in September, proposes public awareness campaigns on subjects such as illegal vaping and fluoride levels in water, while reassuring Americans that the regulatory system for pesticides is “robust.”
Those priorities reflect — and are amplified by — cadres of activists outside government. Since the summer, HHS officials have appeared on Zoom calls with aligned advocacy groups, trying to drum up support for Kennedy’s agenda.
On one such call — on which, according to host Tony Lyons, activists “representing over 250 million followers on social media” were registered — famous names such as motivational speaker Tony Robbins gave pep talks about how to influence elected officials and the public.
“Each week, you’re gonna get clear, powerful messages from Bobby, from HHS, from their team,” Robbins said. “And your mission is to amplify it, to make it your own, to speak from your soul, to be bold, to be relentless, to be loving, to be loud, you know, because this is how we make the change.”
The communications strategy captivates the public, but it also confuses it.
Anne Zink, formerly the chief medical officer for Alaska, said she thought Kennedy’s messaging was some of the catchiest of any HHS secretary.
But, she said, in her work as an emergency physician, she has seen the consequences of his department’s policies in her puzzled patients. Patients question vaccines. Children show up with gastrointestinal symptoms that Zink suspects are related to raw milk consumption.
“I increasingly see people say, ‘I just don’t know what to trust, because I just hear all sorts of things out there,’” she said.
ºÚÁϳԹÏÍø News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF, an independent source of health policy research, polling, and journalism. This article first appeared on KFF Health News and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The pilot program, designed to weed out wasteful, “low-value” services, amounts to a federal expansion of an unpopular process called prior authorization, which requires patients or someone on their medical team to seek insurance approval before proceeding with certain procedures, tests, and prescriptions. It will affect Medicare patients, and the doctors and hospitals who care for them, in Arizona, Ohio, Oklahoma, New Jersey, Texas, and Washington, starting Jan. 1 and running through 2031.
The move has raised eyebrows among politicians and policy experts. The traditional version of Medicare, which covers adults 65 and older and some people with disabilities, has mostly eschewed prior authorization. Still, it is widely used by private insurers, especially in the Medicare Advantage market.
And the timing was surprising: The pilot was announced just days after the Trump administration unveiled a voluntary effort by private health insurers to revamp and reduce their own use of prior authorization, a practice that causes care to be “significantly delayed,” said Mehmet Oz, administrator of the Centers for Medicare & Medicaid Services.
“It erodes public trust in the health care system,” Oz told the media. “It’s something that we can’t tolerate in this administration.”
But some critics, like Vinay Rathi, an Ohio State University doctor and policy researcher, have accused the Trump administration of sending mixed messages.
On one hand, the federal government wants to borrow cost-cutting measures used by private insurance, he said. “On the other, it slaps them on the wrist.”
Administration officials are “talking out of both sides of their mouth,” said Rep. Suzan DelBene, a Washington Democrat. “It’s hugely concerning.”
Patients, doctors, and other lawmakers have also been critical of what they see as delay-or-deny tactics, which can slow down or block access to care, causing irreparable harm and even death.
“Insurance companies have put it in their mantra that they will take patients’ money and then do their damnedest to deny giving it to the people who deliver care,” said Rep. Greg Murphy, a North Carolina Republican and a urologist. “That goes on in every insurance company boardroom.”
Insurers have long argued that prior authorization reduces fraud and wasteful spending, as well as prevents potential harm. Public displeasure with insurance denials dominated the news in December, when the shooting death of UnitedHealthcare’s CEO led many to anoint his alleged killer as a folk hero.
And the public broadly dislikes the practice: Nearly three-quarters of respondents called prior authorization a “major” problem in a poll by KFF, a health information nonprofit that includes ºÚÁϳԹÏÍø News.
Indeed, Oz said during his June press conference that “violence in the streets” prompted the Trump administration to take on the issue of prior authorization reform in the private insurance industry.
Still, the administration is expanding the use of prior authorization in Medicare. CMS spokesperson Alexx Pons said both initiatives “serve the same goal of protecting patients and Medicare dollars.”
Unanswered Questions
The pilot, dubbed WISeR — short for “Wasteful and Inappropriate Service Reduction” — will test the use of an AI algorithm in making prior authorization decisions for some Medicare services, including skin and tissue substitutes, electrical nerve stimulator implants, and knee arthroscopy.
The federal government says such procedures are particularly vulnerable to “fraud, waste, and abuse” and could be held in check by prior authorization.
Other procedures may be added to the list. But services that are inpatient-only, emergency, or “would pose a substantial risk to patients if significantly delayed” would not be subject to the AI model’s assessment, according to the federal announcement.
While the use of AI in health insurance isn’t new, Medicare has been slow to adopt the private-sector tools. Medicare has historically used prior authorization in a limited way, with contractors who aren’t incentivized to deny services. But experts who have studied the plan believe the federal pilot could change that.
Pons told ºÚÁϳԹÏÍø News that no Medicare request will be denied before being reviewed by a “qualified human clinician,” and that vendors “are prohibited from compensation arrangements tied to denial rates.” While the government says vendors will be rewarded for savings, Pons said multiple safeguards will “remove any incentive to deny medically appropriate care.”
“Shared savings arrangements mean that vendors financially benefit when less care is delivered,” a structure that can create a powerful incentive for companies to deny medically necessary care, said Jennifer Brackeen, senior director of government affairs for the Washington State Hospital Association.
And doctors and policy experts say that’s only one concern.
Rathi said the plan “is not fully fleshed out” and relies on “messy and subjective” measures. The model, he said, ultimately depends on contractors to assess their own results, a choice that makes the results potentially suspect.
“I’m not sure they know, even, how they’re going to figure out whether this is helping or hurting patients,” he said.
Pons said the use of AI in the Medicare pilot will be “subject to strict oversight to ensure transparency, accountability, and alignment with Medicare rules and patient protection.”
“CMS remains committed to ensuring that automated tools support, not replace, clinically sound decision-making,” he said.
Experts agree that AI is theoretically capable of expediting what has been a cumbersome process marked by delays and denials that can harm patients’ health. Health insurers have argued that AI eliminates human error and bias and will save the health care system money. These companies have also insisted that humans, not computers, are ultimately reviewing coverage decisions.
But some scholars are doubtful that’s routinely happening.
“I think that there’s also probably a little bit of ambiguity over what constitutes ‘meaningful human review,’” said Amy Killelea, an assistant research professor at the Center on Health Insurance Reforms at Georgetown University.
A 2023 ProPublica investigation found that, over a two-month period, doctors at Cigna who reviewed requests for payment spent an average of only 1.2 seconds on each case.
Cigna spokesperson Justine Sessions told ºÚÁϳԹÏÍø News that the company does not use AI to deny care or claims. The ProPublica investigation referenced a “simple software-driven process that helped accelerate payments to clinicians for common, relatively low-cost tests and treatments, and it is not powered by AI,” Sessions said. “It was not used for prior authorizations.”
And yet class-action lawsuits filed against major health insurers have alleged that flawed AI models undermine doctor recommendations and fail to take patients’ unique needs into account, forcing some people to shoulder the financial burden of their care.
Meanwhile, a survey released by the American Medical Association in February found that 61% of physicians think AI is “increasing prior authorization denials, exacerbating avoidable patient harms and escalating unnecessary waste now and into the future.”
Chris Bond, a spokesperson for the insurers’ trade group AHIP, told ºÚÁϳԹÏÍø News that the organization is “zeroed in” on implementing the commitments made to the government. Those include reducing the scope of prior authorization and making sure that communications with patients about denials and appeals are easy to understand.
‘This Is a Pilot’
The Medicare pilot program underscores ongoing concerns about prior authorization and raises new ones.
While private health insurers have been opaque about how they use AI and the extent to which they use prior authorization, policy researchers believe these algorithms are often programmed to automatically deny high-cost care.
“The more expensive it is, the more likely it is to be denied,” said Jennifer Oliva, a professor at the Maurer School of Law at Indiana University-Bloomington, whose work focuses on AI regulation and health coverage.
Oliva explained in a recent analysis that when a patient is expected to die within a few years, health insurers are “motivated to rely on the algorithm.” As time passes and the patient or their provider is forced to appeal a denial, the chance of the patient dying during that process increases. The longer an appeal drags on, the less likely the health insurer is to pay the claim, Oliva said.
“The No. 1 thing to do is make it very, very difficult for people to get high-cost services,” she said.
As the use of AI by health insurers is poised to grow, insurance company algorithms amount to a “regulatory blind spot” and demand more scrutiny, said Carmel Shachar, a faculty director at Harvard Law School’s Center for Health Law and Policy Innovation.
The WISeR pilot is “an interesting step” toward using AI to ensure that Medicare dollars are purchasing high-quality health care, she said. But the lack of details makes it difficult to determine whether it will work.
Politicians are grappling with some of the same questions.
“How is this being tested in the first place? How are you going to make sure that it is working and not denying care or producing higher rates of care denial?” asked DelBene, who wrote to Oz along with other Democrats demanding answers about the AI program. But Democrats aren’t the only ones worried.
Murphy, who co-chairs the House GOP Doctors Caucus, acknowledged that many physicians are concerned the WISeR pilot could overreach into their practice of medicine if the AI algorithm denies doctor-recommended care.
Meanwhile, House members of both parties recently supported a measure introduced by a Florida Democrat to block funding for the pilot in the fiscal 2026 budget of the Department of Health and Human Services.
AI in health care is here to stay, Murphy said, but it remains to be seen whether the WISeR pilot will save Medicare money or contribute to the problems already posed by prior authorization.
“This is a pilot, and I’m open to see what’s going to happen with this,” Murphy said, “but I will always, always err on the side that doctors know what’s best for their patients.”
McGing, calling on behalf of his son, had an in-the-weeds question: how to prevent overpayments that the federal government might later claw back. His call was intercepted by an artificial intelligence-powered chatbot.
No matter what he said, the bot parroted canned answers to generic questions, not McGing’s obscure query. “If you do a key press, it didn’t do anything,” he said. Eventually, the bot “glitched or whatever” and got him to an agent.
It was a small but revealing incident. Unbeknownst to McGing, a former Social Security employee in Maryland, he had encountered a technological tool recently introduced by the agency. Former officials and longtime observers of the agency say the Trump administration rolled out a product that was tested but deemed not yet ready during the Biden administration.
“With the new administration, they’re just kind of like, let’s go fast and fix it later, which I don’t agree with, because you are going to generate a lot of confusion,” said Marcela Escobar-Alava, who served as Social Security’s chief information officer under President Joe Biden.
Some 74 million people receive Social Security benefits; 11 million of those receive disability payments. In a recent survey, more than a third of recipients said they wouldn’t be able to afford such necessities as food, clothing, or housing without their benefits. And yet the agency has been shedding the employees who serve them: Some 6,200 have left the agency, its commissioner has said, and critics in Congress and elsewhere say that’s led to worse customer service, despite the agency’s efforts to build up new technology.
Take the new phone bot. At least some beneficiaries don’t like it: Social Security’s social media feed is, from time to time, pockmarked with negative reviews of the uncooperative bot, even as the agency said in July that many calls are handled by the bot.
Lawmakers and former agency employees worry it foreshadows a less human Social Security, in which rushed-out AI takes the place of pushed-out, experienced employees.
Anxieties Across Party Lines
Concern over the direction of the agency is bipartisan. In May, a group of House Republicans wrote to the agency expressing support for government efficiency, but cautioning that their constituents had criticized the agency for “inadequate customer service” and suggesting that some measures may be “overly burdensome.”
The agency’s commissioner, Frank Bisignano, a former Wall Street executive, is a tech enthusiast. He has a laundry list of initiatives on which to spend the $600 million in new tech money in the Trump administration’s fiscal 2026 budget request. He’s gotten testy when asked whether his plans mean he’ll be replacing human staff with AI.
“You referred to SSA being on an all-time staffing low; it’s also at an all-time technological high,” he snapped at one Democrat in a House hearing in late June.
But former Social Security officials are more ambivalent. In interviews with ºÚÁϳԹÏÍø News, people who left the agency — some speaking on the condition of anonymity for fear of retribution from the Trump administration and its supporters — said they believe the new administration simply rushed out technologies developed, but deemed not yet ready, by the Biden administration. They also said the agency’s firing of thousands of employees resulted in the loss of experienced technologists who are best equipped to roll out these initiatives and address their weaknesses.
“Social Security’s new AI phone tool is making it even harder for people to get help over the phone — and near impossible if someone needs an American Sign Language interpreter or translator,” Sen. Elizabeth Warren (D-Mass.) told ºÚÁϳԹÏÍø News. “We should be making it as easy as possible for people to get the Social Security they’ve earned.”
Spokespeople for the agency did not reply to questions from ºÚÁϳԹÏÍø News.
Using AI to automate customer service is one of the buzziest businesses in Silicon Valley. In theory, the new breed of artificial intelligence technologies can smoothly respond, in a human-like voice, to just about any question. That’s not how the Social Security Administration’s bot seems to work, with users reporting canned, unrelated responses.
The Trump administration has eliminated some online statistics, obscuring the agency’s true performance, said Kathleen Romig, a former agency official who is now director of Social Security and disability policy at the left-leaning Center on Budget and Policy Priorities. The old website showed that most callers waited two hours for an answer. Now, the website doesn’t show waiting times for either phone inquiries (once callback wait time is accounted for) or appointment scheduling.
While statistics are being posted that show beneficiaries receive help — that is, using the AI bot or the agency’s website to accomplish tasks like getting a replacement card — Romig said she thinks it’s a “very distorted view” overall. Reviews of the AI bot are often poor, she said.
Agency leaders and employees who first worked on the AI product during the Biden administration anticipated those types of difficulties. Escobar-Alava said they had worked on such a bot, but wanted to clean up the policy and regulation data it was relying on first.
“We wanted to ensure the automation produced consistent and accurate answers, which was going to take more time,” she said. Instead, it seems the Trump administration opted to introduce the bot first and troubleshoot later, Escobar-Alava said.
Romig said one former executive told her that the agency had used canned FAQs without modifications or nuances to accommodate individual situations and was monitoring the technology to see how well it performed. Escobar-Alava said she has heard similarly.
Could Automation Help?
To Bisignano, automation and web services are the most efficient ways to assist the program’s beneficiaries. In a , he said that agency leaders “are transforming SSA into a digital-first agency that meets customers where they want to be met,” making changes that allow the vast majority of calls to be handled either in an automated fashion or by having a human return the customer’s call.
Using these methods also relieves burdens on otherwise beleaguered field offices, Bisignano wrote.
Altering the phone experience is not the end of Bisignano’s tech dreams. The agency asked Congress for additional funding for technology investments, which he intends to use for online scheduling, detecting fraud, and much more, according to a list submitted to the House in late June.
But outside experts and former employees said Bisignano overstated the novelty of the ideas he presented to Congress. The agency has been updating its technology for years, but that does not necessarily mean thousands of its workers are suddenly obsolete, Romig said. It’s not bad that the upgrades are continuing, she said, but progress has been more incremental than revolutionary.
Some changes focus on spiffing up the agency’s public face. Bisignano told House lawmakers that he oversaw a redesign of the agency’s performance-statistics page to emphasize the number of automated calls and deemphasize statistics about call wait times. He called the latter stats “discouraging” and suggested that displaying them online might dissuade beneficiaries from calling.
Warren said Bisignano has since told her privately that he would allow an “inspector general audit” of the agency’s customer-service quality data and pledged to make a list of performance information publicly available. The agency has since updated its performance statistics page.
Other changes would come at greater cost and effort. In April, the agency rolled out a security authentication program for direct deposit changes, requiring beneficiaries to verify their identity in person if what the agency described in regulatory documents as an “automated” analysis system detects anomalies.
According to the proposal, the agency estimated about 5.8 million beneficiaries would be affected — and that it would cost the federal government nearly $1.2 billion, mostly driven by staff time devoted to assisting claimants. The agency is asking for nearly $7.7 billion in the upcoming fiscal year for payroll overall.
Christopher Hensley, a financial adviser in Houston, said one of his clients called him in May after her bank changed its routing number and Social Security stopped paying her, forcing her to borrow money from her family.
It turned out that the agency had flagged her account for fraud. Hensley said she had to travel 30 minutes to the nearest Social Security office to verify her identity and correct the problem.
The initiative, dubbed the Million Veteran Program, is a “crown jewel of the country,” said David Shulkin, a physician who served as VA secretary during the first Trump administration. Data from the project has contributed to research on the genetics of anxiety and peripheral artery disease, for instance, and has resulted in hundreds of published papers. Researchers say the repository has the potential to help answer health questions not only specific to veterans — like who is most vulnerable to post-service mental health issues, or why they seem more prone to cancer — but also relevant to the nation as a whole.
“When the VA does research, it helps veterans, but it helps all Americans,” Shulkin said in an interview.
Researchers now say they fear the program is in limbo, jeopardizing the years of work it took to gather the veterans’ genetic data and other information, like surveys and blood samples.
“There’s sort of this cone of silence,” said Amy Justice, a Yale epidemiologist with a VA appointment as a staff physician. “We’ve got to make sure this survives.”
Genetic data is enormously complex, and analyzing it requires vast computing power that VA doesn’t possess. Instead, it has relied on a partnership with the Energy Department, which provides its supercomputers for research purposes.
In late April, VA Secretary Doug Collins disclosed to Sen. Richard Blumenthal, the top Democrat on the Senate Veterans’ Affairs Committee, that agreements authorizing use of the computers for the genomics project remained unsigned, with some expiring in September, according to materials shared with ºÚÁϳԹÏÍø News by congressional Democrats.
Spokespeople for the two agencies did not reply to multiple requests for comment. Other current and former employees within the agencies — who asked not to be identified, for fear of reprisal from the Trump administration — said they don’t know whether the critical agreements will be renewed.
One researcher called computing “a key ingredient” to major advances in health research, such as the discovery of new drugs.
The agreement with the Energy Department “should be extended for the next 10 years,” the researcher said.
The uncertainty has caused “incremental” damage, Justice said, pointing to some Million Veteran Program grants that have lapsed. As the year progresses, she predicted, “people are going to be feeling it a lot.”
Because of their military experience, maintaining veterans’ health poses different challenges compared with caring for civilians. The program’s examinations of genetic and clinical data allow researchers to investigate questions that have bedeviled veterans for years. As examples, Shulkin cited “how we might be able to better diagnose earlier and start thinking about effective treatments for these toxic exposures” — such as to burn pits used to dispose of trash at military outposts overseas — as well as predispositions to post-traumatic stress disorder.
“The rest of the research community isn’t likely to focus specifically” on veterans, he said. The VA community, however, has delivered discoveries of importance to the world: its researchers have won Nobel Prizes, and the agency created the first pacemaker. Its efforts also helped ignite the boom in GLP-1 weight loss drugs.
Yet turbulence has been felt throughout VA’s research enterprise. Like other government scientific agencies, it’s been buffeted by layoffs, contract cuts, and canceled research.
“There are planned trials that have not started, there are ongoing trials that have been stopped, and there are trials that have fallen apart due to staff layoffs — yes or no?” said Sen. Patty Murray (D-Wash.), pressing Collins in a May hearing of the Senate Veterans’ Affairs Committee.
The agency, which has a budget of roughly $1 billion for its research arm this fiscal year, has slashed infrastructure that supports scientific inquiry, according to documents shared with ºÚÁϳԹÏÍø News by Senate Democrats on the Veterans’ Affairs Committee. It has canceled at least 37 research-related contracts, including for genomic sequencing and for library and biostatistics services. The department has separately canceled four contracts for cancer registries for veterans, creating potential gaps in the nation’s statistics.
Job worries also consume many scientists at the VA.
According to agency estimates in May, about 4,000 of its workers are on term limits, with contracts that expire after certain periods. Many of these individuals worked not only for the VA’s research groups but also with clinical teams or local medical centers.
When the new leaders first entered the agency, they instituted a hiring freeze, current and former VA researchers told ºÚÁϳԹÏÍø News. That prevented the agency’s research offices from renewing contracts for their scientists and support staff, which in previous years had frequently been a pro forma step. Some of those individuals who had been around for decades haven’t been rehired, one former researcher told ºÚÁϳԹÏÍø News.
The freeze and the uncertainty around it led to people simply departing the agency, a current VA researcher said.
The losses, the individual said, include some people who “had years of experience and expertise that can’t be replaced.”
Preserving jobs — or some jobs — has been a congressional focus. In May, after inquiries from Sen. Jerry Moran, the Republican who chairs the Veterans’ Affairs Committee, about staffing for agency research and the Million Veteran Program, Collins wrote in a letter that he was extending the terms of research employees for 90 days and developing exemptions to the hiring freeze for the genomics project and other research initiatives.
Holding jobs is one thing — doing them is another. In June, at the annual research meeting of AcademyHealth — an organization of researchers, policymakers, and others who study how U.S. health care is delivered — some VA researchers were unable to deliver a presentation touching on psychedelics and mental health disparities and another on discrimination against LGBTQ+ patients, Aaron Carroll, the organization’s president, told ºÚÁϳԹÏÍø News.
At that conference, reflecting a trend across the federal government, researchers from the Centers for Medicare & Medicaid Services and the Agency for Healthcare Research and Quality also dropped out of presenting. “This drop in federal participation is deeply concerning, not only for our community of researchers and practitioners but for the public, who rely on transparency, collaboration, and evidence-based policy grounded in rigorous science,” Carroll said.
We’d like to speak with current and former personnel from the Department of Health and Human Services or its component agencies who believe the public should understand the impact of what’s happening within the federal health bureaucracy. Please message ºÚÁϳԹÏÍø News on Signal at (415) 519-8778.
“That’s not part of the job of our employees or our tech supports,” said Ruth Elio, an occupational nurse who supervised the center’s workers when she spoke with ºÚÁϳԹÏÍø News last year. “Still, they’re doing that because it is important.”
Elio also helped workers with their own health problems, most frequently headaches or back pains, born of a work life spent sitting for hours on end.
In a different call center, Kevin Asuncion transcribed medical visits taking place half a world away, in the United States. You can get used to the hours, he said in an interview last year: 8 p.m. to 5 a.m. His breaks were mostly spent sleeping; not much is open then.
Health risks and night shifts aside, call center workers have a new concern: artificial intelligence.
Startups are marketing AI products with lifelike voices to schedule or cancel medical visits, refill prescriptions, and help triage patients. Soon, many patients might initiate contact with the health system not by speaking with a call center worker or receptionist, but with AI. Zocdoc, the appointment-booking company, has introduced an automated assistant it says can schedule visits without human intervention 70% of the time.
The medically focused call center workforce in the Philippines is a vast one: 200,000 at the end of 2024, estimates industry trade group leader Jack Madrid. That figure is more than the number of paramedics in the United States at the end of 2023, according to the Bureau of Labor Statistics. And some employers are opening outposts in other countries, like India, while using AI to reshape or replace their workforces.
Still, it’s unclear whether AI’s digital manipulations could match the proverbial human touch. For example, a study in Nature Medicine found that while some models can diagnose maladies when presented with a canned anecdote, as prospective doctors do in training, AI struggles to elicit information from simulated patients.
“The rapport, or the trust that we give, or the emotions that we have as humans cannot be replaced,” Elio said.
Sachin Jain, president and CEO of Scan Health Plan, an insurer, said humans have context that AI doesn’t have — at least for now. A receptionist at a small practice may know the patients well enough to pick up on subtle cues and communicate to the doctor that a particular caller is “somebody that you should see, talk to, that day, that minute, or that week.”
The turn toward call centers, while creating more distance between a caller and a health provider, preserved the human touch. Yet some agents at call centers and their advocates say the ways they are monitored on the job undermine care. At one Kaiser Permanente location, it’s a “very micromanaging environment,” said one nurse who asked not to provide her name for fear of reprisal.
“From the beginning of the shift to your end, you’re expected to take call after call after call from an open queue,” she said. Even when giving advice for complex cases, “there’s an unwritten rule on how long a nurse should take per call: 12 minutes.”
Meanwhile, the job is getting tougher, she said. “We’re the backup to the health care system. We’re open 24/7,” she said. “They’re calling about their incision sites, which are bleeding. Their child has asthma, and the instructions for the medications are not clear.”
One nurses union is protesting a potential AI management tool in the call centers.
“AI tools don’t make medical decisions,” Kaiser Permanente spokesperson Vincent Staupe told ºÚÁϳԹÏÍø News. “Our physicians and care teams are always at the center of decision-making with our patients and in all our care settings, including call centers.”
Kaiser Permanente is not affiliated with KFF, a health information nonprofit that includes ºÚÁϳԹÏÍø News.
Some firms cite 30% to 50% turnover rates — stats that some say make a case for turning over the job to AI.
Call centers “can’t keep people, because it’s just a really, really challenging job,” said Adnan Iqbal, co-founder and CEO of Luma Health, which creates AI products to automate some call center work. No wonder, “if you’re getting yelled at every 90 seconds by a patient, insurance company, a staff member, what have you.”
To hear business leaders tell it, their customers are frustrated: Instead of the human touch, patients get nothing at all, stymied by long wait times and harried, disempowered workers.
One time, Marissa Moore — an investor at OMERS Ventures — got a taste of patients’ frustrations when trying to schedule a visit by phone at five doctors’ offices. “In every single one, I got a third party who had no intel on providers in the office, their availability, or anything.”
These types of gripes are increasingly common — and getting the attention of investors and businesses.
Customer complaints are hitting the bottom lines of businesses — like health insurers, which can be rewarded by the federal government’s Medicare Advantage policies for better customer service.
When Scan noticed a drop in patient ratings for some of the medical providers in its insurance network, it learned those providers had switched to using centralized call centers. Customer service suffered, and the lower ratings translated into lower payments from the federal government, Jain said.
“There’s a degree of dissatisfaction that’s bubbling up among our patients,” he said.
So, for some businesses, the notion of a computer receptionist seems a welcome solution to the problem of ineffectual call centers. AI voices, which can convincingly mimic human speech, are “beyond uncanny valley,” said Richie Cartwright, the founder of Fella, a weight loss startup that used one AI product to call pharmacies and ask if they had GLP-1s in stock.
Prices have dropped, too. Google AI’s per-use price has dropped by 97%, company CEO Sundar Pichai has said.
Some boosters are excited to put the vision of AI assistants into action. Since the second Trump administration took office, policy initiatives by the quasi-agency known as the Department of Government Efficiency, led by Elon Musk, have included using artificial intelligence bots for customer service at the Department of Education.
Most executives interviewed by ºÚÁϳԹÏÍø News — in the hospital, insurance, tech, and consultancy fields — were keen to emphasize that AI would complement humans, not replace them. Some resorted to jargon and claimed the technology might make call center nurses and employees more efficient and effective.
But some businesses are signaling that their AI models could replace human workers. Their websites hint at reducing reliance on staff. And they are developing pricing strategies based on reducing the need for labor, said Michael Yang, a venture capitalist at OMERS.
Yang described the prospect for businesses as a “we-share-in-the-upside kind of thing,” with startups pitching clients on paying them for the cost of 1½ hires and their AI doing the work of twice that number.
But providers are building narrow services at the moment. For example, the University of Arkansas for Medical Sciences started with a limited idea. The organization’s call center closes at 5 p.m., meaning patients who tried to cancel appointments after hours left a phone message, creating a backlog for workers to address the next morning that took time from other scheduling tasks and left canceled appointments unfilled. So the health system started by using an AI system provided by Luma Health to handle after-hours cancellations and has since expanded it to let patients cancel appointments at any hour.
Michelle Winfeld-Hanrahan, the health system’s chief clinical access officer, who oversees its deployment, said UAMS has plenty of ideas for more automation, including allowing patients to check on prior authorizations and leading them through post-discharge follow-up.
Many executives claim AI tools can complement, rather than replace, humans. One company says its product can measure “vocal biomarkers” — subtle changes in tone or inflection — that correlate with disease and supply that information to human employees interacting with the patient. Some firms are using large language models to summarize complex documents: pulling out obscure insurance policies, or needed information, for employees. Others are interested in AI guiding a human through a conversation.
Even if the technology isn’t replacing people, it is reshaping their work. AI can be used to change humans’ behavior and presentation. Call center employees said in interviews that they knew of, had heard persistent rumors of, or feared a variety of AI tools.
At some Kaiser Permanente call centers, unionized employees protested — and successfully delayed — the implementation of an AI tool meant to measure “active listening,” a union flyer claimed.
And employees and executives associated with the call center workforce in the Philippines said they’d heard of other software tools, such as technology that changed Filipino accents to American ones. There’s “not a super huge need for that, given our relatively neutral accents, but we’ve seen that,” said Madrid, the trade group leader.
“Just because something can be automated doesn’t mean it should be,” he said.