AI can be a transformative health tool. It can also cause real harm.
As AI systems grow more capable, many experts are wondering whether the risks outweigh the benefits. Some of those risks are distant but catastrophic. Others are happening right now at a smaller scale.
Hello and welcome to Healthbeat’s weekly report on stories shaping public health in the United States.
I am Dr. Jay K. Varma, a physician, epidemiologist, and public health expert currently serving as chief medical officer at Fedcap, a national nonprofit focused on economic mobility and well-being for vulnerable communities. Views expressed here are my own.
This week, I’m focusing on a question that sounds like science fiction but is increasingly being asked in serious policy circles: Will artificial intelligence kill us all?
AI can help health agencies avert illness and death
Let’s start with my bias. I believe that AI is a transformative tool that can help public health agencies avert illness and death in their communities if used responsibly and ethically.
It can help epidemiologists process, analyze, and interpret surveillance data. It can assist health officials in tailoring communications, particularly in emergencies, and in improving programs for immunizations, sexually transmitted infections, tuberculosis, and maternal and child health.
However, as AI systems grow more capable, many experts wonder whether the risks could outweigh the benefits. Some of those risks are distant but catastrophic. Others are happening right now at a smaller scale.
An AI safety leader warns the ‘world is in peril’
Earlier this month, a senior safeguards researcher at the AI company Anthropic resigned, warning that the “world is in peril.” In his resignation letter, he cited concerns about AI, bioweapons, and interconnected global crises.
Anthropic has positioned itself as a safety-oriented firm in the race to build increasingly powerful generative AI systems. The New Yorker just published an in-depth investigation into Anthropic and the inherent tensions people in the company feel trying to build advanced AI systems while safeguarding against risks to humanity.
Curiously, the U.S. Defense Department recently took the opposite view: that Anthropic may be a risk to the federal government because it is too focused on ethics and protecting human health.
(Full disclosure: I’m a big fan of Claude Code, one of Anthropic’s premier products. Here’s an excellent review of that product by widely respected AI expert Ethan Mollick.)
The departing Anthropic researcher had led work on reducing risks from AI-assisted bioterrorism and on understanding how AI assistants could distort human judgment. His concerns are shared by others in the field.
As AI models become more proficient in synthesizing scientific literature and solving technical problems, they may lower the barrier to designing or modifying pathogens. Biology is already digitized. Genome sequences are publicly available. Laboratory protocols are published in journals and databases.
Recent research has tested advanced AI models on graduate-level virology problems, demonstrating that they can perform at or near expert level on tasks involving viral genetics and pathogenesis. While such capabilities can help humanity by improving vaccine design or antiviral discovery, they could also be misused to enhance the infectiousness or immune evasion of pathogens.
Having led outbreak responses in Asia, Africa, and the United States, I still believe that we need to worry most about nature — “spillover” of viruses from animals into humans — when trying to prevent the next pandemic.
Nevertheless, the pace of AI development is extraordinary, which means my risk assessment is changing almost daily. An intentionally engineered pathogen designed for maximum spread could have far greater consequences than even the Covid pandemic.
Proposals to reduce biosecurity risks
Researchers and policymakers have outlined approaches to mitigate risk.
One proposal is to secure closed-source biological AI models, restricting access and subjecting users to rigorous vetting and monitoring. Companies can conduct exercises in which they deliberately probe their own systems for vulnerabilities before malicious actors do.
A second approach is to protect high-risk biological datasets from being used to fine-tune open-source models. If detailed genomic data on high-consequence pathogens or advanced laboratory protocols are easily accessible for model training, they can amplify the capabilities of otherwise general-purpose systems. Tighter access controls and clearer publication norms could reduce misuse.
A third approach is to restrict AI agents that interface with biological tools. As AI systems move beyond generating text to interacting with laboratory equipment or ordering synthetic DNA, safeguards need to ensure that a human screens DNA synthesis orders and audits AI-designed workflows.
In the AI era, biosecurity will require multiple layers of defense that protect humanity at the level of algorithms, data, and laboratory infrastructure.
The near-term mental health risks
While headlines often focus on existential threats, I am most concerned at the moment about a more immediate hazard: AI’s impact on mental health and suicide.
Millions of people now interact daily with AI “companion” chatbots designed to simulate friendship, empathy, or therapeutic dialogue. Some users, including minors, disclose suicidal thoughts to these systems. In multiple lawsuits, plaintiffs allege that AI chatbot interactions contributed to users’ deaths by suicide.
In response, California enacted Senate Bill 243 in 2025, the first law in the nation regulating AI companion chatbots. The law requires clear disclosure when a user is interacting with AI rather than a human, mandates protocols for responding to suicidal ideation, and imposes additional rules for known minors, including periodic reminders that the chatbot is not human and restrictions on sexually explicit content. It also creates a private right of action, allowing civil lawsuits for violations.
California’s law is an important milestone. It acknowledges that emotionally realistic AI systems can foster attachment and dependence, particularly among adolescents. It also recognizes that AI systems need regulation to ensure they do not harm mental health or promote suicidal thinking.
We do not know, however, whether these regulations will work. Disclosure that a system is “not human” may not counteract emotional realism. Protocol requirements do not guarantee effectiveness. Operators may avoid collecting information about users’ ages, limiting their obligations to minors.
From a public health perspective, these debates remind me of the challenges with social media platforms. Almost 16 years after Instagram launched, its parent company may finally be held legally liable for the platform’s impact on mental health.
Last week, Mark Zuckerberg, CEO of Meta, testified in court about whether Instagram, which Meta owns, was designed to addict and harm teenagers. Social media platforms optimized for engagement have amplified loneliness, misinformation, and depression, and AI systems could do the same at an even larger scale.
Evaluating the uncertainty
Back to the original question: Will AI kill us all? I believe the probability of an extinction-level event is low today, but low is not zero, and the consequences would be catastrophic. At the same time, the mental health risks of emotionally persuasive AI systems have already appeared and need to be addressed.
Public health agencies have always had to prepare for rare catastrophic events while addressing everyday harms. With AI, we must now do both at once.
ICYMI
Here’s a recap of the latest reporting from Healthbeat:
AI in global health: AI therapy can help in places with scarce care, but experts urge regulation
📰 Sign up here to get Healthbeat’s weekly Global Health Checkup in your inbox a day early.
NYC funding: Health leaders, advocates applaud partial restoration of Article 6 funding for NYC public health services
Vaccines: Controversial CDC vaccine panel cancels February meeting. Lack of public agenda violates rules.
Until next week,
Jay
Thumbnail image by Getty Images