Ethics and AI adoption
AI has entered the pharmacy—not with a white coat and clipboard but with code, algorithms and a growing list of ethical questions that don’t come with an easy set of answers.
Among the ethical concerns to contend with are patient data privacy, cybersecurity, AI misinformation, bias, liability and job displacement.
For retail pharmacies, there is low-hanging fruit that can make operations more efficient and free up staff to focus on the still-vital human connection with consumers and patients.
“Inventory management and pricing is a no-brainer,” said Lisa Schwartz, PharmD, senior director of professional affairs at the National Community Pharmacists Association. “Ordering or checking the status of a refill should be automated as much as possible. Data from a point-of-sale system or wholesaler could give insights on individual product or category turns and adjust the seasonal plan.”
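As a rough illustration of the kind of point-of-sale analysis Schwartz describes, the sketch below computes inventory turns from a POS export and flags fast and slow movers. The product names, figures and threshold are hypothetical, not drawn from any real pharmacy system.

```python
# Minimal sketch: inventory turns from point-of-sale data.
# All products, numbers and the turns threshold are hypothetical.

def inventory_turns(units_sold: int, avg_units_on_hand: float) -> float:
    """Turns for the period = units sold / average units on hand."""
    if avg_units_on_hand <= 0:
        return 0.0
    return units_sold / avg_units_on_hand

# Hypothetical POS export: (product, units sold last 12 months, average on hand)
pos_data = [
    ("allergy_relief_24ct", 480, 40),  # seasonal, fast mover
    ("cough_syrup_8oz", 120, 60),      # slow mover
]

for product, sold, on_hand in pos_data:
    turns = inventory_turns(sold, on_hand)
    action = "reorder more often" if turns > 6 else "trim shelf space"
    print(f"{product}: {turns:.1f} turns/year -> {action}")
```

In practice the same calculation, run per category and per season, is what lets a system "adjust the seasonal plan" automatically rather than by gut feel.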
Ready for the first step?
Pharmacy owners are just starting to consider adopting AI to automate or simplify business processes. It’s a big step to take, but as the saying goes, by the end of the decade there may be two kinds of businesses: those that use AI, and those that are out of business.
[Related: Retailers put AI to work]
“We are still scoping out how to best incorporate this into our pharmacy workflows and systems,” said Jeremy Faulks, vice president of pharmacy at Thrifty White Pharmacy, a chain of 88 pharmacies serving the upper Midwest. He said he “thinks it’ll be helpful” and is working on tech integration.
Part of that promise is AI’s ability to make predictions. Think of predicting patient traffic throughout the day so managers can better plan staffing and inventory for the day or week ahead.
Using data from manufacturers, wholesalers and public demand, AI can help predict drug shortages so that pharmacists may support patients by switching medications ahead of a shortage, saving time and resources later.
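One simple version of the shortage prediction described above is to watch wholesaler fill rates and flag a drug when its recent average dips. The sketch below is a hypothetical illustration; the window, threshold and weekly figures are illustrative assumptions, not values from any real wholesaler feed.

```python
# Minimal sketch: flag possible drug shortages from wholesaler fill rates.
# Window, threshold and sample data are hypothetical.

def shortage_risk(fill_rates: list[float], window: int = 3,
                  threshold: float = 0.85) -> bool:
    """Flag a drug when its recent average fill rate falls below a threshold."""
    recent = fill_rates[-window:]
    return sum(recent) / len(recent) < threshold

# Fraction of ordered units actually received, by week
weekly_fill_rates = [0.98, 0.97, 0.92, 0.80, 0.75]

if shortage_risk(weekly_fill_rates):
    print("Possible shortage ahead: review therapeutic alternatives now")
```

A production system would blend many more signals (manufacturer notices, public demand data), but the payoff is the same one the article describes: switching medications ahead of a shortage instead of scrambling after it.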
It can also predict risk for medication errors based upon an individual’s experience and the number of patient interactions in the hour or hours before.
New tools can accommodate patient communication preferences—by method (like phone or text message, for example) or by time of day. Other tools could help the pharmacy manage inventory, keep prices for OTCs and prescriptions competitive, or assure proper documentation and coding for medical billing or value-based contracts.
All of which sounds great. But watch out: store managers must act with integrity and intelligence around a host of virtual issues that can affect real-life people.
Patient data and security
Tech offers the promise of efficiencies that can only help the bottom line, but AI systems that handle sensitive health data require heightened vigilance. Ensuring patient consent and maintaining HIPAA compliance are paramount. Missteps can not only breach trust but could also result in legal consequences.
“Applications in clinical areas are emerging more slowly while pharmacists evaluate how professional liability and health data privacy are affected,” said Schwartz. A key first step in developing an AI strategy is to treat AI like any other system that handles sensitive information. For internal, company-approved AI systems, continue to use strong passwords, never share login credentials and be cautious when entering personal or health data.
“Pharmacy teams don’t need to be tech experts to play a critical role in protecting patient data,” said Dennen Monks, field tech strategist at CrowdStrike, a leading data security firm. “AI can be a helpful assistant, but like any technology it needs to be used carefully and with clear guardrails.”
Monks told DSN of a range of threats pharmacy managers should keep in mind. “Threat actors,” as he calls them, can trick AI systems by feeding them bad information, interfering with how they make decisions, or slipping in harmful data that can put important systems at risk. Attacks can expose sensitive information, make the AI provide bad advice or incorrectly represent company objectives.
[Related: Does DEI make cents?]
“In a real-world setting like a drug store, that could mean manipulating AI systems used for inventory, scheduling, or even customer interactions,” said Monks. “The risks are real, especially as more critical decisions are delegated to AI. But they’re also manageable. With the right safeguards and threat intelligence, organizations can embrace AI’s benefits while staying protected.”
Vigilance means watching for AI systems that might provide bad information, interfere with how store staff make decisions, or slip in harmful data that puts important systems at risk. Robust cybersecurity strategies should be implemented, said Monks, to protect against ransomware, data theft and operational disruptions.
“If you’re using AI-powered chat or scheduling tools, double-check their suggestions before acting on them,” said Monks. “And if anything seems off, like unusual system behavior or questionable recommendations, report it.”
Can regulations tame the beast?
An October 2023 executive order signed by then-President Biden called for the safe, secure and trustworthy development of AI. It included concerns around healthcare, specifically “when AI use deepens discrimination,” and “where mistakes by or misuse of AI could harm patients ... including through bias or discrimination.”
This Biden executive order was supplanted by a brief one signed by President Trump in January 2025, which called out societal harms around bias but only in the context of “engineered social agendas” and did not specifically mention healthcare.
Although Congress has not passed any legislation, several bills are pending.
For its part, the American Pharmacists Association’s House of Delegates in 2024 passed a suite of policies regarding the “judicious” use of AI, ensuring pharmacist inclusion rather than replacement in pharmacy practice.
AI, concluded the policies, must elevate the pharmacist’s role, not replace it. Education, caution, collaboration and transparency will ensure AI supports both patient care and professional dignity.
These APhA policy prescriptions included:
- Using pharmacists in designing AI programs.
- Using AI programs to elevate the practice and enhance patient care.
- Ensuring patient safety and privacy.
- Mitigating bias and misinformation.
- Training users in the lawful, ethical and clinical use of AI.
Bias and misinformation
Here’s what really concerns anyone using AI: that bots get their information from the world wide web. Remember the old saying to only trust half of what you read?
That was in the golden era of professional publications with journalists, editors and fact-checkers. But now large language models pull information from sources that are not always accurate. You might say they know just enough to be dangerous.
And what may be worse than flagrant violations or occasional obvious mistruths are subtle forms of institutional bias. This bias is baked in because much of the historical published clinical research was conducted on Caucasian males. That is starting to change for women, less so for ethnic minorities and other groups underrepresented in medical studies.
For instance, women and Black patients are more likely to suffer adverse drug effects than the white men who have served as study subjects for decades, and whose results inform AI systems. The emerging studies that demonstrate this disparity, however, are not always enough to inform the systems that guide practitioners who prescribe medicines to patients.
“AI tools have to learn from data, and data are produced in the real world. That means they learn about what does happen in an inefficient, error-prone and unfair health system, which is often a far cry from what should happen,” said Ziad Obermeyer, MD, a physician and researcher who works at the intersection of machine learning and health. Time magazine called him one of the 100 most influential people in AI.
Obermeyer argues that AI tools must be developed and evaluated with the utmost caution. For pharmacies looking to use AI in operations, one question every store manager should ask an AI vendor is what kind of evaluation has been conducted to make sure the algorithm works as expected.
“It is important to recognize that AI models do contain bias,” said Brigid Groves, vice president of professional affairs at the American Pharmacists Association. “So before the data is used to make decisions or process change, you need to understand the model that created it.”
That means work on the front end to make sure AI tools are aligned with ethical care standards. It means ensuring AI tools are not unintentionally delivering different care to different kinds of customers.
Pharmacies must scrutinize AI inputs and training data to avoid perpetuating health inequities.
“Collaborations between pharmacy professionals, AI developers and other healthcare professionals can enhance AI programs and tools to support pharmacists,” said Groves.

That fear around bias and misinformation can go straight to consumers as they engage with the healthcare system.
Already, insurance giant Cigna has been hit with a class-action lawsuit alleging its AI algorithm, PxDx, rejects claims in an average of 1.2 seconds per claim. The original healthcare decisions are made by actual doctors; the rejections come from bots. TikTok abounds with videos explaining how “AI denied my medical claim.” Good luck getting that denial overturned.
With such automatic healthcare decisions being made by AI, is it any wonder consumers should be concerned about this brave new world? That concern should also go straight to your drug store practice.
Is AI coming for your job?
As automation takes over tasks traditionally done by pharmacists or techs, fears of job loss must be addressed. All across the world, people are concerned that robots are coming for their jobs. Are those concerns founded? “Some may see it as a threat,” said Groves, “while others view it as a tool to increase capacity so that pharmacists may expand the scope of the offerings at their location.”
Even in these heady, early days of AI adoption, people are seeing that AI is a tool, a co-pilot, one that still needs human input. Thoughtful workplace planning, even retraining in some situations, is a key management consideration. Transparency about shifting roles and upskilling opportunities is crucial for morale.
[Related: Unjust pharmacy deserts]
“Communicating with staff about how their duties shift to tasks that can’t be automated is important to help people feel valued,” said Schwartz.
Stay human
At the end of the day, tech is not here to replace us but to help us serve more efficiently and effectively. The vital human connection cuts both ways. For one, while clinical judgment can be aided by insights delivered by AI, such as recognizing drug-drug interactions and providing warnings, it is the human doctors and pharmacists who make the final call. They are also the ones responsible if something goes wrong.
“There is no abdicating professional responsibility,” said Schwartz. “Clinical judgment may be aided by insights served up by AI, but it is ultimately up to the pharmacist.” That means treating AI tools as just that—tools. And at the end of every tool is a human hand, a human brain—and a human heart.
“AI tools are unable to replicate human characteristics,” said Groves, “such as empathy and ethical judgment.” At retail, the human connection means keeping a focus on the patients who rely on pharmacists to diagnose and treat them. That goes beyond the treatment of patients and to the business of engaging with customers and delivering person-centered care.
“As long as patients still perceive they have the power to request a human,” counseled Schwartz, “using AI to interact with patients by phone, text or chatbot can improve satisfaction with the pharmacy.” The success of your business depends on it.