Paul Mastoridis: How to Integrate AI Into Medical Strategy Without Losing Credibility

Artificial intelligence (AI) is already shaping how clinicians detect disease patterns, monitor adherence, and interpret patient behavior in chronic conditions such as asthma and chronic obstructive pulmonary disease (COPD). For all of AI’s promise, however, medicine still operates on trust. Dr. Paul Mastoridis, a pharmaceutical and medtech executive with more than 25 years of experience across drug development, medical affairs, and digital health innovation, says that without credibility, even the most advanced model risks becoming irrelevant.

His work developing AI-enabled respiratory solutions and machine-learning diagnostics has reinforced a principle that many healthcare organizations are only beginning to understand. “Credibility is the currency,” Mastoridis says. “AI earns it only when it respects how clinicians actually work.” As pharmaceutical companies, healthcare systems, and early-stage companies race to integrate AI into medical strategy, it’s an important principle to remember. The challenge is not whether AI can accelerate workflows but whether it can do so without undermining the clinical rigor medicine depends on.

Explainability Matters More Than Speed

Mastoridis believes AI loses credibility the moment it ignores the realities of patient care. In respiratory medicine, where outcomes are shaped by inhaler adherence, symptom variability, environmental triggers, and patient behavior, clinicians are quick to question systems that cannot explain their reasoning. “Respiratory clinicians don’t reject AI because it’s new,” he says. “They reject it when it gives answers without showing its reasoning.” A model that flags poor asthma control but cannot distinguish whether the issue stems from inhaler misuse, inconsistent medication adherence, or comorbidities risks appearing unreliable.

Mastoridis saw this firsthand while developing a machine-learning diagnostic model designed to differentiate asthma from COPD. The breakthrough was not simply the model’s predictive capability, but the decision to make the logic visible to clinicians. “When we showed pulmonologists which symptom clusters, medication histories, and respiratory patterns drove the model’s classification, they understood it,” he says. “When we simply presented a prediction, they pushed back.”

Validation also played a defining role. The model was tested against real-world patients with incomplete histories, overlapping phenotypes, and inconsistent spirometry measurements. “Clinicians don’t need AI to be perfect,” Mastoridis says. “They need it to be clinically honest.”

Building Medical Affairs Teams That Use AI Responsibly

As AI becomes more embedded in healthcare operations, Mastoridis believes the organizations seeing the greatest value are not necessarily the most technologically advanced. They are the ones integrating AI into practical decision-making, while maintaining human oversight.

Medical Science Liaisons may use AI tools to identify regional inhaled corticosteroid adherence patterns before engaging key opinion leaders. Medical directors can monitor spikes in rescue inhaler use and escalate potential safety concerns earlier than traditional systems allow. Teams may rely on AI to summarize updated Global Initiative for Chronic Obstructive Lung Disease (GOLD) or Global Initiative for Asthma (GINA) guidelines, but every clinical claim still requires human validation. “The team doesn’t outsource judgment to algorithms,” Mastoridis says. “They use AI to see the field faster and clearer.” AI is becoming less about replacing expertise and more about augmenting it. The organizations that succeed are likely to be the ones that preserve accountability, while improving efficiency.

The Hidden Risks of Generative AI in Healthcare

Generative AI has introduced another layer of complexity. While healthcare companies increasingly use large language models (LLMs) to accelerate administrative and content-related work, Mastoridis says that subtle inaccuracies can quickly create reputational and clinical risk. AI can be highly effective at reducing manual workload through literature summaries, protocol drafts, and adherence trend reports. Problems emerge when systems generate clinically incorrect information that appears authoritative. “If AI summarizes a COPD study but misstates whether the population was GOLD A or GOLD B, that’s a credibility hit,” Mastoridis says. “If it confuses rescue use with exacerbation rate, it undermines trust.”

The same applies to patient-facing education. Oversimplified explanations of inhaler technique or respiratory management can unintentionally mislead patients, despite appearing polished and professional. “In respiratory medicine, precision is the brand,” he says. “AI can accelerate, but it cannot author.” Mastoridis has become particularly vocal about the tendency to overtrust AI outputs without sufficient scrutiny and recently challenged an AI platform that repeatedly delivered inaccurate information despite publicly available evidence contradicting it. “You have to question AI,” he says. “People believe what AI tells them, even though AI doesn’t always have the full picture.”

Medical Leaders Are Becoming Real-Time Decision Makers

The next phase of AI integration may fundamentally reshape the role of medical leaders themselves. With the FDA piloting real-time AI clinical trials, Mastoridis expects medical affairs teams to move from retrospective reviewers of evidence to active operational decision-makers during trial execution. If AI systems detect that a subgroup of COPD patients is responding differently during a study, inclusion criteria or dosing adjustments could happen mid-trial rather than months later. If environmental factors trigger rising asthma symptoms in a patient cohort, medical teams may intervene before the issue escalates into a broader safety concern.

“Medical leaders will shift from reviewing evidence to steering it in real time,” Mastoridis says. This will require leaders who understand both the science and the systems driving it, as technical literacy alone will not be enough. Organizations will need executives capable of balancing innovation with clinical accountability, regulatory awareness, and patient trust.

Follow Paul Mastoridis on LinkedIn or visit his website for more insights.
