Artificial Intelligence in Clinical Decision-Making: Regulatory Roadmap and Reimbursement Strategies

December 11, 2025

Reading Time: 9 min

Artificial intelligence is rapidly transforming clinical medicine, with AI-powered tools increasingly being used for diagnostic imaging interpretation, clinical decision support, predictive analytics and treatment planning. As healthcare organizations adopt AI technologies and developers bring new products to market, they face an evolving regulatory landscape involving ongoing legislative debate, Food and Drug Administration (FDA) oversight, changes in CMS reimbursement policies, medical liability considerations and clinical integration challenges.

Key Takeaways

  • The FDA has authorized over 1,200 AI/ML-enabled medical devices since 1995, with accelerated growth in authorizations for clinical decision support tools in recent years.
  • CMS is developing payment policies for AI-enabled services, including new CPT codes and coverage determinations.
  • Clinical integration of AI raises workflow, liability and physician acceptance challenges that affect adoption rates.
  • AI algorithm bias and health equity concerns are driving regulatory scrutiny and requiring algorithmic fairness assessments.
  • Healthcare stakeholders should develop AI governance frameworks addressing validation, monitoring and clinical oversight.

FDA Regulatory Framework for AI/ML Medical Devices

The FDA regulates AI and machine learning algorithms as medical devices when they are intended to diagnose, treat, mitigate or prevent disease. A device's classification determines the applicable regulatory pathway: Class I devices (lowest risk) may be exempt from premarket review, Class II devices typically require 510(k) clearance demonstrating substantial equivalence and Class III devices (highest risk) require premarket approval supported by clinical data. Although many of the AI/ML-enabled devices that have been authorized do not involve clinical decision support (CDS), those intended for CDS have most frequently been authorized through the 510(k) pathway. The 21st Century Cures Act supports CDS development by clarifying which software is regulated as a medical device and which is not, excluding certain administrative, electronic health record, CDS, general wellness and health management tools from FDA regulation as medical devices.

Key AI/ML medical device categories receiving FDA authorization include radiology AI analyzing X-rays, CT and MRI for findings like fractures and hemorrhages; cardiology AI interpreting ECGs and detecting arrhythmias; pathology AI assisting with tissue analysis and cancer detection; retinal imaging AI screening for diabetic retinopathy; and CDS algorithms predicting sepsis or clinical deterioration.

FDA's Center for Devices and Radiological Health and its Digital Health Center of Excellence provide a focal point for AI/ML device regulation and have published guidance on CDS software, predetermined change control plans for adaptive algorithms and good machine learning practices. Additionally, FDA's Digital Health Center of Excellence recently announced the “Technology-Enabled Meaningful Patient Outcomes (TEMPO) for Digital Health Devices Pilot” in connection with the Center for Medicare and Medicaid Innovation's ACCESS model to promote access to certain digital health services while safeguarding patient safety.

In general, it is critical that developers engage FDA early through pre-submission meetings, provide robust validation data demonstrating algorithm performance across diverse populations, address potential algorithmic bias and develop post-market surveillance plans to monitor real-world performance.

Office of the National Coordinator for Health IT and Predictive Decision Support

The U.S. Department of Health and Human Services' Office of the National Coordinator for Health Information Technology (ONC) published a final rule on December 13, 2023, Health Data, Technology and Interoperability: Certification Program Updates, Algorithm Transparency and Information Sharing (HTI-1), which finalizes extensive transparency and risk management requirements for “predictive decision support interventions (DSIs),” a category that includes predictive tools based on AI. As detailed in our prior alert, the rule appears to subject AI-based software regulated by the FDA as a medical device to additional requirements, establishes a new Insights Condition and Maintenance of Certification and finalizes other updates to the ONC Health IT Certification Program.

In the rule, a predictive DSI means “technology that supports decision-making based on algorithms or models that derive relationships from training data and then produces an output that results in prediction, classification, recommendation, evaluation, or analysis.” ONC's requirements apply to a wide range of predictive tools, including AI-based tools, and as a result may impose obligations on certain software as a medical device (SaMD) cleared or approved by the FDA that differ from the terms of its clearance or approval. Sponsors of affected SaMD should assess whether these new requirements create any conflicts with FDA requirements.

CMS Coverage and Reimbursement for AI-Enabled Services

Medicare payment policy for AI-enabled services is evolving as clinical adoption increases. One example is the 2026 Hospital Outpatient Prospective Payment System (OPPS) Final Rule, which establishes national reimbursement under the OPPS for AI-assisted cardiac analysis. The American Medical Association's CPT Editorial Panel has also established several Category I CPT codes for AI-enabled services, including codes for AI-assisted retinal imaging analysis and cardiac imaging interpretation, and additional codes are under consideration as AI applications expand. Additionally, local coverage determinations (LCDs) issued by Medicare Administrative Contractors address AI technologies, establishing coverage criteria including clinical indications, provider qualifications and documentation requirements. CMS is also evaluating AI technologies through its Coverage with Evidence Development pathway, which provides conditional, limited coverage while evidence is collected.

Over the long run, structural changes are likely needed for Medicare payment policies to keep pace with innovation. Because Medicare has defined benefit categories and different payment systems for different provider types and clinical settings, there is no standard method for covering and paying for every FDA-approved AI-enabled device. Coverage policies are item- or service-specific, and early adopters of AI often face uncertainty as to whether their item or service will be considered “reasonable and necessary” and therefore eligible for payment. Further, even if a service is covered, existing payment methodologies may not adequately reimburse the AI tools. Congress is exploring ways to eliminate some of these structural barriers to ensure that Medicare beneficiaries have access to AI-driven technologies. One example is the Health Tech Investment Act, introduced earlier this year, which seeks to establish Medicare reimbursement pathways for FDA-approved, algorithm-based healthcare services.

Clinical Integration and Workflow Challenges

From a clinical perspective, AI integration into medical practice requires workflow redesign, physician training and careful attention to human-AI interaction. Key integration challenges include alert fatigue, where excessive AI-generated alerts lead to clinician desensitization; workflow disruption, if AI tools add steps or slow processes; interpretability needs, as clinicians must understand AI reasoning to trust recommendations; and integration with electronic health records and other clinical systems.

Emergency medicine provides illustrative examples of these points. AI algorithms that detect critical findings on head CT scans can expedite stroke treatment if they alert physicians immediately. But if alerts arrive while physicians are managing other critical patients, or if false positive rates are high, the tools may add more burden than benefit.

Physicians’ acceptance of AI varies by specialty and context. Trust increases when algorithms provide explanations of their reasoning, when performance metrics are transparent and regularly updated, and when systems allow physicians to override recommendations with appropriate documentation.

Hospitals and health systems should conduct workflow analysis before AI deployment, provide clinicians with comprehensive training on AI tool use and limitations, establish feedback mechanisms for providers, monitor override rates and continuously evaluate AI's impact on clinical efficiency and outcomes.

Algorithmic Bias and Health Equity Considerations

Algorithmic bias—systematic errors in AI predictions that disadvantage particular groups—has emerged as a critical concern. Bias can arise from training data that underrepresent certain populations, outcome definitions that systematically differ across groups or feature selection that incorporates proxies for protected characteristics.

Regulatory responses are emerging at both the federal and state levels. FDA guidance on good machine learning practices emphasizes diverse training data and performance assessment across demographic subgroups, while several states, including Colorado, Utah, Massachusetts and Texas, have proposed legislation requiring algorithmic impact assessments for AI used in healthcare decisions. These state-level legislative efforts, some of which focus on risk management, transparency, provider oversight and the use of AI by health insurance carriers, can be both additive to and distinct from FDA's regulatory efforts and require attention.

Healthcare systems deploying AI should consider building on FDA's efforts by assessing training data diversity, conducting performance validation across demographic subgroups, monitoring real-world performance stratified by race, ethnicity and gender, establishing governance processes to review potential bias and ensuring clinical staff are trained to recognize potential AI bias.
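
As a rough illustration of what performance validation across demographic subgroups could look like in practice, the sketch below computes sensitivity, specificity and positive predictive value for a hypothetical binary AI classifier, stratified by a demographic column. The dataset, column names (race_ethnicity, outcome, ai_prediction) and the choice of metrics are assumptions made for this example, not a prescribed methodology or any particular product's output.

```python
# Illustrative sketch only: stratified performance check for a hypothetical
# binary AI classifier. Column names and data are assumed for this example.
import pandas as pd

def subgroup_performance(df: pd.DataFrame, group_col: str,
                         label_col: str = "outcome",
                         pred_col: str = "ai_prediction") -> pd.DataFrame:
    """Compute sensitivity, specificity and PPV for each demographic subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        tp = ((sub[pred_col] == 1) & (sub[label_col] == 1)).sum()
        fp = ((sub[pred_col] == 1) & (sub[label_col] == 0)).sum()
        fn = ((sub[pred_col] == 0) & (sub[label_col] == 1)).sum()
        tn = ((sub[pred_col] == 0) & (sub[label_col] == 0)).sum()
        rows.append({
            group_col: group,
            "n": len(sub),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
            "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),
        })
    return pd.DataFrame(rows)

# Hypothetical usage with made-up data; a real assessment would use
# validated labels and a governance-approved demographic schema.
example = pd.DataFrame({
    "race_ethnicity": ["A", "A", "B", "B", "B", "A"],
    "outcome":        [1,   0,   1,   0,   1,   1],
    "ai_prediction":  [1,   0,   0,   0,   1,   1],
})
print(subgroup_performance(example, "race_ethnicity"))
```

Material gaps across subgroups would then feed into the governance review described above; which metrics to track and what size gap warrants action are policy decisions for the organization, not something the code determines.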

Medical Liability and Standard of Care Considerations

AI use in clinical practice raises novel liability questions. If an AI algorithm misses a diagnosis or provides incorrect recommendations, who bears responsibility: the treating physician, the healthcare organization or the algorithm developer?

Courts have not yet definitively resolved AI liability issues, but several principles are emerging. Physicians retain ultimate responsibility for patient care decisions and cannot delegate medical judgment to AI. The standard of care, however, may evolve to incorporate AI use where tools become widely adopted, and failure to use available AI tools could potentially constitute negligence in some contexts. Documentation of AI tool use and of physician reasoning in overriding recommendations will be important in defending malpractice claims. In addition, healthcare organizations face potential institutional risk for negligent selection of AI tools without adequate validation, failure to train staff on appropriate AI use, inadequate monitoring of AI performance and continued use after known performance problems.

Healthcare organizations should establish AI governance committees to review tool selection and deployment, conduct rigorous validation before clinical deployment, implement monitoring systems to detect performance degradation, develop policies on AI use documentation and override procedures and review professional liability insurance coverage for AI-related risks.
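
As one hedged sketch of what a monitoring system for performance degradation might involve, the example below compares a rolling window of adjudicated real-world results against a baseline established at validation and flags when performance falls outside an agreed tolerance. The baseline, tolerance, window size and the notion of an "adjudicated case" are illustrative assumptions, not regulatory requirements.

```python
# Illustrative sketch only: flag possible performance degradation by comparing
# a rolling real-world metric against a validation baseline. The metric,
# window size and tolerance are hypothetical governance choices.
from collections import deque

class DegradationMonitor:
    def __init__(self, baseline: float, tolerance: float, window: int = 200):
        self.baseline = baseline              # e.g., accuracy observed at validation
        self.tolerance = tolerance            # allowed absolute drop before alerting
        self.results = deque(maxlen=window)   # 1 = AI correct, 0 = AI incorrect

    def record(self, ai_correct: bool) -> bool:
        """Record one adjudicated case; return True if an alert should fire."""
        self.results.append(1 if ai_correct else 0)
        if len(self.results) < self.results.maxlen:
            return False  # wait for a full window before judging
        observed = sum(self.results) / len(self.results)
        return observed < self.baseline - self.tolerance

# Hypothetical usage: alert if observed accuracy drops more than 5 points
# below the 90% seen during validation, over the last 200 adjudicated cases.
monitor = DegradationMonitor(baseline=0.90, tolerance=0.05, window=200)
for case_correct in [True] * 150 + [False] * 50:
    if monitor.record(case_correct):
        print("Performance below tolerance; escalate to AI governance committee.")
        break
```

In practice, organizations might track several metrics this way (for example sensitivity, override rates or alert volumes) and route alerts to the governance committee rather than printing to a console.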

Implications for Healthcare Stakeholders and Future Outlook

Several policy issues will shape AI's future in healthcare: how to integrate clinical decision support software, how to enable adaptive algorithms while ensuring safety, how to facilitate data access for AI development while protecting privacy, how to address liability for AI-assisted decisions and how to promote algorithmic fairness and health equity.

Healthcare organizations should monitor regulatory developments, build institutional capabilities for AI governance, invest in workforce training on AI use and limitations and develop strategic plans for AI adoption aligned with clinical and financial objectives.

  • Health Systems and Healthcare Providers: Develop AI governance frameworks and oversight committees, and conduct rigorous validation and bias assessments before clinical deployment. Clinical staff should be trained on appropriate AI use and limitations, and organizations should monitor real-world AI performance and continuously evaluate AI-related patient outcomes.
  • AI Developers and Health Technology Companies: Engage FDA and CMS early and plan a comprehensive regulatory strategy. Developers should prioritize diverse training data and fairness assessments while also creating post-market surveillance capabilities. Building evidence of clinical utility to support reimbursement and addressing liability and insurance considerations in business models should be an area of ongoing focus during development.
  • Physicians and Clinical Professionals: Clinicians have an inherent duty to maintain professional skepticism and clinical judgment when using AI tools. Transparency in AI performance metrics and understanding of the reasoning behind clinical decision recommendations are important for delivering safe healthcare.
  • Health Plans and Payers: The coverage environment for AI-enabled services is highly fluid. Plans and payers are encouraged to develop coverage policies for AI-enabled services based on solid clinical evidence. Value-based contracts incorporating AI may help improve quality and efficiency, and it will be important to monitor AI impact on care patterns and costs.
  • Healthcare Investors: The regulatory and reimbursement pathways for AI-related investments will have a significant impact on portfolio companies. Investors will need to conduct diligence on algorithm validation, bias assessment and clinical evidence to evaluate the competitive position of AI healthcare technologies, while also understanding potential liability implications for portfolio companies.

Artificial intelligence in clinical medicine presents a transformative opportunity in a complex regulatory and reimbursement atmosphere. Success requires coordination across clinical, technical, regulatory and business domains. Akin Gump's healthcare policy, regulatory and digital health practices can help advise clients on AI strategy, FDA regulation, reimbursement advocacy and clinical integration.

