Does AI Care About Caremark? Applying the Core Principles of Corporate Governance to Artificial Intelligence Integration

It started with a routine task: a mid-sized publicly traded company was preparing a quarterly earnings call. An internal team, aiming to streamline the chief executive officer’s (CEO) talking points, employed an artificial intelligence (AI) tool to draft responses to anticipated investor questions, including those related to climate risk disclosures. The generated answers appeared confident and data-rich, citing regulatory provisions and market data. None of it was thoroughly vetted. One statement in particular mischaracterized the company’s compliance with emerging disclosure requirements promulgated by the U.S. Securities and Exchange Commission (SEC). When analysts and regulators identified the error, the company faced an inquiry, a dip in share price and intense scrutiny from shareholders. What began as a time-saving measure quickly escalated into a governance crisis, undermining the company’s financial reporting credibility and raising serious questions about how management had supervised the use of AI and whether the board had fulfilled its duty of oversight.
Embarrassing and costly incidents like this are legion, and they underscore the pressing need for robust governance and oversight mechanisms when deploying AI tools. AI promises to revolutionize how we work by increasing productivity and dramatically reducing costs; however, the technology is far from perfect. The results generated by AI tools are heavily dependent on a user’s prompt (garbage in, garbage out). The technology misses things. It hallucinates.1 It makes errors of analysis and judgment that users cannot identify without the subject matter expertise and time-intensive critical thinking that AI purports to make unnecessary.
These risks weigh particularly heavily on corporate boards because the consequences of making a wrong or uninformed decision can result in embarrassment, financial loss for the company and, in the worst-case scenario, credible claims for breach of fiduciary duties. The foundational Caremark case2 established the principle of Delaware law that boards, among their other fiduciary duties, must undertake a duty of oversight to make a good faith effort to put into place a reasonable board-level system of monitoring and reporting.3
In Caremark, the Delaware Court of Chancery held that, even when directors of a Delaware corporation are exculpated from liability for a breach of the duty of care or attention,4 they may nonetheless be held liable under a breach of loyalty theory if such directors have (1) utterly failed to implement any reporting system or controls and (2) consciously failed to monitor such a system. The bar for culpability is high: “only a sustained or systematic failure of the board to exercise oversight such as an utter failure to attempt to assure a reasonable information and reporting system exists will establish the lack of good faith that is a necessary condition to liability.”5 But the novelty and rapid pace of AI adoption make it plausible that even otherwise well-governed companies may lack the minimum reporting systems required for boards of directors to satisfy their duty of oversight. Under Caremark, a patchwork system of oversight at the operations level is not sufficient; boards of directors must take an enterprise-level view of AI risk.
In other areas, corporate law or securities regulation creates a baseline above which directors can reasonably expect not to be deemed “utter failures.” Public company audit committees, for example, must have charters6 and be composed of independent directors.7 Those companies must also adopt and publicly disclose codes of ethics and policies prescribing guidelines for insider trading, related party transactions and clawback of compensation in the event of an accounting restatement. But regulation of artificial intelligence is fractured and lags behind this quickly evolving technology. As a result, there is no baseline level of compliance that would allow directors to feel wholly confident that they have adequately fulfilled their fiduciary duties. Instead, in the absence of clear regulatory guidelines, boards of directors must fall back on the fundamental principles of corporate governance: Take responsibility. Take charge. Take care.
Take Responsibility
Building a Culture of Accountability Starts at the Top
Corporate boards and managers are now expected to demonstrate their commitment to innovation and efficiency by adopting AI tools across their organizations.8 But as even the most diehard AI advocates will tell you, “[a] robust human review and fact-checking system can prevent AI tools from compromising the reliability of the organization’s decisions and services. A human-centric approach to AI adoption enhances professional capabilities while maintaining the necessary preeminence of human oversight and judgment.”9 The necessity of AI integration may be non-negotiable, but so too is the necessity of stringent oversight and clear accountability.
Equipping your organization to develop, deploy and use AI responsibly starts with education in what AI can do (and, importantly, what it can’t). From the C-suite to the newest hire, employees need practical, role-based training to maximize productivity gains from efficient AI usage and to avoid the embarrassing missteps10 that can occur when users put too much faith in a technology that is still immature. Managers should receive additional instruction on oversight responsibilities.
Human-centered oversight and meaningful training cannot be accomplished with isolated or ad hoc AI deployments within individual teams or functions. Because AI tools increasingly influence decisions across the enterprise, companies should avoid fragmenting their use of AI and instead adopt a coordinated, enterprise‑wide governance framework that aligns education, oversight and risk management across the organization. A unified approach enables consistent standards for human review, clearer lines of responsibility and more effective escalation of issues as they arise, while reducing the risk that uncoordinated use or uneven training will expose the company to operational, legal or reputational harm. This positions boards and management to demonstrate meaningful oversight of AI‑related risks.
At the same time, management owes it to employees and to the business’s future to ensure that employees are learning the basics. Employees must have the substantive skills and practical knowledge required to do their jobs today and in whatever future the AI-assisted workplace creates. Equipped with the fundamental expertise of their profession, employees will be better positioned to integrate AI tools safely and oversee them effectively. All employees should understand that they remain accountable for work product, judgment and decision-making even when using AI tools.
Boards and management must also wrestle with the ethical pitfalls of AI with their corporate values in mind, including the privacy and cybersecurity implications that accompany AI deployment.11 Environmental impacts, discrimination and potential misinformation, and the collection, use and protection of personal and sensitive data are all critical considerations as companies deploy AI tools across their organizations, and they often arise well before a company confronts the downstream liabilities associated with cybersecurity breaches, data leaks or misuse of AI systems. These overlapping concerns underscore that the practical, operational, financial, legal and ethical implications of AI cannot and should not be ignored. By taking these risks seriously and clearly communicating how AI tools align with corporate culture, values and risk tolerance, boards can set a tone at the top that insists on the thoughtful, secure and responsible adoption of these cutting-edge technologies.
[Prompts for Board Discussion: Take Responsibility]
Take Charge
Proactive Oversight Requires Thoughtful Policies and Procedures
A comprehensive, up-to-date corporate policy governing the development, deployment and use of AI is a critical tool in shaping AI adoption to maximize positive engagement and minimize risk. Far from discouraging AI adoption with a litany of dire warnings, an effective AI policy should act as a mission statement to clearly frame how AI supports the company’s strategic business goals and cultural values across the AI lifecycle. Such a policy should identify explicitly approved AI tools and use cases, establish guardrails for internal and third-party development, and provide practical guidance for users, including usage examples and prompts for large language models that improve the quality of outputs, facilitate verification against reliable sources and reinforce human oversight.
As a corollary, there should be meaningful cautions and warnings, presented alongside resources and reporting channels, to ensure any problems are brought to management’s attention and resolved by a competent team with appropriate expertise. Boards should periodically review and update policies to ensure compliance with evolving regulatory requirements and industry best practices, including with respect to regulations and practices that may differ (or even conflict) across jurisdictions.
Implementing and enforcing effective AI policies and procedures will require appropriate subject matter expertise among the company’s directors. Consider whether to make experience with AI (and related cybersecurity and data protection topics) a criterion for at least one director sitting on the company’s audit, risk or corporate governance committee, and whether to add responsibility for AI oversight to that committee’s charter. In addition, boards should set aside time for periodic updates and continuing education regarding AI matters within the market, including how AI is being used within a company’s specific sector.
Next, specific business units should refresh their standard contracts and internal protocols to ensure they adequately mitigate AI-related risks. Confidentiality and data-use agreements with employees, contractors, suppliers and customers should, for example, include explicit prohibitions on uploading personal, sensitive or confidential data to online AI tools and large language models. Suppliers and customers, particularly those providing AI-enabled products or services or processing data on the company’s behalf, should be screened for proper AI risk management compliance. Mergers and acquisitions (M&A) due diligence also should include a focus on AI-related red flags, such as training or deploying AI systems using personal data without proper notices, consent or lawful basis; in breach of the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), the General Data Protection Regulation (GDPR) or newer state privacy and AI regimes like the Colorado AI Act and the California Consumer Privacy Act (CCPA); or in safety-critical, employment, credit, insurance, health or other “high-risk” contexts with no human-in-the-loop review, audit trails or documented risk classification.12
[Prompts for Board Discussion: Take Charge]
Take Care
High-risk Topics Demand Particular Attention
We have shared a schadenfreude-tinged laugh at professionals, companies and even government agencies13 that made embarrassing mistakes after trusting an AI tool without verification. But the reputational and financial damage to a company that abdicates its responsibility to use AI safely can be serious. To avoid being the punch line of a very expensive joke, corporate boards should reflect carefully on industry- and profession-specific areas of risk connected with AI adoption and tailor their training, policies and governance structures to those particular risks.
For example, AI-enabled data leaks or cybersecurity breaches are an emergency for corporate boards, as undisciplined or naïve use of new technology can lead to significant human cost and material financial, reputational and even legal damage. Cybersecurity breaches can occur, for example, via internal breaches of AI best practices (for instance, an employee uploading personal or sensitive data onto commercially available online large language models) or via the use of AI tools by malicious actors to launch more effective and numerous cybersecurity attacks.14 Leadership must ensure that cybersecurity and data protection policies and practices are robust enough to protect against the increased risks that AI poses. In the meantime, consider increasing your insurance coverage.
For public companies, another key priority is getting disclosure right. The SEC and investors are increasingly focused on AI-related disclosure from a variety of angles. For instance, on December 4, 2025, the Investor Advisory Committee of the SEC issued a formal recommendation that companies disclose (i) how they define “Artificial Intelligence”, (ii) any board-level mechanisms for overseeing the deployment of AI and (iii) if material, how they are deploying AI and the effects of AI deployment on internal business operations and consumer-facing matters.15 Proxy advisory firm Glass Lewis16 included in its 2026 policy guidelines recommendations for companies to disclose information relating to their AI deployment, risks and governance. In light of increased regulatory and investor attention, public company disclosure must be calibrated to avoid any suggestion of AI washing17 or inadequate disclosure18 of AI-related risks. Companies may also find themselves under increased pressure to justify significant investment in AI tools or strategic pivots into AI products and services if fears of an AI bubble19 prove to be true. And if this seems like walking a tightrope, remember: “Everything is Securities Fraud? (with Matt Levine)”20.
Other key industry-specific risks include:
- Potential data breaches caused by the use of AI in sectors with significant access to sensitive personal data such as finance and health.
- Lack of human oversight over AI decision-making in safety-critical industries such as transportation and infrastructure or when using personal or sensitive information particularly in high-risk contexts.21
- Plagiarism or intellectual property violations caused by AI content generation in creative industries and publishing.
- De-skilling caused by over-reliance on AI in highly specialized industries such as law, medicine and finance.
- Algorithms making discriminatory decisions in credit, underwriting and pricing, particularly in the financial and insurance sectors.
- AI-driven robotics in manufacturing that lack adequate safety controls to protect human workers, causing workplace injuries.
[Prompts for Board Discussion: Take Care]
Conclusion
Whether you view AI as a groundbreaking tool to fuel a productivity revolution or a dangerous, incompetent evolutionary step in technology’s march toward superhuman dominance, corporate boards must prioritize safe adoption through thoughtful risk oversight policies, robust (and ongoing) educational programs and a focus on high-risk areas from the very top of the organization. That prioritization must be coupled with clear oversight structures and accountability mechanisms that define who is responsible for AI-related decisions, risk management and escalation. By applying the fundamentals of good corporate governance to this emerging technology, including meaningful board-level monitoring and management accountability, corporate boards can fulfill their fiduciary duties, ensuring that their organizations move confidently into AI adoption while thoroughly mitigating attendant risks.
1 See, e.g., “Think Article: What Are AI Hallucinations?” IBM Think and “ChatGPT: What Are Hallucinations and Why Are They a Problem for AI Systems” BernardMarr.com.
2 See, e.g., our articles “In re Boeing: Revisiting Potential Director Liability Exposures” and “Mission Critical: Revisiting the Board’s Oversight Role After In re Boeing Co.”.
3 In re Caremark Int’l Inc. Derivative Litig., 698 A.2d 959 (Del. Ch. 1996).
4 The duty of care governs the adequacy of directors’ decision‑making processes and may be exculpated under DGCL § 102(b)(7), whereas the duty of loyalty requires directors to act in good faith and in the corporation’s best interests and is not subject to exculpation. This means that oversight failures constituting negligence are exculpable, but a sustained or conscious failure of oversight amounting to bad faith gives rise to non‑exculpable liability under a loyalty theory. See Stone v. Ritter, 911 A.2d 362, 369–70 (Del. 2006). See also “Speaking Sustainability: Navigating Delaware Law and Directors’ Duties”.
5 Id.
6 See, e.g., Item 407(d) of Regulation S-K.
7 See, e.g., Section 303A.07 of the NYSE Listed Company Manual and Section IM-5605-3(2) of Nasdaq’s Listing Rules.
8 See, e.g., “The state of AI in 2025: Agents, innovation, and transformation” McKinsey, November 5, 2025 (reporting, among other things, that 88% of respondent companies have used AI in at least one corporate function, compared to 50% five years ago).
9 See, e.g., “Ethics of Artificial Intelligence”.
10 See, e.g., “AI Assistants Make Widespread Errors About the News, New Research Shows” and “AI ‘Hallucinations’ in Court Papers Spell Trouble for Lawyers”.
11 See, e.g., “2026 Benchmark Policy Guidelines” (“Companies that use or develop AI technologies should consider adopting strong internal frameworks that include ethical considerations and ensure they have provided a sufficient level of oversight of AI.”).
12 The Colorado Artificial Intelligence Act, Colo. Rev. Stat. § 6‑1‑1701 et seq. (2024), imposes risk‑management, documentation and impact‑assessment obligations for certain “high‑risk” AI systems used in consequential decision‑making, while the CCPA, Cal. Civ. Code § 1798.100 et seq., regulates the collection, use, and disclosure of personal data and provides the primary statutory framework governing the lawfulness of AI training and deployment involving consumer information. The CCPA is supplemented by regulations issued by the California Privacy Protection Agency governing automated decision‑making technology (ADMT), which impose notice, access, opt‑out, and risk‑assessment requirements for certain AI‑driven decision processes.
13 See, e.g., “White House Health Report Included Fake Citations”.
14 Examples of internal policy failures include engineers uploading source code and internal materials to OpenAI’s ChatGPT, resulting in the exposure of proprietary information and prompting company‑wide restrictions on generative AI use. An example of an external attack enhanced by AI includes the documented use of generative AI by cybercriminals and state‑backed threat actors to create highly convincing phishing and social‑engineering campaigns at scale, automate malware development and personalize attacks, as reported by Google’s Threat Intelligence Group, and other cybersecurity firms observing a sharp increase in AI‑enabled phishing, business email compromise and adaptive malware techniques.
15 “Recommendation of the SEC Investor Advisory Committee Regarding the Disclosure of Artificial Intelligence’s Impact on Operations”, Investor Advisory Committee (December 4, 2025).
16 “2026 Benchmark Policy Guidelines” (“[A]ll companies that develop or employ the use of AI in their operations should provide clear disclosure concerning the role of the board in overseeing issues related to AI, including how companies are ensuring directors are fully versed on this rapidly evolving and dynamic issue.”).
17 See, e.g., “What Is AI Washing and Why Companies Need to Stop Exaggerating Their AI Prowess”.
18 See “AI Risk Disclosures in the S&P 500: Reputation, Cybersecurity, and Regulation”.
19 See “Something Ominous Is Happening in the AI Economy”.
20 Matt Levine, “Everything is Securities Fraud? (with Matt Levine)” (June 26, 2019).
21 Under the California Privacy Protection Agency’s new ADMT regulations, a system is regulated as ADMT where it replaces or substantially replaces human decision-making, meaning the business relies on the system’s output without human involvement. Human involvement requires that a human reviewer understands how to interpret the output, independently analyzes it along with other relevant information and has authority to change or override the decision. Where such human involvement is present, the ADMT requirements may not apply; where it is absent, businesses must comply with ADMT notice, opt‑out and access obligations. Violations of applicable ADMT requirements are subject to enforcement by the California Privacy Protection Agency or Attorney General, with civil penalties of up to $2,500 per violation or $7,500 per intentional violation, as well as injunctive relief.