The rapid adoption of artificial intelligence across Australian businesses has created a new frontier for directors' duties. Boards that once focused on financial oversight and strategic direction must now grapple with algorithmic decision-making, data governance, and the ethical implications of deploying AI systems that affect employees, customers, and the broader community.
Under sections 180 to 184 of the Corporations Act 2001 (Cth), directors owe duties of care and diligence, duties to act in good faith in the best interests of the corporation, and duties not to improperly use their position or information. These duties are technology-neutral: they apply whether decisions are made by humans or assisted by machines. However, their practical application is being reshaped by the AI revolution.
The duty of care and diligence under section 180 requires directors to inform themselves about the AI systems their organisations deploy. This means understanding, at a governance level, how these systems work, what data they consume, and what risks they present. A director who rubber-stamps an AI deployment without adequate inquiry may be exposed to a breach of duty claim if that system causes harm.
Western Australian businesses operating in the resources sector face particular challenges. AI is increasingly used in mine planning, environmental monitoring, and safety systems. A failure in any of these areas can have catastrophic consequences — both human and financial. Directors must ensure that adequate oversight frameworks are in place and that AI systems are subject to regular audit and review.
The business judgment rule under section 180(2) provides a safe harbour for directors who make a business judgment in good faith and for a proper purpose, inform themselves about the subject matter to the extent they reasonably believe appropriate, and rationally believe the judgment is in the best interests of the corporation. To rely on this protection in the context of AI, directors should obtain independent expert advice on AI risks, document their decision-making process, and implement ongoing monitoring mechanisms.
We recommend that boards establish an AI governance framework that includes clear policies on AI procurement and deployment, regular reporting to the board on AI performance and incidents, independent audits of high-risk AI systems, and AI literacy training for directors. The cost of inaction is significant, not only in regulatory exposure but also in reputational risk, in an era where stakeholders expect responsible AI governance.