Effective AI Governance for Boards
Boards can ensure robust AI governance through data management, integration, ethics, and cybersecurity practices to meet regulatory standards while also creating corporate value.
Build a strong data foundation for effective AI use.
Integrate AI responsibly, maintaining a solid culture of ethics and risk management.
Extend cybersecurity practices to address AI vulnerabilities.
This article is written by Lynn Warneke, who brings a fresh perspective as a guest author.
Australia’s prudential regulator, APRA, recently supported entities testing generative AI, highlighting the potential benefits for businesses and customers. Specifically, APRA advised that entities with robust technology platforms and a strong risk management track record should feel confident experimenting with AI. At the same time, those lacking in these areas should proceed with caution.
A recent post on the Harvard Law School Forum on Corporate Governance outlines four ways boards can provide effective AI governance and support their companies in gaining strategic advantage:
1. Build a Strong Foundation for Data Use
Accurate, legally sourced, and well-maintained data is essential for effective AI. Boards must ensure their organisations establish data quality, integrity and reliability processes and systems. Data governance includes clear data management policies and ownership, regular audits, and compliance with relevant regulations such as the Privacy Act.
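To make this concrete, here is a minimal sketch of the kind of automated data-quality check a data governance program might run as part of regular audits. The column names, thresholds and pandas-based approach are illustrative assumptions, not prescriptions from the article.

```python
import pandas as pd

# Illustrative data-quality audit: completeness, duplicates and freshness checks.
# Column names and thresholds below are hypothetical placeholders.
def audit_dataset(df: pd.DataFrame, key_column: str, date_column: str) -> dict:
    report = {
        # Share of missing values per column (completeness)
        "missing_ratio": df.isna().mean().to_dict(),
        # Duplicate records on the business key (integrity)
        "duplicate_keys": int(df.duplicated(subset=[key_column]).sum()),
        # Age of the most recent record, in days (freshness)
        "days_since_update": (pd.Timestamp.now() - pd.to_datetime(df[date_column]).max()).days,
    }
    # Flag the dataset for review if any check breaches a simple threshold
    report["needs_review"] = (
        any(v > 0.05 for v in report["missing_ratio"].values())
        or report["duplicate_keys"] > 0
        or report["days_since_update"] > 30
    )
    return report
```

In practice, a report like this would feed a recurring data governance dashboard rather than a one-off check, so trends in quality are visible to management and, in summary form, to the board.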
2. Support the Integration of AI
Boards play a pivotal role in encouraging a measured pace of AI integration, allowing for experimentation and pilots within clear guardrails. This approach helps mitigate risks while fostering innovation. Establishing a structured framework for AI deployment, including strategic alignment, success metrics, and risk mitigation strategies, can help ensure that AI initiatives align with business goals and regulatory requirements.
3. Weave Responsible AI into the Company Fabric
Boards must set the tone from the top, establishing an appropriate risk culture and demonstrating a commitment to responsible AI practices. This begins with developing ethical principles for the company’s use of AI, and fostering a culture of accountability and transparency.
4. Enhance Board Oversight Effectiveness
Having the right expertise available, within the right board or committee structure, ensures that meaningful discussions about technology risks and opportunities take place. Boards may consider including AI and technology experts as advisors to provide insights into complex AI issues. Regular training sessions and workshops on AI developments and risks can also enhance board members’ understanding and oversight capabilities. Appointing directors with appropriate backgrounds and expertise to the board should also be considered, particularly where AI will be a strategic differentiator for the company.
In addition to these core principles, there are further priorities that boards should focus on to ensure comprehensive AI governance:
5. Extend Cybersecurity Scope to Include AI
Boards should understand AI’s cyber vulnerabilities and how to harness its defensive strengths. This includes integrating AI-specific security measures into the broader cybersecurity framework, conducting regular security risk assessments, and staying informed about emerging AI threats and mitigation strategies. AI can also enhance cybersecurity by automating threat detection and responding to vulnerabilities more rapidly and effectively.
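As a rough illustration of the defensive side, the sketch below flags anomalous network events with scikit-learn's IsolationForest. The feature set (bytes transferred, request rate, failed logins) and the synthetic data are assumptions for demonstration only, not a production detection pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative AI-assisted threat detection on hypothetical network features:
# [bytes transferred (KB), requests per minute, failed logins].
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500, 20, 1], scale=[100, 5, 1], size=(1000, 3))
suspicious = np.array([[5000, 300, 40]])  # an obvious outlier for demonstration

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# predict() returns -1 for anomalies and 1 for normal points
events = np.vstack([normal_traffic[:5], suspicious])
for event, label in zip(events, detector.predict(events)):
    if label == -1:
        print(f"Flag for review: {event}")
```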
6. Establish Strong AI Procurement Practices
Both providers and deployers of AI carry accountabilities and could each be liable for harms arising from corporate misuse; boards should be alert to these legal grey areas. Establishing rigorous procurement processes and clear contractual agreements with AI vendors may help delineate responsibilities and liabilities. Expert legal advice may be warranted.
7. Prioritise Staff Training and Workforce Planning
Immediate capability development and long-term workforce transition planning are crucial in the digital era. Boards should advocate for continuous AI education and training programs, ensuring employees have the skills to work alongside AI technologies. Additionally, planning for workforce transitions, including reskilling and upskilling initiatives, will be needed to manage the significant impact of AI on organisational capability, resilience and productivity.
8. Integrate AI into Sustainability Strategies
Consider the environmental impact of data storage, processing, and technology use. Boards should ensure that AI initiatives align with the company’s sustainability goals and explore energy-efficient AI technologies and practices. This includes assessing the carbon footprint of AI operations and implementing measures to reduce environmental impact, such as optimising data centres, investing in renewable energy sources and responsibly managing e-waste.
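A back-of-envelope estimate can make the carbon footprint assessment tangible. All figures in the sketch below (GPU power draw, hours, PUE, grid intensity) are assumed placeholders that a company would replace with its own measured data.

```python
# Back-of-envelope carbon estimate for an AI workload.
# Every figure below is an illustrative assumption, not sourced from the article.
gpu_count = 8          # GPUs used by the workload
gpu_power_kw = 0.4     # average draw per GPU, in kilowatts (assumed)
hours = 720            # one month of continuous operation
pue = 1.4              # data-centre power usage effectiveness (assumed)
grid_intensity = 0.7   # kg CO2e per kWh for the local grid (assumed)

energy_kwh = gpu_count * gpu_power_kw * hours * pue
emissions_kg = energy_kwh * grid_intensity
print(f"Estimated energy: {energy_kwh:,.0f} kWh; emissions: {emissions_kg:,.0f} kg CO2e")
```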
Adding to these perspectives, Josh Rowe, an AI consultant with Time Under Tension, emphasises the importance of continuous monitoring and regular audits for AI systems. “It’s crucial for organisations to regularly evaluate AI outputs for biases and ensure compliance with evolving regulations. Transparency and accountability are key to maintaining trust and achieving long-term success with AI,” says Rowe.
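As a deliberately simplified illustration of Rowe's point about regularly evaluating AI outputs for bias, the sketch below compares positive-outcome rates across groups. The column names and the 0.1 disparity threshold are hypothetical; real monitoring would use the organisation's own fairness metrics and escalation rules.

```python
import pandas as pd

# Illustrative bias check: compare positive-outcome rates across groups
# (a simple demographic-parity gap). Names and threshold are hypothetical.
def demographic_parity_gap(outputs: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    rates = outputs.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

outputs = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "A"],
    "approved": [1, 1, 0, 1, 0, 1],
})
gap = demographic_parity_gap(outputs, "group", "approved")
if gap > 0.1:
    print(f"Disparity of {gap:.2f} exceeds threshold; escalate for review")
```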
These perspectives from APRA and EY, as detailed in the Harvard Law School Forum, point to a model that boards can apply to AI governance. For more detailed insights, refer to the Harvard Law School Forum and APRA’s remarks to the AFIA Risk Summit 2024.
Taking a comprehensive and methodical approach to AI governance should enable boards to navigate the complexities, challenges and opportunities successfully. By addressing the key areas outlined above, boards can help their organisations leverage AI responsibly, securely and sustainably, while creating operational efficiencies, fostering innovation and ultimately driving strategic advantage.
About the Author
Lynn Warneke is the Chair of South East Water Corporation and a Non-Executive Director of Spirit Technology Solutions (ASX:ST1), with extensive experience across regulated industries and high-growth sectors, including Utilities, Education, Government, Technology & Cyber Services, Telecommunications, and FinTech. Specialising in digital strategy, AI and data governance, cybersecurity, and emerging tech regulation, she also serves on the ACS AI Ethics Committee and mentors start-ups at Stone & Chalk. With a COO/CIO and Deputy CDO background, Lynn has led award-winning digital and data strategies, driving tech-enabled growth and business performance. She strongly advocates for technology ESG, diversity and inclusion, and digital era governance, and regularly shares her expertise on LinkedIn.
Thanks for sharing your insights and perspectives. Wholeheartedly agreed that boards need to place greater emphasis on AI governance, particularly given the myriad risks that AI creates or attaches itself to. Through our deep AI audits and in presenting the findings to executives and boards, we often find that compliance and ethics risks are discussed, with initiatives to manage them underway. But we also often see inadequate visibility of operational, financial and value risks, and a lack of management capability to mitigate them.