GOVERNING THE MACHINE: HOW TO NAVIGATE THE RISKS OF AI AND UNLOCK ITS TRUE POTENTIAL

AI is no longer a future concern – it is already reshaping our world. Leaders across sectors are racing to deploy generative AI and AI agents to stay competitive, cut costs and innovate. Research by Ernst & Young (EY) suggests that 75% of companies are now using generative AI, but only a third have responsible controls in place. It’s clear that deployment has outpaced governance, posing significant legal, operational and reputational risks to organisations and undermining trust. These mission-critical challenges are tackled with remarkable clarity and optimism by Ray Eitel-Porter, Dr Paul Dongha and Miriam Vogel in their groundbreaking new book ‘Governing the Machine: How to navigate the risks of AI and unlock its true potential’ (Bloomsbury Business).
The authors are widely regarded as leading voices in responsible AI and governance. Eitel-Porter is a Senior Research Associate at the Intellectual Forum, Jesus College, Cambridge, and the former global head of Accenture’s Responsible AI practice; he now advises multinationals and the public sector on AI governance. Dongha leads Responsible AI and AI Strategy at NatWest Group, one of the UK’s largest banks, ensuring innovation balances value creation with regulatory and customer protection. Vogel, as President and CEO of EqualAI and inaugural Chair of the US National AI Advisory Committee, champions AI governance and literacy to shape policy and safe adoption globally.
Governing the Machine is the culmination of their collective experience advising leading companies, institutions and governments. The book draws on numerous case studies and interviews with senior AI executives at some of the world’s largest companies, and includes a foreword by AI veteran Andrew Ng, founder of deeplearning.ai. It has been described as a ‘timely and landmark book’ offering a global perspective grounded in best-in-class governance design, technical expertise, and deep policy and legal insight.
Written as a practical, step-by-step guide built to endure even as technology rapidly shifts, the book presents a framework addressing nine core categories of risk: accuracy and reliability; fairness and bias; interpretability, explainability and transparency; accountability; privacy; security; intellectual property and confidentiality; workforce; and environment and sustainability.
Having reviewed and synthesised leading AI risk frameworks, the authors propose a comprehensive, adaptable and ethical approach to risk mitigation that can be tailored to the size and maturity of any organisation.
Securing trust in AI is essential if employees, investors and wider stakeholders are to feel comfortable adopting and using it. In a world fraught with fearmongering over the risks posed by AI, particularly generative AI and AI agents, Governing the Machine dials down the heat with a pragmatic, positive and inherently grounded perspective. The authors remind the reader that whilst the AI technical and legal landscape evolves daily, effective approaches for managing AI risks are remarkably consistent across industries, countries and cultures. And whilst new laws are forthcoming, AI is already subject to a wide array of existing legal frameworks, from civil rights and consumer protection to privacy and product liability – all of which makes effective AI governance a business imperative.
Whether readers are just beginning their AI journey or assessing their organisation’s approach, this book is an essential read for business leaders, professional technologists, policy experts, compliance officers and anyone who wants to understand how to harness the transformative potential of AI, while navigating and mitigating its inherent risks.

