One of the most powerful new technologies in business today is artificial intelligence (AI), which is transforming operations across the board. Sales, marketing, finance and supply chain teams are all employing machine learning algorithms to automate human tasks, target customers, protect against fraud and anticipate demand, to name just a few applications. And there are benefits for responsible business, too. According to Professor Al Naqvi of the American Institute of Artificial Intelligence, AI could help remove human bias from sustainability decision-making, identify better ways to measure impacts, draw on global and dynamic data to monitor progress, and help integrate the Global Goals into a company’s overall business strategy.
But there are serious concerns about the consequences and ethics of AI use in business, ranging from data misuse and a lack of knowledge or oversight among company leaders to mass staff redundancies and automated discrimination. Sadly, most of our datasets are outdated, destructive, exploitative and rife with racism, sexism and other unsustainable logics. Setting AI loose on this data will amplify, not mitigate, our destructive capacity. Consider how error rates for facial recognition technology are higher for women and people from minority ethnic groups: the algorithms ‘learn’ by analysing millions of faces scraped from the internet, and those faces are predominantly white and male. Business has evolved to destroy well enough without AI; the question is how to use AI to change things when most of our datasets are so biased. Government policy and the law are struggling to understand and keep up with the implications of this new technology, too, so responsible businesses must look to thought leaders such as the Future of Life Institute for best-practice guidance.
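To make this concrete, here is a minimal sketch of the kind of bias audit the facial recognition example implies: measuring a model’s error rate separately for each demographic group in a labelled test set. The function, group labels and toy data are all illustrative assumptions for this sketch, not drawn from any real system.

```python
# Illustrative fairness audit: compare a model's error rate across
# demographic groups. All data below is made up for demonstration.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return {group: error_rate} for a labelled test set."""
    errors, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if pred != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical face-verification results (1 = correct match expected).
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 0, 1, 0, 0, 1]          # the model's outputs
groups = ["white_male"] * 4 + ["black_female"] * 4

for group, rate in sorted(error_rates_by_group(y_true, y_pred, groups).items()):
    print(f"{group}: {rate:.0%} error rate")

# A large gap between groups is the warning sign: it suggests the
# training data under-represents some populations, as the text describes.
```

The point is not the arithmetic, which is trivial, but the discipline: a team cannot know whether its system automates discrimination until it measures performance for each group separately.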
The charity’s ‘Asilomar principles’ cover the research-related, ethical and long-term issues of AI in 23 points, all of them relevant to business. They recommend that programmers create ‘beneficial intelligence’ that complements human agency and decision-making, rather than ‘undirected intelligence’ that replaces it. All AI systems and their decisions should therefore be transparent and auditable by a human authority. The guidance also suggests that companies using AI are responsible for ensuring it doesn’t infringe on people’s human rights, personal liberties and data privacy, and that the technology should aim to benefit and empower as many people as possible. Responsible business leaders need to embed such principles in their own firm’s AI practices, regularly reappraising their implementation to keep pace with this rapidly evolving technology and ensure it continues to align with their values and purpose.
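As one illustration of what ‘transparent and auditable by a human authority’ could mean in practice, here is a minimal sketch of an append-only decision log. The record fields, model name and file path are assumptions for the example, not any standard or the Institute’s own specification.

```python
# Minimal sketch of an auditable AI decision log, so a human authority
# can later review what the system decided and why. Fields are illustrative.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    model_version: str          # which model produced the decision
    inputs: dict                # the features the model saw
    decision: str               # what the system decided
    rationale: str              # human-readable explanation
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "ai_audit.log"):
    """Append the record as one JSON line, ready for later review."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: record a hypothetical credit decision for audit.
log_decision(DecisionRecord(
    model_version="credit-risk-v2",
    inputs={"income": 42_000, "history_months": 18},
    decision="refer_to_human",
    rationale="score 0.48 below auto-approve threshold 0.60",
))
```

What matters is not the particular fields but that every automated decision leaves a trail a person can inspect, question and overturn.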
Much has rightly been made of the dangers of a robot CEO and the dystopian ‘self-driving’ autonomous business. But such fears overlook the reality that business leaders already make extensive use of digital technology in decision-making, and that we have managed to create a dystopian present without AI. David De Cremer, in his 2020 book Leadership by Algorithm: Who Leads and Who Follows in the AI Era?, maps out this conflict and concludes that AI is unlikely to lead businesses but, if deployed responsibly, has far greater potential to administer them while avoiding the fast, intuitive ‘System 1’ biases of human judgement. How AI is designed and used by humans, though, is critical to ensuring that what it provides is beneficial intelligence for responsible leaders.