The Challenges of Optimising AI Responsibly for Business Success
Artificial Intelligence has the potential to radically change how businesses of all sizes work. Many of us already interact with AI on a daily basis, whether talking to a chatbot when contacting our bank, using a voice-activated virtual assistant, receiving TV or film recommendations from a streaming service, or relying on software that controls the temperature of our home. AI has never been in greater demand, and it is now time to focus on how businesses can use this technology responsibly and effectively for their needs.
In this webinar, the Lloyds Banking Group Centre for Responsible Business invited a panel of highly acclaimed expert speakers to provide an overview of the challenges and considerations of the responsible use of AI for businesses today.
This ambitious webinar asked each speaker to give a ten-minute overview of their area of expertise, covering topics such as the rise of AI-enabled corporate decision-making; ethical AI and AI governance frameworks, including data trusts; privacy and data protection; and the use of AI in banking. Each ten-minute presentation was packed with up-to-date information and expert insight, and offered practical tips for businesses as well as pointers to further information.
After an introduction from our Director, Professor Ian Thomson, Professor Ganna Pogrebna gave an overview of AI decision-making in the corporate world. Ganna discussed the differences between AI-based decision-making technologies currently in use and those that may appear in the future, including decision-making without the need for human input, a feat of AI that has not yet been demonstrated, and debunked myths about AI perpetuated in films. ‘Machines and AI are not currently advanced enough to make autonomous decisions alone – they work based on algorithms and predetermined functions set by engineers. In order to ensure ethical use of AI, businesses must first ensure these machines use customer information sensitively and ethically.’
Professor Sylvie Delacroix presented on the fascinating topic of data and data trusts, the use of which is quietly revolutionising the way we organise business endeavours. Businesses recognise the inherent potential in the data we gather, but there are concerns about how best to use it in ethical and responsible ways. ‘Today we have regulatory structures like GDPR. Some see GDPR as a barrier to innovation, but if you empower groups of people to know and pool their rights under the act, relying on Trust Law to create new ‘bottom-up’ legal structures, we could create significant social and economic benefits. With data trusts, it becomes possible for organisations to both protect and respect their customers’ rights over the use of their data and also unlock the rich potential in the data sets.’
Following on from the discussion of the issues around the responsible use of customers’ data, Dr Immaculate (Mac) Motsi-Omoijiade spoke about AI in the banking industry and the challenges and opportunities related to data governance. ‘When we speak about the ethical use of AI in banking, there’s a trust imperative; banks need to be hypersensitive in how they’re handling their customers’ data. There’s a heavy compliance burden. Having said this, there’s also a ‘Know your customer’ imperative for banks.’ The use of AI in banking is not new, and Mac talked through the various ways in which banks are already using this technology in areas such as customer support, credit and loan decisions, and fraud protection. As Mac went through the challenges banks face in their use of AI, she left the audience with the suggestion, based on her previous research, that ‘AI can be seen as both the curse and the cure. AI contains the solutions to solve the problems that it creates.’
In our final presentation, Dr Alexandra Giannopoulou, Attorney-in-Law and Postdoctoral Researcher at the Institute for Information Law at the University of Amsterdam, provided an overview of the challenges of AI data protection in general, and specifically the practices we should be mindful of when debating the development of AI. ‘From a regulatory perspective, there is no silver bullet when it comes to AI systems; this is due to the diverse fields of application, the fluidity of the data used and processed, and the transnational nature of data flows. Currently, data protective measures for AI systems are mainly focused on data anonymisation and ensuring valid consent. However, there is considerable compliance uncertainty surrounding these concepts when applied to AI. Thus, new pathways are being explored in order to help create ‘data protection-aware’ AI.’