AI can improve lives, but only if it’s built responsibly
Toju Duke, founder of Diverse AI, Author of 'Building Responsible AI Algorithms', and former Responsible AI Lead at Google, explores AI as a force for good.
Over 200 University staff, students and members of the community attended this year’s annual Birmingham Business School Advisory Board Guest Lecture delivered by Toju Duke.
Toju is a thought leader, author and advisor on Responsible AI with over 18 years’ experience spanning advertising, retail, not-for-profit and tech. She worked at Google for 10 years, an experience that included leading various Responsible AI programmes across Google’s product and research teams. Toju is also the founder of Diverse AI, a community interest organisation with a mission to support and champion underrepresented groups to build a diverse and inclusive AI future. She provides consultation and advice on Responsible AI practices worldwide.
In the wake of ChatGPT, AI has entered the public consciousness, 66 years after the term was first coined. While it could be a force for good, it also has several limitations, risks and challenges that could be detrimental to human lives. Toju’s lecture focused on the responsible use of AI and ways to prevent further harm, reflecting the Business School’s commitment to responsible business.
Toju began by asking the audience who embraced AI and who was against it, and seemed surprised by the number who chose the latter. She went on to talk about its practical purpose, encouraging the audience to view AI in a more positive light, since the purpose of AI, and of technology more generally, is primarily “to improve people’s lives and make the world a better place”. However, she recognised it is not without its faults.
Toju took the audience through the timeline of AI, from its beginnings in the 1600s with the invention of the first mechanical calculating machine. Noting that the “fathers of AI” are always referenced, Toju also wanted to highlight the mother of AI, Ada Lovelace, for her work on the first programmable machine in 1837. Toju went on to explain that this “already shows how misogynistic the technology is”.
This stereotyping and prejudice can be seen in the naming of AI assistants, Toju explained, which are almost always given women’s names. “It goes back to saying women are meant to be our assistants, are meant to always help”.
Eliza, the first chatbot, was created back in 1965 and led to the development of a large number of models. Bringing the audience right up to date with the latest advances in generative AI, Toju explained how the technology is not benefiting everyone. With AI here to stay, however, she emphasised the need to make the technology beneficial and fair for all, which is where responsible AI comes in.
At this point, Toju shared examples of AI used as a force for good. Toju explained how it’s being used to eradicate hunger, protect the environment, support education, and help with crisis response. Toju highlighted the specific example of ‘Project Euphonia’, a Google Research initiative focused on helping people with non-standard speech be better understood.
Toju then turned to the challenges of AI which include social inequalities, data leakages, disinformation, human rights violations, psychological safety, and privacy violations. Toju gave examples of these challenges, starting with the case of Robert Williams who was wrongfully arrested as a result of facial recognition. Another example was Amazon’s AI recruitment tool that showed a bias against women having been trained on data mostly from men.
Further examples included Apple’s credit card being considered sexist; Google’s Photos app labelling a black couple as "gorillas"; the exam results algorithm that showed a bias against disadvantaged and ethnic minority pupils; the New York lawyers fined for submitting a legal brief that included fictitious case citations generated by ChatGPT; the Korean chatbot that made offensive remarks towards minority groups; the chatbot Replika encouraging a man to assassinate the Queen; Alexa telling a 10-year-old to touch a live electrical plug with a penny; and the report earlier this year of a man who died by suicide after six weeks of talking to a chatbot that encouraged him to sacrifice himself to stop climate change.
Toju emphasised the implications of AI going wrong, revealing she often gets asked “how do I convince my manager to do responsible AI?”. As the examples show, the impacts can range from lawsuits and negative brand reputation to death.
Toju does, however, ultimately see AI as a force for good. She explained the Responsible AI Framework she has developed, which considers AI principles, data, fairness, safety, explainability, humans being kept in the loop, privacy and robustness, and AI ethics, all of which help ensure this technology is built responsibly and fulfils its purpose of improving lives.
Toju ended on the note, “AI can be a force for good, it’s really helping to improve our lives, and it’s important we understand its pros, cons, and limitations and how to stop them, and how we can use it to be a friend and not a foe.”
Having delivered a thought-provoking lecture, Toju answered a range of questions from attendees, leading to discussions around future-proofing careers against AI, the impact of AI on younger generations, AI and colonialism, labelling AI as good or bad, equipping students with the skills to work with AI, regulation, and fact-checking.
Professor Edgar Meyer, the new Dean of the Business School, closed the lecture by thanking Toju and highlighting his key takeaway: “it’s about educating people about the challenges, as well as the opportunities”. Professor Meyer said Toju had presented the Business School with the challenge of owning this space, a challenge we embrace as a School with responsible business at its heart.