Toby Walsh, Machines Behaving Badly: The Morality of AI (Melbourne, Victoria: La Trobe University Press, 2022)
Reviewed by Shaza Arif
Looking ahead over the next three decades, hardly any industry will be left untouched by Artificial Intelligence. For this reason, the impacts, challenges, and dilemmas associated with AI have become a focal point of discussion in the tech community. Authored by Toby Walsh, the book ‘Machines Behaving Badly: The Morality of AI’ is another take on this subject. Walsh is the Chief Scientist at UNSW.ai, the AI institute of the University of New South Wales (UNSW), Sydney.
From the outset, Walsh clearly asserts that AI is an impressive and helpful tool, but one that is often misused. Citing examples of individuals and tech corporations, he constructs a compelling argument about the mishandling of this technology. He explains that AI is sometimes employed as a facade to justify actions that would otherwise be deemed criminal, such as racial discrimination, automated theft, and information manipulation. After exploring the potential benefits, risks, and related concepts, the author concludes the book by advocating a limited role for machines in future decision-making.
Walsh offers a fascinating perspective on the roles of various actors engaging with technology. The categorisation of individuals and tech companies into groups with intriguing labels such as ‘transhumanists’, ‘techno-libertarians’, and the ‘new titans’, based on their perceptions of AI, is particularly thought-provoking. Here he provides clear insights into the current tech landscape, highlighting the conflicting values of society and industry: while society aspires to a welfare state, the industry seeks to increase societal dependency on its tech products and services. Using examples of leading tech giants, the book illustrates how the problematic behaviour of machines is driven by the financial interests of corporations. These interests, the author argues, will only grow over time, exacerbating the troubling behaviour of AI systems.
The author highlights an essential and timely concern: how ‘deep learning’ has garnered significant attention, overshadowing other crucial aspects of AI. Individuals working in deep learning are often hailed as the ‘Godfathers of AI’, while many others, particularly women who have made groundbreaking contributions to the field, have been unfairly sidelined. In fact, the book contains a comprehensive list of individuals who are still awaiting due recognition for their work, such as Joy Buolamwini, Timnit Gebru, Margaret Mitchell, Fei-Fei Li, and Cynthia Breazeal.
Toby Walsh urges readers not to be overwhelmed by the traditional narratives dominated by ideas of superintelligence and machines replacing humanity. Instead, he advocates viewing AI as a tool that empowers humans to foster innovation. He also challenges discussions around robot rights and responsible robots, arguing that these debates lack substantive value. The focus, he insists, should remain on human rights, which have been increasingly compromised since the advent of machines.
Although the book is concise, at just 276 pages (Kindle version) and easily readable in a single sitting, it addresses multiple subjects in a well-structured manner. Walsh’s subtle wit adds an engaging layer to the text, capturing the reader’s interest. The philosophical undertones about pain, suffering, and free will in the context of human-machine interaction make the book rather moving. Walsh maintains a balanced perspective throughout, repeatedly highlighting the misuse of technology across various sectors while also acknowledging its potential for societal benefit in those same sectors. The examples provided are concise yet impactful, free from unnecessary details. Notably, the examples related to autonomous driving offer a compelling glimpse into a transformative future. The book’s 2061 epilogue leaves a lasting impression: it makes one recognise that we are living in one of the most pivotal phases shaping the future of technology. How future generations will view the decisions being made today is the question that stuck with me the most.
Machines Behaving Badly also serves as a critical reminder to policymakers that, despite rapid innovation, human presence and oversight need to remain central across all sectors. The author underscores the inherent risks of predictive analytics, particularly in areas like algorithmic sentencing, cautioning against over-reliance on AI due to its potential for error. A significant focus is placed on the integration of AI in the military sector, where Walsh raises serious concerns about the increasing autonomy of AI-powered weapons. He warns that such developments could escalate risks, effectively bringing danger closer to our doorsteps. Throughout the book, he firmly challenges the assumption that AI systems will inherently adhere to ethical and moral standards, cautioning that such expectations are misguided. As a result, he advocates for decisive policy measures, including the imposition of bans where necessary. He argues that even partial bans could serve as practical and effective tools to manage the risks associated with AI, ensuring its development aligns with societal safety and ethical considerations.
What sets this book apart is its ability to resonate with readers at different levels of expertise. For those with a foundational understanding, it serves as an accessible yet enriching guide. For seasoned professionals, its nuanced perspectives and incisive critiques make it a compelling and refreshing read. With its engaging style and rich content, the book not only informs but also inspires readers to think critically about the role of AI in shaping our future. It’s more than just a good read – it is a meaningful investment of time and thought, well worth the effort for anyone trying to understand the complexities of our AI-driven world.
Shaza Arif is a Research Associate at Centre for Aerospace & Security Studies (CASS), Islamabad, Pakistan. She can be reached at cass.thinkers@casstt.com.