Sunday, 30 December 2018

Book review: Prediction Machines

Artificial intelligence is fast becoming one of the dominant features of narratives of the future. What does it mean for business, how can businesses take advantage of AI, and what are the risks? These are all important questions that business owners and managers need to get their heads around. So, Prediction Machines - The Simple Economics of Artificial Intelligence, by Ajay Agrawal, Joshua Gans, and Avi Goldfarb, is a well-timed, well-written and important read for business owners and managers, and not just those in 'technology firms'.

The title of the book invokes the art of prediction, which the book defines as:
[p]rediction takes information you have, often called "data", and uses it to generate information you don't have.
Students of economics will immediately recognise and appreciate the underlying message in the book, which is that:
[c]heaper prediction will mean more predictions. This is simple economics: when the cost of something falls, we do more of it.
So, if we (or, to be more precise, prediction machines) are making more predictions, then complementary skills become more valuable. The book highlights the increased value of judgment, which is "the skill used to determine a payoff, utility, reward, or profit".
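To make that definition of prediction concrete, here is a minimal sketch of my own (not from the book, and with made-up numbers): a simple regression takes information a business already has and generates a number it doesn't have.

```python
# A minimal illustration of "prediction": using information you have
# (past observations) to generate information you don't have
# (an unobserved value). All figures are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Information we have: advertising spend and the sales it produced.
ad_spend = np.array([[10], [20], [30], [40]])   # thousands of dollars
sales = np.array([120, 195, 310, 405])          # units sold

model = LinearRegression().fit(ad_spend, sales)

# Information we don't have: expected sales at a spend level never tried before.
print(model.predict(np.array([[50]])))  # predicted units sold at $50k spend
```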

The book does an excellent job of showing how AI can be embedded within and contribute to improved decision-making through better prediction. If you want to know how AI is already being used in business, and will likely be used in the future, then this book is a good place to start.

However, a couple of aspects disappointed me. I really enjoyed Cathy O'Neil's book Weapons of Math Destruction (which I reviewed last year), so it would have been nice if this book had engaged more with O'Neil's important critique. Chapter 18 did touch on it, but I was left wanting more:
A challenge with AI is that such unintentional discrimination can happen without anyone in the organization noticing. Predictions generated by deep learning and many other AI technologies appear to be created from a black box. It isn't feasible to look at the algorithm or formula underlying the prediction and identify what causes what. To figure out if AI is discriminating, you have to look at the output. Do men get different results than women? Do Hispanics get different results than others? What about the elderly or the disabled? Do these different results limit their opportunities?
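The book's suggested check (look at the outputs, not the algorithm) is straightforward to operationalise. Here is a rough sketch of my own, with entirely hypothetical data and column names, of comparing a model's outcomes across groups:

```python
# A rough sketch of auditing a model's output for group differences:
# compare approval rates across a protected attribute. Hypothetical data.
import pandas as pd

# Scored applications: 1 = approved by the model, 0 = declined.
scored = pd.DataFrame({
    "gender":   ["M", "F", "M", "F", "F", "M", "F", "M"],
    "approved": [1,   0,   1,   0,   1,   1,   0,   1],
})

# Approval rate by group; a large gap is a prompt to investigate further,
# not proof of discrimination on its own.
print(scored.groupby("gender")["approved"].mean())
```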
Similarly, judgment is not the only complement that will increase in value. Data is a key input to prediction machines, and it will also increase in value. The book does acknowledge this, but is relatively silent on the idea of data sovereignty. There is an underlying assumption that businesses are the owners of data, rather than the consumers or users of products who unwittingly give up valuable data about themselves and their choices. Given the recent furore over the actions of Facebook, some wider consideration of who owns data and how people should be compensated for sharing it (or at least, how businesses should mitigate the risks associated with their reliance on user data) would have been timely.

The book was heavily focused on business, but Chapter 19 did pose some interesting questions about AI's role in wider society. These questions need further consideration, but it was entirely appropriate for this book to highlight them while leaving the substantive answers for other authors to address. They included, "Is this the end of jobs?", "Will inequality get worse?", "Will a few huge companies control everything?", and "Will some countries have an advantage?".

Notwithstanding my two gripes above, the book has an excellent section on risk. I particularly liked this bit on systemic risk (which could be read in conjunction with the book The Butterfly Defect, which I reviewed earlier this year):
If one prediction machine system proves itself particularly useful, then you might apply that system everywhere in your organization or even the world. All cars might adopt whatever prediction machine appears safest. That reduces individual-level risk and increases safety; however, it also expands the chance of a massive failure, whether purposeful or not. If all cars have the same prediction algorithm, an attacker might be able to exploit that algorithm, manipulate the data or model in some way, and have all cars fail at the same time. Just as in agriculture, homogeneity improves results at the individual level at the expense of multiplying the likelihood of system-wide failure.
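The agriculture analogy can be made concrete with some back-of-the-envelope arithmetic of my own (the failure probabilities below are invented purely for illustration):

```python
# A back-of-the-envelope sketch of the homogeneity trade-off:
# one shared prediction algorithm versus several independent ones.
# Assume any single algorithm has a 1% chance of an exploitable flaw.
p_flaw = 0.01

# Homogeneous fleet: every car shares the one algorithm, so a single
# flaw becomes a fleet-wide failure.
p_fleet_failure_shared = p_flaw

# Fleet split across 5 independently developed algorithms: the whole
# fleet fails at once only if every algorithm happens to be flawed.
n_algorithms = 5
p_fleet_failure_diverse = p_flaw ** n_algorithms

print(p_fleet_failure_shared)    # 0.01
print(p_fleet_failure_diverse)   # 1e-10
```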
Overall, this was an excellent book, and surprisingly free of the technical jargon that infests many books on machine learning or AI. That allows the authors to focus on the business and economics of AI, and the result is a very readable introduction to the topic. Recommended!
