
We need to talk about AI

November 23, 2019
4 mins
It is paramount that we have proper discussions around AI and its social impact. Shutting down the argument by dismissing it all as snake oil is not the best way forward.

Earlier this week, this tweet appeared on my feed.

Much of what’s being sold as "AI" today is snake oil. It does not and cannot work. In a talk at MIT yesterday, I described why this is happening, how we can recognize flawed AI claims, and push back. Here are my annotated slides: https://t.co/iCpyFw5urN pic.twitter.com/pmOTI3vq8p

— Arvind Narayanan (@random_walker) November 19, 2019


Prof Narayanan's deck has reached a relatively sizeable audience and received widespread positive commentary. After all, anyone who has had any involvement with AI in the last few years can 100% relate to the catchy title of his deck.
Hey - we're just about starting to see a feeble light at the end of the blockchain tunnel, right?

But, but...
Something doesn't quite sit well with me. So I'm going to stick my neck out and respectfully reject, if not the conclusions, the way Prof Narayanan has chosen to make his point. Or rather, the way he has chosen to make his real point.

I find this deck deceptively coy: the actual message is not "How to recognize AI snake oil" but rather "Why I don't want AI anywhere near socially sensitive topics". That would have been a perfectly valid argument, and would have made for a fantastic set of slides, but it required a slightly different line of reasoning and, most importantly, being explicit about it.

At the end of the day, ML/AI is about identifying patterns in data and drawing inferences from them, so of course it can "predict the future". Google Maps does it all the time when it gives you an ETA for your journey. In Prof Narayanan's own examples, bail decisions and border control are not performed by machines, but based on the predictions of machines. It's not about predicting the future: it's called decision support, performed at a speed beyond human capability. Is the way banks and insurance companies make decisions on mortgages and premiums snake oil too?
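
To make the "decision support" point concrete, here is a minimal sketch of that pattern-to-inference pipeline in Python. It assumes scikit-learn is available, and the data, features and 0.5 cut-off are entirely made up for illustration - this is not how any real bail or lending system is built.

```python
# A minimal sketch of "prediction as decision support".
# Assumptions: scikit-learn is installed; features, data and the
# 0.5 threshold are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical data: two numeric features per case and a
# binary outcome (1 = adverse outcome observed, 0 = not).
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a new case, the model outputs a probability, not a decision.
new_case = np.array([[0.3, -1.2]])
risk = model.predict_proba(new_case)[0, 1]

# The decision itself stays with a human: the score only orders the queue.
print(f"Estimated risk: {risk:.2f} -> flag for human review" if risk > 0.5
      else f"Estimated risk: {risk:.2f} -> routine processing")
```

The point of the sketch is that the model only emits a score; a policy, human or otherwise, turns that score into a decision - which is exactly where the interesting questions live.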
Prof Narayanan seems exceedingly preoccupied with a "massive transfer of power from [...] workers to unaccountable tech companies" and points at "enormous commercial interests" that "we must resist" and "push back" against. He is of course completely entitled to give the whole argument a political spin, but I'm not sure it helps his case.

The real issue here is that some problems, especially socially impactful ones, carry high personal risks if an inference is wrong (problem 1). Minimising that risk may require large amounts of potentially sensitive data (problem 2), and in most, if not all, cases there is a strong argument for transparency, accountability and a human in the loop, which makes such propositions far more expensive and challenging from both a technical (problem 3) and a governance (problem 4) perspective - a sketch of what that looks like in practice follows below.
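
As an illustration of problems 1, 3 and 4 together, here is one common mitigation pattern, again only as a sketch: automate the cases the model is confident about and escalate the rest to a person. The thresholds and the function name are my own illustrative assumptions.

```python
# A sketch of a human-in-the-loop policy: only high-confidence cases
# are automated. Thresholds are illustrative assumptions, not
# recommendations.
def route_decision(score: float,
                   auto_approve: float = 0.9,
                   auto_reject: float = 0.1) -> str:
    """Map a model score in [0, 1] to an action.

    Cases the model is unsure about are escalated to a person,
    which is precisely what makes the deployment more expensive.
    """
    if score >= auto_approve:
        return "auto-approve (log for audit)"
    if score <= auto_reject:
        return "auto-reject (log for audit)"
    return "escalate to human reviewer"

for s in (0.95, 0.5, 0.05):
    print(f"score={s:.2f}: {route_decision(s)}")
```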
Bias no doubt poses massive challenges (problem 5), not just in its identification and removal, but in its very definition: what turns common sense into bias is a function of our social and cultural values, which vary greatly across time and space. Should history be changed because it doesn't fit our Western 21st-century view of the world? I don't have an answer, but at least I'm asking the question.
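
Even identifying bias presupposes picking one definition of fairness among several mutually incompatible ones. The sketch below, on entirely made-up data, computes just one candidate definition (demographic parity); a system could pass this check and still fail another, equally defensible, metric.

```python
# One concrete illustration of why "bias" is hard to even define.
# Assumptions: the group labels and predictions below are synthetic,
# and demographic parity is only one of many possible fairness metrics.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)  # 0 / 1: two demographic groups
pred = (rng.random(1000) < np.where(group == 0, 0.40, 0.55)).astype(int)

# Demographic parity: do both groups receive positive predictions at
# similar rates? This says nothing about accuracy within each group.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.2f}")
print(f"positive rate, group 1: {rate_1:.2f}")
print(f"demographic parity difference: {abs(rate_0 - rate_1):.2f}")
```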
Then there is the topic of regulation, which can't be ignored in these cases: who establishes the boundaries? Who is meant to enforce them? Do we need universal legislation, or would locally applicable frameworks suffice? And who makes the call on what counts as an acceptable risk versus a potentially beneficial outcome - supranational bodies, governments, individuals? (problem 6)

So the better discussion to have is around the risks involved in deploying AI in socially sensitive fields, the minimum level of benefit it must bring about to make the challenges above worth facing, and how we go about having that conversation.

Just branding the whole thing as snake oil sidesteps the issue and is largely unfair, because it bypasses the actual conversations to be had and goes straight for the outcome that Prof Narayanan - in an arguably biased opinion - has decided is best for us all.
