# On Artificial Intelligence

by Paul Edmon, November 20, 2019

Currently in the realm of computer science there is no hotter topic than Artificial Intelligence (AI) and Machine Learning (ML).  However, as this slide deck from Professor Arvind Narayanan at Princeton notes, there is also a lot of snake oil going around.  Thus an understanding of AI is important in our current day and age.

The most important thing to know about AI is that it is not actually intelligent.  AI is a marketing term, and while machines appear to be learning, they really are not.  Instead, AI is, to put it crassly, a very sophisticated best-fit line generator.  Essentially, it takes data, which may or may not be actually related, and generates empirical best fits.
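To make that concrete, here is a minimal sketch of a "best-fit line generator", assuming Python with NumPy; the study-hours and exam-score numbers are invented purely for illustration.

```python
import numpy as np

# Invented data: hours of study vs. exam score.
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
scores = np.array([52.0, 58.0, 61.0, 70.0, 74.0, 81.0])

# "Training" is just finding the slope and intercept that minimize
# squared error -- an empirical best fit.
slope, intercept = np.polyfit(hours, scores, deg=1)

# "Prediction" is evaluating the fitted line at a new input.
predicted = slope * 7.0 + intercept
print(f"fit: score ~ {slope:.1f} * hours + {intercept:.1f}")
print(f"predicted score for 7 hours: {predicted:.1f}")
```

More elaborate models swap the straight line for a far more flexible function, but the underlying operation is the same: fit a function to data, then evaluate it on new inputs.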

As it turns out, this works well for a lot of things.  After all, much of science and engineering is based on empirical best fits that we know work but do not know why, or alternately on relationships we know exist but whose physics is so complicated that an empirical best fit is much easier to generate.  Empirical best fits work well for a host of things that are complicated but follow general trends.

However, there is a great danger in this as well.  Just because you can drive a best-fit line through something or generate a correlation does not mean the quantities are actually related or that one causes the other.  The classic example is the correlation between the number of pirates and global warming.  In more classical terms, these are the logical fallacies of cum hoc ergo propter hoc, post hoc ergo propter hoc, and the is-ought fallacy.
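As a sketch of how easy it is to manufacture such a correlation, the snippet below (again Python with NumPy) computes a correlation coefficient between two invented series; the pirate counts and temperature anomalies are illustrative numbers, not real measurements.

```python
import numpy as np

# Invented numbers: estimated pirate counts and global temperature
# anomalies (deg C) over the same stretch of decades.  One trends down
# and the other trends up, for entirely unrelated reasons.
pirates = np.array([45000, 20000, 15000, 5000, 400, 17])
temp_anomaly = np.array([-0.20, -0.10, 0.00, 0.15, 0.40, 0.60])

# Pearson correlation coefficient between the two series.
r = np.corrcoef(pirates, temp_anomaly)[0, 1]
print(f"correlation coefficient: {r:.2f}")  # strongly negative

# A best-fit line happily "explains" warming in terms of pirate decline,
# even though no causal link exists.
slope, intercept = np.polyfit(pirates, temp_anomaly, deg=1)
print(f"anomaly ~ {slope:.2e} * pirates + {intercept:.2f}")
```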

Beyond these logical fallacies, even if things are correlated and are in fact related, the causal chain may not be obvious.  Humans are good at deriving narratives from our own internal best-fit line generator (i.e. daily experiences, worldview, and stereotypes).  AI gives a veneer of objectivity and thus allows us to drive our own narratives through the data.  This is dangerous, as these false narratives oversimplify a very complex system in ways that are not true.  Human narratives, experiences, worldviews, and stereotypes are exceedingly useful for explaining and understanding the world around us.  However, they also have great power to distort perception, generate false conclusions, and lead to dangerous unwanted consequences.

In the end AI, like any computer algorithm or system, is only as good and accurate as the people writing and using it.  Beyond that, even if all the logic is solid and sure, logical systems are critically dependent on their assumptions.  Thus even if your logic is correct, you can arrive at the wrong answer because your basic assumptions were wrong.  This is all the more true for AI, where the data used to train the algorithms needs to be carefully curated by skilled data scientists.
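As one last sketch of how sound machinery plus a bad assumption produces a wrong answer, the example below (Python with NumPy, invented data) fits a perfectly reasonable line to measurements taken only over a narrow range and then extrapolates well outside it.

```python
import numpy as np

# Invented training data: a quantity that is roughly linear over the
# observed range (x from 0.5 to 5) but, by construction of this example,
# saturates at about y = 12 for larger x.
x_train = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
y_train = np.array([1.1, 2.0, 3.9, 6.2, 7.9, 10.1])

# The fit itself is flawless given its assumption (linearity).
slope, intercept = np.polyfit(x_train, y_train, deg=1)

# The hidden assumption fails outside the training range: extrapolating
# to x = 20 gives roughly 40, even though the true process never rises
# much above 12.  The logic was correct; the assumption was not.
prediction_at_20 = slope * 20.0 + intercept
print(f"extrapolated prediction at x=20: {prediction_at_20:.1f}")
```

The same failure appears, less visibly, whenever training data is unrepresentative of the population the model is later applied to, which is why careful curation matters.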

As with everything, skepticism should be the first step.  If it sounds too good to be true, it likely is.  If a result confirms your biases, be sure to double check the solution, as the result may be false.  AI is a powerful tool to be sure, but it is simply a tool that can be used or abused.