Introduction
A few years ago, I gave a talk at a healthcare conference organized by Computer Sweden on the importance of AI for the future of healthcare. If I remember correctly, I described a Breast Cancer Detection model I had built with the help of annotated data. Some people in the crowd were impressed, while others had seen this before and seemed more concerned with the free food served after my session. But, as so often happens, there was one guy who had paid attention and asked the right question:
“Well, we know that experts have screened all the pictures and made a call on each of them, but do you know exactly what the model does with that data?”
My first reaction was “Yeah! Of course I do. I built the damn thing!” (these words stayed in my head, of course), but I suspected there was something deeper in his question. We looked at each other with bemusement as I realized what he was getting at. Indeed, although the data is labelled (it can be annotated by anyone) and the model code is relatively easy for an analyst to follow, what happens in the different layers is anything but evident and in many cases beyond human understanding. Likewise, what leads an AI system to the decisions it makes can be far from easy to understand.
This simple but relevant question has stayed with me through the years, and it was revived today by two events. The first was a discussion I had with a couple of colleagues while we were shooting a Data Spotter YouTube channel commercial for Sopra Steria on how data can be used. The second was reading an article by Silviu-Marian Udrescu and Max Tegmark, Symbolic Pregression: Discovering Physical Laws from Raw Distorted Video.
What are the issues and why are they important?
The advances made in artificial intelligence are beyond what we could have imagined just a few decades ago, at least as something doable within our lifetime. We now know how to build ever more complicated models, applied not just in laboratory experiments but deployed in an ever-growing number of domains, from industrial settings and, to some extent, healthcare, to appliances, smartphones and social media. There are, however, a lot of questions that need to be answered about how these models learn, how they perform and whether they meet the goals that were originally set.
Indeed, the learning process of many of these models is elusive to us, at best. I would even argue that they leave many of us in complete darkness, regardless of which end of the model we sit on (developer or user). Companies that simply integrate the models into their business are at an even bigger loss of insight, since they do not even have access to the models themselves. They are at the mercy of black boxes that could potentially end their business in the blink of an eye. Putting an AI in charge of critical decision-making is a gamble that can make a company wealthy or put it out of business.
There are, however, domains in which this type of gamble cannot be allowed, as it risks endangering lives or the democratic values that we hold dear. The pressure on the penal system, for instance, has led to the idea that using AI to settle simple cases might be a solution to painstaking, time-consuming tasks that demand barely any skill. There is indeed enough data available to train an AI model to make decisions about the culpability or innocence of an individual and the punitive actions to be taken. There are, however, arguments against these kinds of tools that need to be taken seriously. While it is true that many of the issues associated with an AI judge carrying unacceptable biases could be traced to the annotated data's inherent biases against certain groups, it is not self-evident that fixing that issue (e.g. by balancing the data) would guarantee a fair AI judge. The learning process, taking into account millions of different features in some cases, is not transparent enough for any valuable human insight, and it thus becomes increasingly difficult to trust a tool of this sort. It is a paradox that an AI tool aiming to facilitate tedious and seemingly easy tasks turns out to have a learning process too obscure for us to understand. In this case, it poses a serious threat to the legitimacy of legal systems; and while injustice may exist today to some extent, we at least know whom to blame.
The same reasoning can be applied to social media platforms (e.g. YouTube, Twitter, Facebook) and search engines (e.g. Google, DuckDuckGo). We do not need to go as far as AI creating its own networks to see that the AI engines used today might be doing things they were never intended to do, or things we neither have insight into nor expected of them. There are approximately 3 billion social media users today, and the amount of data gathered is absolutely staggering. All of these platforms use AI models that analyze that data to offer us tailored searches, content, friend suggestions, and the news we should see (or not see), exposing particular types of articles instead of others. While social media companies most probably do, to some extent, steer their models according to their own preferences (commercial or political), they too are left in the dark about many aspects of the AI's learning. It poses serious problems if the opinions and choices of billions are being influenced in a way no one has any control over, or any understanding of. If any changes are made, the output of these models can be very different; within AI research this is denoted CACE, or Changing Anything Changes Everything. To illustrate this, consider object recognition, simply because the effect is easy to see there; the argument remains true for other types of data. The fields of image recognition and computer vision produce models of extreme accuracy on natural images of objects. This does not remain true if even imperceptible changes are made, that is, changes that we would not notice.

In adversarial examples like these, the image is manipulated deliberately, but there are plenty of cases in which noise has “sneaked” into images without any outside interference (and others where it was introduced as a deliberate act). The problem of noise is not specific to images, and it might be undetectable in information gathered on social media or in other settings. Detecting these changes is often impossible, at least before it is too late.
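To make the point about imperceptible changes concrete, here is a minimal sketch of the idea behind such adversarial perturbations, in the style of a fast-gradient-sign attack. Everything in it is illustrative: a toy logistic-regression “classifier” over flattened pixels stands in for a deep network, and the weights, the image and the perturbation budget are made up. It only shows that nudging every pixel by a visually negligible amount, in the direction that increases the loss, can flip a confident prediction.

```python
# Minimal sketch of a fast-gradient-sign-style perturbation (illustrative only:
# a toy logistic-regression "classifier" over flattened pixels, with made-up
# weights and image standing in for a trained network and a real photograph).
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 28 * 28
w = rng.normal(0.0, 1.0, n_pixels)    # stand-in for trained model weights
x = rng.uniform(0.2, 0.8, n_pixels)   # stand-in for a natural image in [0, 1]
b = 5.0 - w @ x                       # bias chosen so the clean input is
                                      # classified as class 1 with high confidence

def predict_proba(img):
    """P(class = 1) under the toy linear model."""
    return 1.0 / (1.0 + np.exp(-(w @ img + b)))

# The idea: nudge every pixel by at most eps in the direction that increases
# the loss for the true class; for this linear model that direction is -sign(w).
eps = 0.02                            # 2% of the pixel range, barely visible
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print("clean prediction     P(y=1):", round(predict_proba(x), 4))      # ~0.99
print("perturbed prediction P(y=1):", round(predict_proba(x_adv), 4))  # ~0.00
print("max per-pixel change       :", np.abs(x_adv - x).max())         # 0.02
```

On a real deep network the recipe is the same, except that the direction of the nudge comes from the gradient of the network's loss with respect to the input pixels; the perturbed image still looks identical to a human.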
So, if we recognize that AI may behave in unexpected ways and that its learning procedures are sometimes “dark magic”, it is reasonable to wish for AI to be more understandable, or intelligible: for AI tools to be sufficiently transparent, and for us to always have ways to understand what a tool does, how it learned its knowledge or skills, and why it performs the way it does.
The need for epistemology in AI – not just in academia
We have been somewhat imprecise about what we consider understandable or intelligible, but to make statements about a model's intelligibility one needs a concise definition of it. Before going any further in this discussion, we need to recognize that there are epistemological problems to be considered. Since this blog post is aimed at data scientists, it is only fair to give a loose definition of epistemology and then explain why it is so important. We shall loosely define it this way: epistemology is the theory of knowledge, and more precisely of its nature and its extent.
Now, and especially for those newer to data science, it should be noted that the task of discovering the nature of knowledge and how new knowledge is generated is not a new trend. It has been developed over centuries and has proved to be a litmus test for all research, especially in experimental science. There is a general consensus in science that statistical approaches to predicting events should be reliable, valid and unbiased, and that they can be subjected to scrutiny. This is true of most areas of research, but particularly of those with a long academic tradition in which philosophy is tightly intertwined. Modern science is premised (by centuries of work) on the ability to critically scrutinize claims, and results are supposed to be replicable. This is not yet the case for AI approaches: there are currently few tools for the critical appraisal of tools derived from AI. This comes mainly from the fact that these techniques are developed by individuals who lack a background in the theory of knowledge, epistemology. It is true that a lack of knowledge about knowledge creation does not imply an inability to develop AI tools; after all, we did not start to fly airplanes only after having developed a complete theory of flight and aerodynamics.
I am, however, convinced that adding that discipline to AI will eventually lead to greater acceptance of AI as a tool, but also empower model developers (data scientists) to design intelligible and explainable models (both concepts yet to be defined). I understand that developing AI tools demands an array of knowledge from a wide range of other disciplines (from mathematics to software tools and languages) and that the prospect of a deep dive into philosophy might seem, at best, challenging. Doing so will, however, without any doubt give invaluable insights into the challenges that modern AI, with its ever deeper neural networks, faces. This is the core of the problem discussed in this blog post.
Why should it matter? Well, as with every scientific area that has ever been developed, there is no way around it, especially for a discipline such as artificial intelligence. After all, AI is the science of emulating cognitive processes, and knowledge, whether passed on to machines or created by machines, should be the focus of the endeavor. Another important point is that getting stuck on techniques, without understanding the underlying knowledge, does not give data scientists an edge and limits their progress.
While I hope these few words are an inspiration to turn to the philosophy of machine learning, I should point out that the field actually preceded machine learning as we picture it today. When people think of machine learning and AI, they automatically think of a person coding a model on a computer. Old-school machine learning, however, was very much concerned with questions about the nature of knowledge and how machine knowledge should be understood and explained.
What is intelligibility?
The above discussion shows that there is a great deal to be done, and that this work needs to be done in the near future. Our AI models are becoming increasingly complicated and, as mentioned previously, neural networks are getting deeper. This implies that we should focus on what makes a machine learning or AI model interpretable, transparent, or intelligible. The use of these three terms already poses problems. How do we define interpretability? Is it “the ability to explain or to present in understandable terms to a human”? What makes some explanations better than others? What is intelligible? What is the difference between intelligibility, representability and comprehensibility?
This is a blog post, and the topic we are approaching demands a book and the collaboration of a community. Work is underway, and just as naturally as they learn to code, every data scientist would do increasingly better work in AI by making epistemological questions the backbone of every project and keeping them in mind while developing models. It is important that the machine learning community revives these questions, approaches them seriously and sets the foundation, today, for what is an imperative for tomorrow.
Conclusions
There is absolutely no doubt that AI is here to stay, and for the better. As I mentioned in a previous blog post, AI and the value of work, AI will not end it all for mankind and make us obsolete, nor is it likely to take over the world and enslave us, in part because we do discuss the ethical, moral and legal issues concerning AI. There is, however, a real risk that future AI might be increasingly hard to scrutinize and understand. Human insight will diminish with time, and we might find ourselves facing tools for which we have absolutely no understanding of why they reach the conclusions they arrive at.
This is a real problem with far-reaching consequences. No one wishes to hand critical decisions over to a tool NO ONE understands. I have no issue sitting in a self-driving car even though I do not have complete control over its every detail…because I know SOMEONE does. I would be far more dubious if I could not google the names of some people who know precisely how it works. The immediate issue is one of acceptance. Many actors in society know that AI could help them, and for very basic things they have adopted AI tools (search engines, ad recommendations and so on), precisely because these are not applied to enterprise-critical decision-making.
For many enterprises, economic reasons lie at the root of the reluctance to adopt AI tools in some areas of their business: the prospect of economic losses due to poor choices for which no one can bear responsibility is daunting. Other aspects may be legal. Who is to blame for injuries caused to customers, or to others?
One area in which the benefits of AI would be among the greatest has so far been extremely reluctant to adopt it: healthcare. There are many reasons for this unwillingness, and one of them is definitely the lack of understanding of how the models arrive at the decisions they make. Even if a model is 99.99% accurate, it still does not provide any clue as to why X is true rather than Y. Healthcare is also one of the domains in which the scientific tradition of replicability and human simulatability (can a human user predict the output for a given input?) is at the core of its success. No wonder this community is dubious about the intelligibility of AI.
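To make the notion of simulatability concrete, here is a minimal sketch contrasting a shallow decision tree, whose complete set of rules a clinician could follow by hand, with a small neural network whose output is just a score with no human-readable trace. It uses scikit-learn and its built-in breast cancer dataset purely for illustration; the tree depth and network size are my own assumptions, not a clinical model.

```python
# A minimal sketch: a shallow, human-simulatable model vs. an opaque score.
# Assumes scikit-learn is installed; dataset, depth and network size are
# illustrative choices only, not a clinical recommendation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

# A depth-3 tree: the full set of rules fits on a page, so a human can
# predict its output for a given input by following the branches.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(export_text(tree, feature_names=list(data.feature_names)))

# A small neural network: often more accurate, but its prediction is a
# number with no human-readable trace of why X rather than Y.
net = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
).fit(X_tr, y_tr)

print("tree accuracy:", round(tree.score(X_te, y_te), 3))
print("net accuracy :", round(net.score(X_te, y_te), 3))
print("net says P(benign) =", round(net.predict_proba(X_te[:1])[0, 1], 3))
```

Whatever the accuracy numbers turn out to be, the contrast is the point: one artefact can be simulated by hand and argued with, the other can only be accepted or rejected.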
AI, today and in the future, would be well served by a structured foundation of knowledge and of how it is acquired by artificial intelligence. As for those not working in research, those on the front line who deliver fascinating tools, they would be well served by enriching their knowledge with a scientific foundation that empowers them. The risk is otherwise to remain forever methodologists who lack the insights needed to understand the true value of their models. This would also enable them to inspire trust on the receiving end (healthcare, companies, institutions).
I hope that this blog post has inspired you to look deeper, not just into neural networks, but also into the foundations of AI, the human mind and the scientific methods that have secured so many successes.