Interview with Dr Danko Nikolic, PhD

CDO and Head of AI at savedroid, CEO at RobotsGoMental, associated with the Max Planck Institute for Brain Research

Will artificial intelligence develop emotional capabilities to a depth where it can duplicate human interactions?

Dr Danko Nikolic, PhD, has spent his professional life looking for answers to this question. As well as being associated with the Max Planck Institute for Brain Research, Dr Nikolic is the CDO and Head of AI at savedroid and the CEO at RobotsGoMental.

His focus is the explanatory gap between the mind and brain, and he believes a better understanding of the human brain will lead to the advancement of artificial intelligence technologies. Dr Nikolic’s groundbreaking theoretical work on how the physical brain creates perception opens the door to an emotionally intelligent AI experience.

Recently Tharawat Magazine had the opportunity to meet with Dr Nikolic to discuss where this cutting-edge technology is going, why it is so difficult to duplicate human interactions and what the widespread adoption of AI might mean for business.

How did you end up working in artificial intelligence?

For me, AI wasn’t the end of a journey – it was the beginning. I started my work with artificial intelligence when I was a teenager. I read a lot of books about AI, and I was disappointed by its impracticality. We need to know much more about the brain and human behaviour before we can build functional AI. This understanding compelled me to pursue a PhD in Psychology.

I conducted neuroscience research and studied the brain but always with the intent of building better AI. I kept asking the question, what does the brain have that machine learning tools don’t? At a certain point, I felt like I had enough information and returned to AI in a professional capacity as a corporate consultant. It’s an unconventional career path; I don’t think anyone else has followed the same trajectory.

Have you come to any conclusions as to how artificial intelligence could be approached differently?

Yes, absolutely. I have developed a theory on how we should approach both understanding the brain and creating artificial intelligence. The theoretical knowledge that we use to explain how the brain creates human intelligence and human behaviour is similar to the principles of building AI. Human intelligence and machine intelligence go hand-in-hand. A breakthrough in one field is a breakthrough in both.

My theory, the theory of practopoiesis, revolves around the idea that our current understanding of the brain is incomplete, and that our ongoing study of the brain contributes directly to a better understanding of AI.


Popular perceptions of AI range from humanoid robots in sci-fi to the reality of our day-to-day experiences with chatbots and Alexa. Where do you see us on that spectrum?

There is a definite discrepancy between sci-fi, where we have robots that are almost human, and what we have today in Siri. Perhaps the most significant discrepancy, however, is between what Siri can do today and what might be possible in the near future. Bridging this gap fuels the work that I do. One thing is certain – we’re not going to be able to do it with the technologies we have today. We have a long way to go before we reach what I refer to as ‘biological intelligence’.

What impact do you see this having on the private sector?

That remains to be seen. It’s very hard to tell in the early stages of any technology where it will be applied most effectively in the future. When the computer was invented, for example, nobody could predict the extent to which we use it today. Excel sheets, word processors, video conferencing and smartphones were unforeseeable. They just knew it was going to be big.

The same is true for AI. We have a technology that's getting more powerful every day. Many people are attempting to implement AI in different ways so that it is both useful and profitable. Nobody knows who will succeed and what the practical applications could be.

In my work as Head of AI at savedroid, we are creating a machine that will automatically build savings portfolios for cryptocurrencies. It's still early, but it looks like we are getting somewhere.



Your studies show it may one day be possible for AI to replicate human emotions. Will there come a point when artificial intelligence can substitute for a human in every professional context?

To answer this question, it's important to understand what AI can do today and what it could do in the future, but also its limitations. Today, AI can competently recognise emotions. By reading facial expressions and listening to tone of voice, AI can accurately judge if a person is distressed. Where AI fails is in deciding what to do next. As humans, we can recall a situation when we were distressed and use that as our guide.

AI, in principle, cannot recreate this. When we make these decisions, we use more than just our brain – we use our entire biological being. We feel it in our guts and with our skin. Goosebumps are an emotional response to external stimuli. These reactions subconsciously inform the decision-making process. Even the most advanced AI of the future will not have human skin, so it will never be able to make decisions in the same way a human does.

We want machines to make decisions instead of humans because it's cheaper, quicker and often more reliable. However, the closer we get to replicating the human decision-making process, the more challenging it becomes, because these crucial biological factors are hard to duplicate.

Do you see a prolonged resistance to the widespread adoption of AI or will people accept that this is the only way forward?

I believe the adoption of AI will be comparable to earlier technologies that changed our lives. There was a time when typesetters were losing their jobs because of the advent of digital word processing. It didn’t produce any significant societal upheaval, and we’re much better off now.

If you are worried about losing your job, ask yourself these questions. How much critical thinking does my job entail? Does my job require problem-solving? Can I daydream and do my job? Can I think about what I will do after work while I'm doing my job? If you're unsure about the first two, and the answer to the latter two questions is yes, then you are in danger of being replaced. If you have to focus and think on the job, then you are quite safe.


In replacing these jobs that are repetitive and easy to do, what social responsibility do we have to the humans who rely on them?

Of course, the transition will necessitate a degree of social responsibility. We have to make sure that we avoid social problems arising from the proliferation of artificial intelligence. The relevant question is, what role do scientists have in social engineering? Fundamentally, this is a political discussion, so educating politicians is essential.

My feeling is that Third World countries will be the worst hit. Globalisation has seen labour outsourced to the Third World because it’s more cost-effective. If you have AI and robotics doing this work, you won’t need as many humans, and that could prove problematic.

These decisions aren’t made in one place or by one group of people. There are massive economic and political forces at play. Out of this, some kind of new world order emerges. If we reflect on the historical adoption of other technologies and use them as a precedent, I’d say on average, there is more good than bad, but it’s never going to be perfect.
