
The Gradual March of Artificial Intelligence


Image courtesy of Michael Haenlein

Prognostications on a future with artificial intelligence (AI) range from slow and imperceptible change to a radical new world where machines rule the earth, but Michael Haenlein takes the middle path.

In Haenlein’s latest article, co-authored with Andreas Kaplan, “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence”, he suggests that AI will have a gradual but increasing presence in every aspect of our lives.

While there is ample room for speculation about the rate at which, and the ways in which, we will adopt AI, its eventual adoption is beyond question. According to two recent studies, AI currently represents the most significant commercial opportunity in the global economy: by 2030, it is expected to increase global GDP by 14 per cent, an injection of approximately 15 trillion US dollars.

Haenlein’s vision of the future is informed by the past; to understand where AI is going, we must first understand where it came from, and AI is not new.

To counter the popular fear that AI stands to take over in some sort of global coup, Haenlein demonstrates that we’ve coexisted with and benefitted from it for almost 80 years. Since its origins in the Second World War, AI has slowly but steadily grown into the spaces where it stands to improve our personal lives and workplaces.

We spoke with Michael Haenlein to learn more.

When was the beginning of AI as we know it?

For me, the beginning of AI essentially dates back to the end of the Second World War. The famous mathematician Alan Turing is credited with helping win the war by using a computer to do something that humans were not able to do: break German codes. We call it a computer, but bear in mind that it literally filled an entire room and took an entire day to make a calculation that your iPhone could do in a second or two.

He created what became known as the Turing test, which says: if you can interact with a computer for five minutes without being able to tell there is a computer on the other end, the computer has artificial intelligence. It wasn’t until the 1960s, when Joseph Weizenbaum developed software at MIT known as ELIZA, that a computer made any headway with the Turing test. ELIZA “held up” for a few minutes, but was ultimately unable to convince interviewers of its humanity.

When did the conversation on AI enter the mainstream?

It first caught people’s attention in a serious way in 2015 when Google announced that they had developed software called AlphaGo. AlphaGo could play Go, a board game similar to chess but more complex. At the time, the entire AI community thought that developing a computer to play Go at the same level as a human was impossible.

Google organised a five-game series against the best human player in the world. For AI, victory would have been winning one of the five games; AlphaGo won four of the five games.

This victory was a significant indicator of how far machine learning had come. People were curious about how Google programmed the machine to play. As it turns out, they did not. Instead, they just had an AI tool watching people play Go. Then they duplicated the tool, and the two programs began to play against each other.

Through this process, AlphaGo’s game improved to the extent that it was able to beat the best human player, putting the concept of deep learning in AI into the mainstream. Shortly after, the first business use cases started to arise.
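To make that self-play idea concrete, here is a minimal sketch in Python. It is purely illustrative and assumes a far simpler game than Go: Nim, where players alternately take one or two stones and whoever takes the last stone wins. Nothing here resembles DeepMind’s actual system; the point is only that two copies of one agent, playing each other, can improve from the outcomes alone.

    import random

    # Toy self-play learner for Nim (take 1 or 2 stones; taking the last
    # stone wins). Two copies of the same agent share one value table and
    # improve it from wins and losses, with no human-authored strategy.

    PILE, EPISODES, EPSILON, ALPHA = 10, 20000, 0.1, 0.2

    Q = {}  # Q[(stones_left, action)] -> estimated value of that move

    def choose(stones):
        actions = [a for a in (1, 2) if a <= stones]
        if random.random() < EPSILON:          # occasional exploration
            return random.choice(actions)
        return max(actions, key=lambda a: Q.get((stones, a), 0.0))

    for _ in range(EPISODES):
        stones, mover, history = PILE, 0, []
        while stones > 0:
            action = choose(stones)
            history.append((stones, action, mover))
            stones -= action
            mover = 1 - mover
        winner = 1 - mover                     # the player who moved last
        for s, a, m in history:                # Monte Carlo value update
            reward = 1.0 if m == winner else -1.0
            Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (reward - Q.get((s, a), 0.0))

    # The learned policy typically recovers the known winning strategy:
    # always leave your opponent a multiple of three stones.
    for s in range(1, PILE + 1):
        best = max((a for a in (1, 2) if a <= s), key=lambda a: Q.get((s, a), 0.0))
        print(f"{s} stones left -> take {best}")

Replace the value table with deep neural networks and the toy game with Go, and you have, in spirit, the self-play loop Haenlein describes.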

Lee Se-dol, the Go player defeated by Google’s software, announced his retirement shortly thereafter and, in a way, became a prominent example of AI job replacement.

In today’s business landscape, what are some of the most immediate applications of AI?

When it comes to implementing AI, the guiding principle should still be to start with the business problem and arrive at the technological solution later. Many firms will ask: how can I use AI? My response is, invariably: what is the business problem you need to solve?

Look at the pain points and then rank them. Only then decide which ones can be addressed with AI and which cannot. Take a business, for example, that needs a significant amount of data converted from paper into electronic format: an AI-based image recognition system can do this faster and far more accurately than humans.
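As a sketch of how such paper-to-digital conversion can look in code, the snippet below uses the open-source Tesseract OCR engine via the pytesseract package; this is one of many possible tools, the file name is a placeholder, and real pipelines add validation and post-processing on top.

    from PIL import Image     # pip install pillow pytesseract
    import pytesseract        # also requires the Tesseract engine installed

    def digitise(path):
        """Return the text recognised on one scanned page."""
        page = Image.open(path).convert("L")   # greyscale often helps OCR
        return pytesseract.image_to_string(page)

    print(digitise("scanned_invoice.png"))     # placeholder input file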

Additionally, we are seeing a steep increase in chatbots for simple customer service interactions. It is estimated that, in the not-too-distant future, 80 per cent of all customer service interactions will be handled by artificial intelligence.
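The routing logic behind such simple service bots can be sketched in a few lines. A production system would use a trained intent classifier rather than keyword matching, and every intent and reply below is invented for illustration.

    # Toy keyword-matching bot for routine customer-service queries.
    INTENTS = {
        ("refund", "money back"): "Refunds are processed within five business days.",
        ("opening hours", "open"): "We are open 9:00 to 18:00, Monday to Friday.",
        ("password", "login"): "Use the 'Forgot password' link on the sign-in page.",
    }

    def reply(message):
        text = message.lower()
        for keywords, answer in INTENTS.items():
            if any(k in text for k in keywords):
                return answer
        return "Let me connect you with a human agent."  # escalation fallback

    print(reply("How do I get my money back?"))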

Many of today’s AI applications still require heavy customisation. Nevertheless, they allow businesses to streamline their operations.

Some have predicted that machine intelligence will eclipse the collective intelligence of humans. Is this scenario realistic?

Anything is possible. In the world of AI, things change every three to six months; I would not be comfortable predicting what will happen in 2021, let alone in 2045. However, this is not the first time someone has made that prediction: in the 1970s, scientists said that within five to eight years AI would be as smart as humans, and we are still not there yet.

I believe the AI revolution will happen gradually, with parcels of overall processes transitioned to AI over time. We don’t yet have AI systems that are powerful and cost-efficient enough for firms to buy, maintain and customise across entire commercial operations.

Recently, researchers conducted an experiment in which they tried to train a robot to build an IKEA chair. The combination of reading instructions, using tools and recognising parts visually proved so challenging that it didn’t really work. That said, I’m positive that, eventually, we’ll reach a place where AI is responsible for approximately two-thirds of our current tasks, but it will take time.

Is AI development the sole domain of scientific institutions, or is the private sector also getting involved?

Over the last 30 years, the focus has shifted to the latter. Today, AI’s thought leaders are found in corporations; Facebook and Google are perhaps the two most prominent examples. The problem is that these corporations do not let others look inside, so it becomes very difficult to use them as benchmarks.

There are many highly skilled people out there, but pinpointing those skills in the ocean of small companies offering extremely specific solutions is what I see most firms struggling with right now.

When it comes to examining the future of AI, what are the micro, meso and macro perspectives?

The micro perspective deals with decisions such as how to operate self-driving cars, or how automated X-ray analysis helps to decide who gets lung cancer treatment. Because it is a fast-moving target, AI is hugely challenging from a regulatory standpoint.

For the meso perspective, we come back to this idea of employment. Society will simply have to accept that a certain percentage of people may no longer have a job. This then may lead to a discussion of a cap on the amount of AI that firms can implement.

What I find most troublesome is the concept of regulation on a macro or governmental level.

In Asia, for example, AI is used extensively for face recognition in order to reward good behaviour and punish bad behaviour. These types of applications must be carefully considered before they are implemented; if they are not, we may unleash powers that are impossible to control after the fact.
