Read about the important and exciting developments at Ontotext as we close out 2018.
This is the first interview of our series #semtechtalks, led by Teodora Petkova (Twitter @TeodoraPetkova). Over the course of two months, Teodora talked to people passionate about machine learning, smart data, Artificial Intelligence (AI) and all things technology that are currently changing our world.
Enjoy the talk!
Professor Robert Dale runs the Language Technology Group, an independent consultancy providing unbiased advice to corporations and businesses on the selection and deployment of Natural Language Processing (NLP) technologies. Until recently, he was Chief Technology Officer of Arria NLG, where he led the development of a cloud-based natural language generation tool. Having authored and edited seven books and 140 papers on various aspects of natural language processing, Prof. Dale is currently interested in automated writing assistance; practical natural language generation; the engineering of habitable spoken language dialog systems; and computational, philosophical and linguistic issues in reference and anaphora.
Teodora Petkova: Why is it so hard to explain to a computer what something is? To use a framing by Kristian Hammond, what is AI’s main language problem?
Robert Dale: Two things: the fact that machines don’t have relevant background knowledge and don’t have the relevant inferential capabilities. Even when you’re explaining something to a very young child, you rely on the fact that the new knowledge you’re providing is building on top of existing knowledge. You also rely on the fact that the child can make certain kinds of generalizations and inferences, and perform various kinds of abstract reasoning. You can’t rely on those things being there when you’re explaining something to a machine, so you have to be much more explicit.
But although we rely on these things when we talk to people, that doesn’t mean we have good mental models of just what those human capabilities are. For the most part, we’re just very good at intuiting the right level of explanation. And because we don’t have a clear understanding of just what those human capabilities amount to, we don’t have a clear understanding of what it is that machines don’t have. So that all makes it harder for us to know what needs to be explained to a machine.
Teodora Petkova: Why do you think it is so hard for most of us to talk about AI in a cool-headed way? What is our problem with that?
Robert Dale: The current wave of interest in AI has generated a number of fears, two of which I think are most prominent. The one that probably gets most press is the ‘hip pocket’ fear: people are concerned that AI is going to steal their jobs. I think that’s a real and valid concern. I don’t find the claim that AI will free us all up to do more creative work to be particularly convincing. It seems pretty clear that AI offers a step-change in the automation of a range of relatively routine tasks. Or, to put it another way, AI has changed our understanding of what counts as routine.
So I have no doubt that, as time goes on, AI technologies will have a massive impact on the job market. We need to be exploring solutions like the Universal Basic Income and we need to find other ways of giving people meaning in their lives that don’t depend on work in the conventional sense.
The second fear about AI is perhaps less immediate and more abstract: the idea that machines will make us redundant in a more existential sense. That’s the worry that AIs will be super intelligent beings, smarter than us, and that they’ll decide there’s no value in having dumb humans around, so goodbye human race. I think this is a valid concern too, although not in quite the way that it is usually framed.
I think what’s more likely is that we will gradually, and without really realizing it, accord more and more decision-making power to algorithms whose built-in value systems may not quite line up with more humanistic values. And that raises the possibility that a machine may value human life less than a human would. Think back to HAL in Kubrick’s 2001 refusing to open the pod doors to allow Dave’s escape because it would endanger the mission: that observation of the risks of machine intelligence was very prescient.
Teodora Petkova: What’s new in the era of human-hybrid intelligence and what will never change, no matter the advancements of what we might call “thinking machines”?
Robert Dale: I’m one of those who believes that we’re a long way away from replicating in a machine the full range of capabilities that we call intelligence in humans. So, I think hybrid intelligence, where we achieve the best results by utilizing a combination of human and machine intelligence, is how we’ll see AI being used for the foreseeable future. I wrote about this in TechCrunch a couple of years ago, and my views on that haven’t changed much since then (see The era of AI-human hybrid intelligence).
But I do think we will find more and more things that can be automated, so the burden of effort apportioned to humans vs machines will shift over time. It’s also likely that competitive pressures will cause system-wide changes that prefer automated solutions. So, just as we’ll make self-driving cars smarter and more capable, we’ll also be redesigning road and transit systems so they pose fewer problems for those cars.
I think we’ll see that kind of ‘symbiotic change’ across a wide range of areas, and it will have the effect of dampening the need for human assistance for intelligent systems. The risk there is that we’ll end up with a world that is optimized for machines, and maybe less interesting for us – which just comes back to the need to be very clear about what value systems we are trying to promote.
Teodora Petkova: In your work, how do you explain what’s in it for enterprises when we talk AI and what is the way forward to “embracing” AI?
Robert Dale: Obviously there’s a lot of hype around AI right now, but when you strip away the hype, you’re really just talking about the next level of automation. So, whether we call it AI or something else, the value remains the same: by automating more and more tasks, we can reduce the costs of carrying out those tasks, and so AI offers enterprises the potential of a competitive edge. It’s not unusual to talk to companies that are a bit confused about what AI is. Not surprisingly, with that confusion comes a certain amount of trepidation. Making clear that we’re basically talking about making processes more efficient through advanced automation helps allay the fear of the unknown.
Teodora Petkova: When did you first hear about the Semantic Web?
Robert Dale: I became aware of the Semantic Web around the early 2000s through the influence of Frank van Harmelen, a friend who was a Ph.D. student in Edinburgh at the same time as me. Frank was in there at the start and went on to co-author the Semantic Web Primer, which was the first academic textbook in the field. So, the Semantic Web has been in my consciousness for more than 15 years, but I have to admit that for a long time I wasn’t convinced: I couldn’t see people coding up websites in terms of structured information, for example.
Of course, things have changed in the interim: on the one hand, we’ve seen the development of things like Linked Open Data, and on the other hand, we have ever-smarter text processing tools that reduce the need for human involvement in adding structure to unstructured data, resulting in knowledge graphs like Ontotext’s. It’s that combination that has really made the Semantic Web a feasible proposition.