Despite Explosive Growth, NLP Is Not Taking Your Job Anytime Soon

It is everywhere. Headlines touting terms like GloVe, BERT, and GPT claim that robots can now read and write as well as humans do. The articles talk about the advances machines have made in processing human language, and about how, thanks to work done by organizations like OpenAI, the machines are now coming for our jobs.

The truth is more nuanced. Despite the tremendous progress made in this field over the past two years, machines are nowhere close to humans when it comes to understanding or writing language. I should know; my team has been trying to teach them since 2012. Digging deeper, five breakthroughs are necessary before an artificial intelligence can compete with humans. Let us look at each of these milestones:

First is the ability to process text. Machines have had this since the 1970s, if not before. It is something like saying that this sentence has 11 words. It may sound trivial, but technologies based on such text processing powered Google’s search until 2015. Sentiment analysis, the auto-suggest feature in browsers, auto-correct on your phone, and many other mainstream language tasks used simple text processing until very recently. Some still do.
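This level of text processing really is as simple as it sounds. A toy sketch, using the article’s own self-referential example:

```python
# Basic text processing, the kind machines have done since the 1970s:
# split a sentence on whitespace and count the pieces.
sentence = "It is something like saying that this sentence has 11 words."

words = sentence.split()  # naive whitespace tokenization
print(len(words), "words")  # → 11 words
```

Real tokenizers handle punctuation, contractions, and languages without spaces, but the underlying idea is this simple.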

Second is the ability to parse a sentence into clauses, subjects, verbs, and so on. Natural language is inherently ambiguous. If a machine comes across a word, say ‘report’, does it classify it as a noun or a verb? By 2010, parsers from Stanford and other prominent research institutions were handling this well. Today’s Siris and Alexas rely heavily on this idea. For example, if you ask, “who is Julius Caesar,” you get “Julius Caesar was…” Essentially, the agent parses all available sentences, finds one with the right structure, and plays it back. The grammar-correction tools in most word processors are another example.
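The ‘report’ ambiguity above can be sketched with one invented rule. Real parsers, such as the Stanford parser mentioned here, use statistical models trained on large corpora; this toy only looks at the previous word:

```python
# Toy part-of-speech disambiguation for an ambiguous word like "report".
# One naive, hand-written rule: a determiner before the word suggests a
# noun ("the report"); a pronoun suggests a verb ("they report").
DETERMINERS = {"the", "a", "an", "this", "that"}
PRONOUNS = {"i", "you", "we", "they", "he", "she"}

def guess_pos(word, previous_word):
    prev = previous_word.lower()
    if prev in DETERMINERS:
        return "noun"
    if prev in PRONOUNS:
        return "verb"
    return "unknown"

print(guess_pos("report", "the"))   # noun
print(guess_pos("report", "they"))  # verb
```

A single rule like this fails constantly; the real breakthrough was learning millions of such cues from data instead of writing them by hand.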


Third is the ability to resolve polysemy: a word’s quality of possessing multiple meanings. ‘Mars’ may mean the god, the planet, the candy company, the candy itself, the gene, or the missile. We humans get the right context without effort. Machines are still learning this. When they succeed, every language-related task becomes more accurate and powerful. Look at Google search today. Intelligent chatbots are products of similar technology, as are translators between languages, detectors of emotional tone, and various intelligent tools for business automation.
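One classic approach to resolving polysemy, in the spirit of the Lesk algorithm, is to pick the sense whose typical vocabulary overlaps most with the surrounding sentence. The senses and signature words below are invented for illustration:

```python
# Toy word-sense disambiguation for "Mars": score each candidate sense
# by how many of its signature words appear in the sentence.
SENSES = {
    "planet": {"orbit", "solar", "rover", "nasa", "surface"},
    "god":    {"roman", "war", "temple", "myth", "worship"},
    "candy":  {"chocolate", "bar", "sweet", "wrapper", "eat"},
}

def disambiguate(sentence):
    context = set(sentence.lower().replace(".", "").split())
    # Pick the sense with the largest overlap with the context words.
    return max(SENSES, key=lambda sense: len(SENSES[sense] & context))

print(disambiguate("The rover landed on the surface of Mars."))  # planet
print(disambiguate("Mars was the Roman god of war."))            # god
```

Modern systems replace hand-picked signature words with learned contextual embeddings, but the intuition is the same: meaning is pinned down by the company a word keeps.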

However, there are some fundamental problems here that need more thought. For example, can machines learn to recognize the different abstractions of an idea? For a human, ‘Caesar’ the Roman ruler of the first century BCE is different from ‘Caesar’ as a title for any European monarch, or ‘caesar’ as a word of foreign origin in English. This kind of differentiation is inherent to our understanding of the world and how we communicate about it, yet the scientists who program artificially intelligent machines are only beginning to think about this problem.

Fourth is the ability to detect lies. Humans lie all the time: to be kind, to be polite, to be witty, or to use a figure of speech like an idiom or a metaphor. The idea conveyed by a text is frequently different from its literal meaning. Successful communication still happens because the receiver can easily tell the intended idea (he is dumb, not an electric appliance) from the literal statement (‘he is not the brightest bulb’). To get a machine to do this, we need a way for it to absorb all relevant truths in its context and build its own model of the real world. Unless it knows the truth, it cannot detect lies.
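To see why this is hard, consider the crudest possible workaround: a hand-written lookup table of figurative phrases. This is a toy, not how real systems work; it only demonstrates that without a model of the world, the machine can do nothing for any phrase outside its list:

```python
# Toy mapping from figurative phrases to intended meanings.
# Entries are invented for illustration; anything not listed
# falls back to the literal text.
IDIOMS = {
    "he is not the brightest bulb": "he is not very smart",
    "tomorrow is another day": "i am hopeful despite recent setbacks",
}

def intended_meaning(utterance):
    key = utterance.lower().rstrip(".")
    return IDIOMS.get(key, utterance)

print(intended_meaning("He is not the brightest bulb."))
```

The lookup table cannot tell a lie from a metaphor from a plain falsehood; that distinction requires exactly the world model the paragraph above calls for.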

We have focused on a machine’s ability to read or understand language, but the ability to write or generate language is developing in parallel. Some of the early efforts were template-based; think fill-in-the-blanks. Then came products like Columbia University’s Newsblaster, which arrange existing sentences to achieve some goal, such as creating a summary. Today there are already tools that can translate, answer pointed questions, or generate simple sentences. The ability to detect mismatched expectations, along with better knowledge of the real world, will make machines more adept at composing language.
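Template-based generation really was fill-in-the-blanks. A minimal sketch, with a template and field names invented for illustration:

```python
# Toy template-based text generation, the approach early systems
# used before modern neural generators: a fixed sentence frame
# with slots filled in from structured data.
TEMPLATE = "{city} will be {condition} today, with a high of {high} degrees."

def generate_forecast(city, condition, high):
    return TEMPLATE.format(city=city, condition=condition, high=high)

print(generate_forecast("Rome", "sunny", 75))
```

Weather reports and sports recaps were generated this way for years; the output is fluent but can only ever say what the template allows.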

The final milestone is the ability to use lies. It is one thing to tell the real idea apart from the literal text; it is entirely another to generate language using such devices. Using humor, satire, similes, or outright lies is a matter of executive function. We choose to say ‘Tomorrow is another day’ instead of ‘I am hopeful despite recent setbacks’ to a group of people who we presume understand the context. The ability to make these decisions is likely to develop in other fields first and then be applied to language. At this stage, machines may still not rival Shakespeare, but they can pass for human in most situations.

It is easy to see that each step is more complex than the previous one by orders of magnitude: in algorithmic sophistication, in the computing power necessary, and in the data it will take to train. Today, machines are barely on the third leg of this journey, and the journey is only getting harder. If we use human life as an analogy, then despite all their advances, machines are still infants when it comes to natural language processing (NLP).

