My interview with an artificial intelligence


At The Economist we are best known for producing a weekly newspaper, but we also publish an annual every November, looking forward to the year ahead. The latest edition, “The World in 2020”, has just appeared. It features analysis and forecasts from our journalists (unusually, this is one occasion on which we get bylines), our colleagues at the Economist Intelligence Unit (who provide pithy summaries of the outlook for dozens of countries and industries) and a distinguished group of external contributors. This year that final group included Demis Hassabis, co-founder of DeepMind; Jacinda Ardern, prime minister of New Zealand; Ren Zhengfei, founder of Huawei; and Robert F. Smith, a private-equity billionaire who, while giving a commencement address in May at Morehouse College, told the class of 2019 that he would pay off their student loans when they graduated. Our aim with this annual, of which I am the deputy editor, is not so much to make precise predictions about the coming year as to give readers a wide range of stimulating ideas and perspectives to help them navigate it.

But all our contributors have one thing in common: they are human. So for this year’s edition, I thought it might be fun to ask an artificial intelligence (AI) about the future. I have been tinkering with chatbots and text-generating algorithms since I was a teenager, and I went to university to study AI in the late 1980s—only to discover, alas, that AI didn’t really work. But in recent years a specific technique called deep learning has led to extraordinary progress in the field. Most of what is called AI today, from facial recognition to voice assistants to machine translation, is in fact deep learning if you look under the hood. And in February 2019 OpenAI, a research outfit based in San Francisco, unveiled a “large-scale unsupervised language model” called GPT-2 that was created by applying a flavour of deep learning, called unsupervised learning, to 40 gigabytes of text extracted from 8m web pages on a wide range of topics. The resulting system is uncannily good at generating text in a specific style: you give it a few words as a prompt, and it then guesses what comes next, based on patterns in the text it was trained on, like a sort of supercharged autocomplete, powered by gigabytes of past examples.

OpenAI’s announcement included examples of GPT-2 writing Tolkienesque fantasy fiction and (fictitious) news-agency stories about unicorns and stolen nuclear material, complete with quotes from (entirely made-up) sources. It also turned out to be surprisingly good at comprehension, summarisation and other language tasks, despite not having been designed to do those things. Citing the risk that this program might be misused by propagandists to generate “fake news” stories, OpenAI decided not to release the full version of GPT-2 right away; instead, it was released in stages, starting with an initial, watered-down version, and slowly working up to the full-strength version over several months. This was something of a publicity stunt, and resulted in many headlines along the lines of “AI lab decides its creation is too dangerous to make public”. But it was also a way for OpenAI to emphasise the point that it takes the misuse of technology seriously (its mission is “to ensure that artificial general intelligence benefits all of humanity”), with the strong implication that others in the tech industry ought to do the same.

When I saw GPT-2’s results, though, its propaganda potential was not what most interested me. Instead I began to wonder how I could use it as a chatbot, like the ones I used to play with in the 1980s. (The most famous example from the early personal-computer era is ELIZA, but my favourite was RACTER, a text-generation system that wrote a book called “The Policeman’s Beard is Half-Constructed”.) Helpfully for me, the GPT-2 code is available on GitHub; even more helpfully, it can be accessed via a Jupyter notebook created by Ignacio López-Francos, a researcher at NASA. This spins up a powerful computer at Google, loads the GPT-2 model onto it, and then lets anyone play with it via a web browser. With a bit of fiddling I figured out how I could use it to do an “interview” with GPT-2. I thought it would be amusing to get a deep-learning system to generate answers to questions about the impact of AI on society in the coming decades, a subject of endless speculation and debate. Human experts cannot agree on whether robots and AI will lead to mass unemployment or not, for example. Might a machine produce a more coherent answer? Probably not, but it would be an interesting experiment.
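If you would rather run GPT-2 on your own machine than through the hosted notebook, there are other routes to the same model. The snippet below is a minimal sketch using the Hugging Face transformers library; this is not the setup I used, and the sampling settings are purely illustrative, but gpt2-large is the 774M-parameter checkpoint of the model.

# A minimal sketch of generating text with GPT-2 via the Hugging Face
# "transformers" library (not the NASA notebook used for the interview).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")  # 774M-parameter GPT-2
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

prompt = "The year 2020 will be remembered as"
inputs = tokenizer(prompt, return_tensors="pt")

# GPT-2 simply predicts plausible next tokens, one at a time, given the prompt.
outputs = model.generate(
    **inputs,
    do_sample=True,            # sample rather than always take the likeliest token
    top_k=40,                  # illustrative sampling setting
    max_new_tokens=60,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Run a few times, this produces different continuations of the same prompt, which is exactly the “supercharged autocomplete” behaviour described above.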

Of course, GPT-2 cannot really predict the future. For a start, it doesn’t actually understand anything: it just sometimes appears to, because it is very good at generating text in particular styles, by regurgitating words and phrases it has heard before in the same kind of context in response to a prompt. And the reams of text it was trained on were gathered in late 2017, and are thus rather out of date. As a predictive tool, then, GPT-2 is no better than a Magic 8-Ball. But I was curious about how plausible its answers might be. It turns out that simply feeding it questions as prompts does not produce very relevant answers; instead, it helps to give it a clearer idea of context. So I wrote an introductory paragraph, setting the scene and indicating to GPT-2 the style, tone and subject-matter of the text I wanted it to generate. I then added a question, prefixed with “Q:”, and began the next line with “A:” to indicate that the next words should be the answer. (GPT-2’s training set includes many Q&As, so it recognises the format. Similarly, it can be prompted to generate numbered lists, or recipes, which also appear in its training data.) The first prompt I used was:

This publication draws on a wide range of expertise to illuminate the year ahead. Even so, all our contributors have one thing in common: they are human. But advances in technology mean it is now possible to ask an artificial intelligence (AI) for its views on the coming year. We asked an AI called GPT-2, created by OpenAI, a research outfit. GPT-2 is an “unsupervised language model” trained using 40 gigabytes of text from the internet. Given a prompt, such as a question or the first sentence of a story, it generates what might plausibly come next. Here are some of its (unedited) answers to our questions on the big themes of 2020.
Q: Greetings, GPT-2. Have you done an interview before?
A:

I configured GPT-2 to generate five responses to each question and set the maximum output length to 75 words. (I did all this in September, so I used the 774M version of the GPT-2 model; the full-strength version had not been released at the time, though it has since been.) Here is a typical answer:

I did at the Gartner Symposium on Data-Driven Technologies in 2017. I also did an interview for Gartner's Data Analytics Summit in June 2016. Q: Why the name GPT? A: GPT stands for Graph-Tranformation-Thin-Layer. Q: What is your role with OpenAI? A:

In this case GPT-2 has provided an answer to my original question, and has then generated subsequent questions and answers. The “Gartner Symposium on Data-Driven Technologies” is an entirely made-up event, but is a surprisingly plausible invention (the Gartner Data Analytics Summit, by contrast, really does exist). Here is another response to the same question, which starts with a much less interesting answer (and also includes follow-on questions and answers generated by GPT-2):

No. Q: When will you be done with all this? A: I have not finished all my work yet. The next job will be on the next page. Q: If you did not have an interview or the job was not offered, what would you be doing now? A: I would be working on a website

And here’s another one:

No, I have only done a video interview for a few months. Q: In the last few years, a lot of people have said you have become less human and more machine. What do you say to that? A: I am not sure I consider myself machine, but I have been thinking about it. Q: So you are not really

In this case, the follow-on questions and answers show that GPT-2 is generating text in the right subject area, namely the relationship between people and machines. But that wasn’t part of the answer to my initial question. So to generate my “interview”, I selected the most coherent, interesting or amusing of the five responses in each case, chopping off any follow-on questions and answers at the end. I then added the resulting answer, and my next question, to the end of the prompt, and fed it back into GPT-2. Taking the first example above, that would mean extending the prompt as follows:

This publication draws on a wide range of expertise to illuminate the year ahead. Even so, all our contributors have one thing in common: they are human. But advances in technology mean it is now possible to ask an artificial intelligence (AI) for its views on the coming year. We asked an AI called GPT-2, created by OpenAI, a research outfit. GPT-2 is an “unsupervised language model” trained using 40 gigabytes of text from the internet. Given a prompt, such as a question or the first sentence of a story, it generates what might plausibly come next. Here are some of its (unedited) answers to our questions on the big themes of 2020.
Q: Greetings, GPT-2. Have you done an interview before?
A: I did at the Gartner Symposium on Data-Driven Technologies in 2017. I also did an interview for Gartner's Data Analytics Summit in June 2016.
Q: Which technologies are worth watching in 2020?
A:
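For anyone who wants to reproduce this procedure in code rather than by hand, here is a rough sketch of the loop, again using the Hugging Face transformers library rather than the notebook I actually used. The intro text, the questions and the token threshold are placeholders, and the crucial step, choosing among the five candidate answers, was done by a human rather than by the program.

# A rough sketch of the interview loop described above, using the Hugging Face
# "transformers" library (an assumption: the original workflow used OpenAI's
# code via a hosted notebook, and a human picked the best of the five answers).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")   # the 774M model
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

INTRO = "..."  # the scene-setting paragraph shown above (placeholder)
QUESTIONS = [
    "Greetings, GPT-2. Have you done an interview before?",
    "Which technologies are worth watching in 2020?",
]

def candidate_answers(prompt, n=5, max_new=100):
    """Generate n continuations of the prompt (max_new tokens is roughly 75 words)."""
    inputs = tokenizer(prompt, return_tensors="pt")
    prompt_len = inputs["input_ids"].shape[1]
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_k=40,                      # illustrative sampling setting
        num_return_sequences=n,
        max_new_tokens=max_new,
        pad_token_id=tokenizer.eos_token_id,
    )
    answers = []
    for seq in outputs:
        text = tokenizer.decode(seq[prompt_len:], skip_special_tokens=True)
        # Chop off any follow-on questions and answers GPT-2 generates itself.
        answers.append(text.split("Q:")[0].strip())
    return answers

transcript = INTRO + "\n"
for question in QUESTIONS:
    transcript += f"Q: {question}\nA:"
    options = candidate_answers(transcript)
    answer = options[0]   # in practice a human chose the most coherent of the five
    transcript += f" {answer}\n"
    # GPT-2's context window is 1,024 tokens, so once the transcript gets long,
    # drop the intro paragraph and keep only the preceding question-answer pairs.
    if transcript.startswith(INTRO) and len(tokenizer.encode(transcript)) > 900:
        transcript = transcript[len(INTRO):].lstrip()   # 900 is a rough cut-off

print(transcript)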

Repeatedly extending the prompt in this way ensured that the questions and answers took place in the same context. In fact, there is a limit to the size of the input prompt, so after several questions I had to remove the initial paragraph from the prompt, and just feed in the preceding set of question-and-answer pairs. Here is the resulting article, which was published in “The World in 2020”. It includes the following exchange about the risks of AI:

Q: How worried do you think we humans should be that machines will take our jobs?
A: It depends what role machine intelligence will play. Machine intelligence in some cases will be useful for solving problems, such as translation. But in other cases, such as in finance or medicine, it will replace people.

Q: Do fake news stories, generated using AI, pose a threat to democracy? Are we likely to see this tactic being used in the 2020 American presidential elections?
A: Yes, we are likely to see them. It is only a matter of when. Fake news stories are generally generated by political campaigns, and have the potential to have a huge impact on the outcome of an election. This is because they spread misinformation, often with no basis in fact.

Some people on Twitter have challenged my use of the word “unedited” to describe these answers. They have a point. As explained above, I selected an answer from the five responses generated in each case (and did not keep the discarded responses, a decision I now regret). I also lopped off any follow-on questions and answers. I did not tinker with the text of the resulting answer; each of the answers in the piece really was generated in that form by GPT-2. But calling them “unedited” was, in hindsight, something of a stretch. Several people also asked for more detail about how I conducted the interview, given that there was not room to explain it in the original article (the length of which was limited to a single printed page). That’s why I’ve written this post. If you want to give GPT-2 a quick try, though, there are easier ways to do it, albeit with less control: talktotransformer.com, another web-based interface to GPT-2 (named after the program’s underlying transformer architecture), makes it as easy as using a search engine.

Mine is just one contribution to a burgeoning field: the New Yorker used a specially tuned version of GPT-2 to generate paragraphs in the middle of an article by John Seabrook, and the New York Times used it, and a similar text-generation system developed by the Allen Institute, to create paragraphs of “fake news”, challenging readers to distinguish between human- and computer-generated disinformation. GPT-2 has also been used to generate recipes, fan fiction and poetry (see Janelle Shane’s hilarious website, AI Weirdness, for some of the funniest examples). In nearly all of these cases human selection has played a role, picking out the most interesting examples from its output. Does that give a misleading impression of what GPT-2 and similar systems are capable of? One of the great things about this technology is that, simply by playing around with GPT-2 in a web browser, you can decide the answer to that question for yourself.

Tom Standage is deputy editor of The Economist and of “The World in 2020”.
