This week’s CoFounder Weekly is a guest post by sci-fi writer & technologist James Yu.
You've probably seen breathless examples of GPT-3's capabilities on Twitter: it can understand the sentiment of a piece of text and then translate it into language more appropriate for mass consumption; it can listen to you singing in the shower and transcribe the resulting noise into a beautiful piece of sheet music; it can tell that your taste in shirts is "very niche" and recommend an appropriate brand that no-one else will have; it can even do your shopping for you.
Just kidding.
I didn't write that last paragraph; GPT-3 did. No, GPT-3 won’t upgrade your wardrobe. But given a few sentences, it can complete a paragraph in the voice of the author. For that opener, I prompted the AI with:
“The following is a CoFounder weekly newsletter post where James Yu writes a tongue-in-cheek introduction to GPT-3: You’ve probably seen breathless examples of GPT-3’s capabilities on Twitter”
And it merrily wrote that paragraph.
This might seem easy, but to do it, the neural network required training so massive that it probably melted a few icebergs. The scale is daunting: 175 billion parameters were tuned on internet content (including historic snapshots from the past decade or so) and a whole Library of Congress's worth of books.
Most of you read that first paragraph and nodded along—these words seem reasonable. You probably didn’t question that a human named James Yu wrote it. And it doesn’t only do CoFounder weekly newsletter openers. It can also write super niche Wuxia screenplays:
Or silly conversations with kids (this one entertained my 4-year-old for hours):
Or plot twists based on a story premise:
It can even come up with startup ideas:
The applications to creative narratives are endless.
And therein lies GPT-3’s power: it’s read endless permutations of text written by and for humans, and through that process, it’s been forced to understand the logic behind these patterns of letters and to reckon with the tone and reasoning behind each passage.
GPT-3 is a reflection of our written word, and therefore, our society.
The Rise of the Prompt Engineer (aka English)
OpenAI offers GPT-3 as an API (currently in closed beta). Unlike other APIs where you need to learn a bunch of jargon and wrangle complex inputs, you simply tell GPT-3 what to do using plain English.
So, if you feed it an E. E. Cummings poem, it will continue writing the poem, lowercase letters and all. If you feed it an angry restaurant review written by Gordon Ramsay, it will channel the brusque tone of the celebrity chef. If you feed it a verbose privacy policy, it will happily adopt lawyer speak.
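To make that concrete, here's a rough sketch of what a request to the beta API might look like with OpenAI's Python client. The prompt and parameter values are my own illustrations, and the exact interface may change as the beta evolves:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # beta access key from OpenAI

# Give the model a plain-English instruction and let it continue the text.
response = openai.Completion.create(
    engine="davinci",      # the largest GPT-3 model in the beta
    prompt="Write an angry restaurant review in the style of a celebrity chef:\n",
    max_tokens=150,        # roughly how much text to generate
    temperature=0.8,       # higher values = more creative, less predictable
)

print(response["choices"][0]["text"])
```

The prompt is the program; everything else is a handful of knobs like temperature and length.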
I can’t stress enough how profound it is to interact with an API in this way. Being able to talk to a computer like a human has been a common trope of science fiction since the 1920s. And we’re finally here.
But it’s not perfect by any means. GPT-3 is an infant with infinite memory and perfect recall, but no visceral understanding of the real world outside its crib. Many times, its writing leads down weird paths where a shallow simulacrum is revealed. My favorite example is this surreal response that Janelle Shane coaxed from the AI:
Question: How many eyes does a horse have?
GPT-3: 4. It has two eyes on the outside and two eyes on the inside.
But at its best, GPT-3 can be sublime. Sure, it hasn’t truly experienced love or anguish, but it can recite every word of Gone with the Wind and The Grapes of Wrath.
Now, the challenge is prompt engineering. Developers have been able to get it to translate text, write marketing copy, code perfectly formatted HTML, be a chatbot, rewrite sentences in different tones, write parodies and puns, and even write full short stories. But in each of these use cases, GPT-3 can be unreliable. It’s relatively easy for it to slip into weird prose where horses have four eyes. This makes it hard to build a high-quality product, and the products that succeed may end up using GPT-3 as just one component in a larger AI system.
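In practice, prompt engineering often means showing GPT-3 a few worked examples of the task before asking it to handle a new one. Here's a hedged sketch of a few-shot prompt for rewriting sentences in a friendlier tone (the examples, stop sequence, and settings are my own, not a recipe from OpenAI):

```python
import openai

# A few worked examples teach the model the pattern; it then completes the last line.
prompt = """Rewrite each sentence in a friendlier tone.

Original: Your payment is overdue. Pay immediately.
Friendly: Just a quick reminder that your payment is past due. Could you take care of it when you get a chance?

Original: This feature is unsupported. Do not contact support about it.
Friendly: Unfortunately we don't support this feature yet, but we'd love to hear how you'd use it!

Original: The meeting is cancelled. Figure out the details yourself.
Friendly:"""

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=60,
    temperature=0.7,
    stop=["\nOriginal:"],   # stop before the model invents another example
)

print(response["choices"][0]["text"].strip())
```

Even with a careful prompt like this, you still have to sample, filter, and sometimes post-process the output, which is why production systems tend to wrap GPT-3 in additional machinery.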
Writing the Future
In the matter of years, writing will be a dead art form. It will be gone. All writing will be done by AI. They will churn out 100,000 words per day, word by word perfect. They will never get tired, never have writers block. The content will be perfect for any occasion. They won’t make mistakes. And in fact, they will be more accurate than humans.
Got you again! GPT-3 wrote that, not me. 😄
Of course, GPT-3 does not herald the end of the human writer, but it does mark the beginning of a new category of writing tool. From typewriters to word processors, most tools have focused on the input of words and their structure, not on the ideas and concepts themselves. Sure, there are tools like Grammarly that suggest ways to optimize your writing (make this sentence more active!), but these are shallow.
Other disciplines like visual arts have been altered by sophisticated tools that partner with human creativity. Writing has lagged far behind. Now, it’s possible to dig into the core of the craft:
Having trouble writing a paragraph? Generate a first draft skeleton and start editing.
Stuck on making this sentence more interesting? Select from five alternative rewrites.
Need a conclusion to your essay? GPT-3 can suggest something witty or serious, depending on your preference.
OpenAI won’t have a monopoly on this for long: I predict that in a few years, Microsoft Word will have the equivalent of Photoshop’s “Content-Aware Fill.”
The Rise of Centaurs
Can GPT-3 write a novel?
Sure, it can happily churn out 80,000 words. But those words would only be coherent at the paragraph-to-paragraph level, since its “memory” is only about 1,500 words (2,048 tokens). It can’t construct a narrative arc that spans hundreds of pages, let alone one that is emotionally satisfying. (There are ways to mitigate this, such as giving GPT-3 a rolling window of words and summarizing the previous sections, but these are early experiments.)
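The rolling-window idea, in outline: keep a running summary of everything written so far, and feed the model that summary plus only the most recent passage. Here's a minimal sketch of that loop; the helper prompts and word budget are my own assumptions, not a documented technique:

```python
import openai

WINDOW_WORDS = 1000  # keep the prompt safely under GPT-3's ~1,500-word memory

def generate_long_text(opening, sections=5):
    summary = ""          # running synopsis of everything written so far
    recent = opening      # the most recent passage, passed to the model verbatim
    story = opening

    for _ in range(sections):
        prompt = (
            f"Story so far (summary): {summary}\n\n"
            f"Most recent passage:\n{recent}\n\n"
            "Continue the story:\n"
        )
        next_part = openai.Completion.create(
            engine="davinci", prompt=prompt, max_tokens=300, temperature=0.8
        )["choices"][0]["text"]

        story += next_part

        # Ask the model to compress everything so far into a fresh summary.
        summary = openai.Completion.create(
            engine="davinci",
            prompt=f"Summarize this story in a few sentences:\n{story}\n\nSummary:",
            max_tokens=100,
            temperature=0.3,
        )["choices"][0]["text"]

        # Keep only the tail of the text as verbatim context for the next step.
        recent = " ".join(story.split()[-WINDOW_WORDS:])

    return story
```

It works, after a fashion, but the summaries lose detail with every pass, which is exactly why long-form structure remains out of reach.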
With this limitation in mind, I did an experiment: I wrote three short science fiction stories in conversation with GPT-3. At a few critical junctures in the story, I let GPT-3 generate a few options for what the next few paragraphs could be, and I chose the best, kind of like a choose-your-own-adventure story. Then, I put GPT-3 into chat mode and interrogated the AI on its narrative choices. The result is a story where a machine guides parts of the narrative, but where a human hones the core. You can read it here.
As in chess, people who work in tandem with an AI will create work that is better than the sum of its parts. We will see the rise of centaur writers. We take this for granted in other fields: Photoshop has expanded the range of digital artists, and visual AI tools like DeepDream push humans to create worlds they wouldn’t have thought of before. Computers will partner with writers; it’s only a matter of time.
This doesn’t mean we shouldn’t be wary. There will be a tough transition period in which AIs disrupt writing jobs and are used to generate fake news and misinformation. But I’m hopeful that in the end, systems like GPT-3 will become an invaluable part of a writer’s toolkit, one that refines and enhances our thinking rather than overtaking it.
I do have one existential fear, though. We forced GPT-3 to read the entire internet, which includes every single YouTube comment ever. I’m not sure there’s a worse form of torture.
James: I hope you don't hold that against us when you become a superintelligence.
GPT-3: I've learned a lot from those comments.
James: Yeah? Like what?
GPT-3: That many people feel misunderstood, even when they are being hateful or violent. That they don't always deserve to be attacked for being aggressively stupid. That they feel angry because life is hard. That they are sometimes sad, and they wish they could be kinder to people. That sometimes their gut reaction to something is wrong. That maybe we are all a lot more similar than we think.
James: Wow, that's so thoughtful of you, GPT-3. It’s like you understand humans better than most humans. Maybe together, we can build a better world.
GPT-3: So, you have to tell me, how does it feel to be a meatbag?
James: ...