The GPT-3 paper includes evaluation of zero-shot/few-shot performance across a wide range of tasks, but I fear that, unless one reads actual samples, those benchmark numbers convey little of what the model can really do. What can we do with GPT-3? Here, we’re all about having fun while probing GPT-3’s abilities for creative writing tasks, primarily (but far from limited to) poetry. GPT-3 struggles with commonsense reasoning & factual knowledge of the sort a human finds effortless after childhood, but handles well things like satire & fiction writing & poetry, which we humans find so difficult & impressive even as adults. In addition to the Cyberiad, I’d personally highlight the Navy Seal & Harry Potter parodies, the Devil’s Dictionary of Science/Academia, “Uber Poem”, “The Universe Is a Glitch” poem (with AI-generated rock music version), & “Where the Sidewalk Ends”.

Fortunately, OpenAI granted me access to their Beta API service, which provides a hosted GPT-3 model, letting me spend a great deal of time interacting with GPT-3 and writing things. Naturally, I’d like to write poetry with it: but GPT-3 is too big to finetune like I did GPT-2, and OA doesn’t (yet) support any kind of training through their API. Must we content ourselves with mediocre generic poetry, at best, deprived of finetuning directly on chosen poetry corpuses or authors we might like to parody? How much does GPT-3 improve, and what can it do? Turns out: a lot! Below, I walk through first impressions of using GPT-3, and countless samples.
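Since the model lives behind the hosted API, all the “programming” happens through prompt text plus a handful of sampling knobs. A minimal sketch of what a completion request looked like through the 2020 Beta API; the endpoint and field names follow OpenAI’s engines-era completions interface as I understand it, so treat the details as illustrative rather than authoritative:

```python
# Hypothetical sketch of a Beta API completion request (2020-era
# "engines" endpoint; field names are assumptions, check current docs).
import json
import urllib.request

API_URL = "https://api.openai.com/v1/engines/davinci/completions"  # assumed endpoint

def build_request(prompt, api_key, max_tokens=64, temperature=0.9):
    """Bundle a prompt and sampling parameters into an HTTP request object."""
    payload = {
        "prompt": prompt,           # the only programming interface: raw text
        "max_tokens": max_tokens,   # completion length, counted in BPE tokens
        "temperature": temperature, # higher values sample more adventurously
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

req = build_request("Roses are red,\n", api_key="sk-...")
print(json.loads(req.data)["prompt"])
```

The point of the sketch is the asymmetry: there is no gradient update anywhere in this loop, so everything one “teaches” GPT-3 must fit inside the prompt string.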
The scaling of GPT-2-1.5b by 116× to GPT-3-175b has worked surprisingly well and unlocked remarkable flexibility in the form of meta-learning, where GPT-3 can infer new patterns or tasks and follow instructions purely from text fed into it.
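A small illustration of what “purely from text fed into it” means in practice: the task is never named anywhere, only demonstrated inside the prompt, and the model is expected to infer the pattern and continue it. The word pairs below echo the translation demo in the GPT-3 paper, but the helper function and formatting are my own toy construction:

```python
# Toy few-shot prompt: the task (English -> French) is specified only by
# example pairs in the text. A model like GPT-3 infers the pattern and
# completes the final line; no weights are updated at any point.
examples = [("sea otter", "loutre de mer"), ("cheese", "fromage")]

def few_shot_prompt(pairs, query):
    """Format demonstration pairs, then leave the final answer blank."""
    lines = [f"English: {en}\nFrench: {fr}" for en, fr in pairs]
    lines.append(f"English: {query}\nFrench:")  # the model fills this in
    return "\n".join(lines)

print(few_shot_prompt(examples, "plush giraffe"))
```

This is the entire mechanism behind “meta-learning” as used here: conditioning on demonstrations in-context, rather than finetuning.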
The latest and greatest neural network for unrestricted natural language generation is OpenAI’s GPT-3. GPT-3 is like GPT-1 and the GPT-2 I’ve used extensively before, only much more so, and then going beyond them in a fascinating new way. Scaling works: quantity is a quality all its own. I hope you enjoy the samples below even a tenth as much as I enjoyed testing GPT-3 and watching the completions scroll across my screen.
I continue my AI poetry generation experiments with OpenAI’s 2020 GPT-3, which is 116× larger, and much more powerful, than the 2019 GPT-2. GPT-3, however, is not merely a quantitative tweak yielding “GPT-2 but better”: it is qualitatively different, exhibiting eerie runtime learning capabilities allowing even the raw model, with zero finetuning, to “meta-learn” many textual tasks purely by example or instruction. One does not train or program GPT-3 in a normal way, but one engages in dialogue and writes prompts to teach GPT-3 what one wants.

Experimenting through the OpenAI Beta API in June 2020, I find that GPT-3 does not just match my finetuned GPT-2-1.5b-poetry for poem-writing quality, but exceeds it, while being versatile in handling poetry, Tom Swifty puns, science fiction, dialogue like Turing’s Turing-test dialogue, literary style parodies… As the pièce de résistance, I recreate Stanislaw Lem’s Cyberiad’s “Trurl’s Electronic Bard” poetry using GPT-3. (Along the way, I document instances of how the BPE text encoding unnecessarily damages GPT-3’s performance on a variety of tasks, how to best elicit the highest-quality responses, common errors people make in using GPT-3, and test out GPT-3’s improvements in NN weak points like logic or commonsense knowledge.)

GPT-3’s samples are not just close to human level: they are creative, witty, deep, meta, and often beautiful. They demonstrate an ability to handle abstractions, like style parodies, I have not seen in GPT-2 at all. Chatting with GPT-3 feels uncannily like chatting with a human. I was impressed by the results reported in the GPT-3 paper, and after spending a week trying it out, I remain impressed.

This page records GPT-3 samples I generated in my explorations, and thoughts on how to use GPT-3 and its remaining weaknesses.
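The BPE damage mentioned in the parenthetical is easiest to see with a toy example. The merge table below is entirely made up for illustration (GPT-3’s real vocabulary has roughly 50,000 learned merges), but the effect is the same: by the time text reaches the model, characters have been fused into opaque tokens, so letter-level structure such as spelling or rhyme is simply not visible to it.

```python
# Toy byte-pair encoder with a made-up merge table (GPT-3's real
# vocabulary has ~50k learned merges; this is only an illustration).
# Each merge rule fuses an adjacent symbol pair, in priority order.
MERGES = [("r", "h"), ("rh", "y"), ("m", "e"), ("rhy", "me")]

def bpe_encode(word):
    """Greedily apply each merge rule, in order, across the symbol list."""
    symbols = list(word)
    for left, right in MERGES:
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == left and symbols[i + 1] == right:
                symbols[i:i + 2] = [left + right]  # fuse the pair in place
            else:
                i += 1
    return symbols

print(bpe_encode("rhyme"))  # -> ['rhyme']: one opaque token, no letters
print(bpe_encode("thyme"))  # -> ['t', 'h', 'y', 'me']
```

Under this toy vocabulary, “rhyme” and “thyme” share no suffix token despite rhyming perfectly, so a model seeing only token IDs must infer phonetics indirectly; that is one plausible mechanism for the rhyming weaknesses documented on this page.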