After my previous post on OpenAI and the role of generative AI in general, I’ve decided to research some more about the capabilities of generative AI, and what it may or may not do for people writing professionally. Because I write a lot both professionally and for a hobby, it’s plausible to me that I will soon have to learn how to use generative AI anyway, so I decided to get a head start.
To do this, I’ve read two books. One of them was Hirusha Moragoda’s ChatGPT for Content Writers, and the other one was Nova Leigh’s Chat GPT For Fiction Writing: How To Build Better Fiction Faster Using AI Technology. I’ve also done some experimenting of my own (very little, for reasons that I will explain below), and I’ve read some other stuff online on the topic as well.
The books I have read are books I am not sure I can fully recommend to the reader – clearly, the authors wrote them in great haste, probably for commercial purposes. The deeper theoretical questions of how generative AI works are barely addressed, so if, unlike me, you haven't gone about learning this in detail already, you probably won't learn much of that deep lore from these books. Moreover, generative AI is still evolving, and features are being altered and introduced as we speak.
What those books did contain was lots of practical details such as how to structure your work process, what different prompts to use, and so forth. So in that context these, or possibly future books like these, will be useful to you.
Now, before we discuss in more detail the use of ChatGPT in writing, we need to first discuss what we mean by writing and what we mean by use.
The computer cannot, and will not, serve as a Shakespeare or an Einstein. The so-called ‘AI’ machinery does not ‘understand’ the information that it holds. The absolute top-tier performance I’ve seen in this context is people who managed to get the AI to pass economics exams on their behalf. Which is very impressive. I cannot pass an economics exam at all.
[Image: a scientific calculator.] (It was legally required that I place this here.)
The machine will also not just magically summon complete texts into existence for you. A whole slew of students have now found out that they cannot just use ChatGPT to answer exam questions and not get caught.
However.
Most of the writing that we do in the real world is not Shakespeare and it is not Einstein. In the non-fiction world, millions of businesses and organizations depend every day on their employees' ability to communicate in a concise and clear manner: internal reports, ad copy, user manuals, legal briefs, millions of pages of information. In the fiction world, countless writers, both professional and amateur, entertain their readers with untold millions of pages which are obviously not Shakespearean genius. (This is not to insult anybody who enjoys them – not everything I enjoy is Shakespearean genius either.) Even if you are Shakespeare, some of the writing you've done is probably not the height of creative genius (even the actual Shakespeare had co-authors on some of his plays!)
And while AI cannot be Shakespeare or Einstein, it can do a lot of this ‘not Shakespeare’ work for you.
And this is where we come to the second word, that word use.
As we have already seen in the example of the students who tried to use ChatGPT to pass exams, ChatGPT will not just magically summon good texts for you.
Now, remember how there was a picture of a calculator in the opening of this text?
A scientific calculator will not help you solve calculus problems if you are as bad at math as I am. It will not magically summon solutions. The usefulness of a device like this increases vastly if you actually have a pre-existing knowledge of mathematics. If you think the fine folks at NASA don’t use calculators because they’re good at math, you are wrong – they use calculators a whole lot more than people like me who can’t comprehend math.
ChatGPT is exactly like a calculator, but for writing: if you are already more or less knowledgeable about the topic you’re writing about, and, more importantly, if you have pre-existing writing skills and you know how to create a structured text on the topic that you’re working on, you will get a lot more use out of it. (Would Shakespeare find such a machine useful if he were alive today? He found George Peele and Thomas Middleton useful!)
The mastery of ChatGPT is a lot like the mastery of command-line operating systems. Knowledge of the different prompts will allow you to become much more versatile in your use of the machine, and use it to adapt to many circumstances. The machine currently also has a feature that allows it to retain information about a given project (such as a novel or an article or a non-fiction book) and generate further bits of text in the context of that specific project.
Here are some of the things that ChatGPT (and, I assume, other generative AI software) can be used to do:
· It can be used to help you with brainstorming (there is even a dedicated ‘brainstorm mode’).
· You can use it to suggest names, lists of things (such as what you might need for a ‘listicle’ type of article or a report), or other kinds of tasks that require little creativity and lots of dull brain make-work.
· In fiction writing, it can suggest worldbuilding ideas, character traits, etc. It can also describe objects based on a prompt.
· In non-fiction work, it can develop technical and ad copy based on prompts, optimize articles for SEO, help transform your brief summaries into somewhat longer texts, etc.
· Another important function, especially for fiction writers but really for anyone writing a large text, is that ChatGPT can function as an editor. It can read a large text you feed it and not only make the usual proofreading suggestions but also make suggestions for improvement. Because most people are like me, and very bad at proofreading their own texts, this is incredibly helpful.
There are, however, some limitations I would suggest.
· You should avoid just copy-pasting large chunks of text from the machine into your own work, not only on whatever 'ethics' grounds but for a practical reason: if you take, say, a description of a location and drop it into your novel, or a large chunk of descriptive text and put it into your article, you're basically married to it. That's part of your setting now, or of your non-fiction text, and you need to stay consistent with it. You are as responsible for anything you took from a generative AI program as for anything you wrote yourself.
· You should fact-check items you take from a generative AI program, just as you fact-check items you read anywhere else. AI programs are as smart as their data set at best, and in some cases they are prone to ‘hallucinating’, i.e. putting in information that has nothing to do with reality and even references to sources that have never existed.
· In all items you should act as if you are responsible for anything you took from a generative AI program and added to your text, because you actually are.
Of course, while I have been reading and doing some testing with ChatGPT, there are other generative AI programs out there, and almost inevitably better ones will be released soon.
So, my findings are:
1. You need to be a good, or at least skilled, writer to use generative AI to its full potential.
2. You need to learn the system that you are using. There are books, articles, and even classes online to help you learn the use of the different prompts and the different features, as well as the general principles of the software.
3. You need to train and work on the assumption that you are responsible for anything you put in your project as if you had written it yourself. To put it bluntly, if you believe that using generative AI is not ‘cheating’, then you believe you are responsible for everything you have written using generative AI.
I myself have not yet gotten much done using generative AI, partly because I’m still not very good at it, and partly because, to be honest, I feel a bit of an aversion to using it. It feels kind of like cheating even though I don’t actually consciously believe it is in any way morally inappropriate or wrong, certainly not in terms of using it for writing things like SEO content. So there’s a bit of a discomfort there but I hope to eventually overcome it and get good.
As usual, I ask my readers to sound off. Are you using generative AI in any of your hobby or professional projects? What’s your experience? Are there any books, videos, or classes, that you would recommend?
There's a genre of "A.I." criticism that is shaped basically like: "of course it can pass an econ exam, the econ exam is in the training data". And we always eventually find the overwhelming majority of the exam in question in the training data. My own negative biases aside, at best it feels like an interpolator between different points in "text space".
Given this, the biggest item I haven't been able to come around on re: generative A.I. is the copyright problem. If we accept the picture of "generative A.I." as "predicting the next phrase" based on training data (I imagine it as interpolating between points – the training data – on an n-dimensional "text space" graph), then all of that (largely copyrighted) corpus is encoded in the model like a really shitty zip file. I think even from relatively copyleft worldviews, this is a big problem.
This is not unique to A.I., however: we see the same question with people. I read a lot. My memory is hazy sometimes, but I remember the gist of a lot of things; does that make my writing inherently copyright-infringing? I'd say no, unless I am particularly egregious (there is thankfully significant precedent to rely on), but a computer can regurgitate text more or less verbatim. And it can imitate style!
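A toy sketch can make this "predicting the next phrase from training data" picture concrete. The following is my own illustration, not how ChatGPT actually works (real models use huge neural networks, not raw word counts): a tiny bigram model that picks each next word from the continuations it has seen. By construction, everything it emits is stitched together from fragments of its training text, which is both why it can sound locally fluent and why it can regurgitate its source more or less verbatim.

```python
import random
from collections import defaultdict, Counter

def train_bigrams(corpus: str):
    """Count how often each word follows each other word in the corpus."""
    words = corpus.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start: str, length: int = 8, seed: int = 0) -> str:
    """Sample each next word from the continuations observed in training."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: this word was never followed by anything
            break
        words, counts = zip(*followers.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

corpus = ("to be or not to be that is the question "
          "to sleep perchance to dream")
model = train_bigrams(corpus)
print(generate(model, "to"))
```

Every adjacent word pair this sketch produces already exists somewhere in the corpus; scale the corpus up to "most of the internet" and you get a sense of both the fluency and the copyright worry.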
But, then again: "Friends, Romans, Countrymen, lend me your eyes, I come to bury A.I., not to praise it."
N.B. I believe I read that Bloomberg is using a model narrowly trained on financial reports and stocks to improve its financial performance. Since that is all either internal data or public information, it seems to get around my concern.