It’s been hard to avoid ChatGPT lately (and more broadly, the class of “artificial intelligence” tools to which it belongs). There’s breathless coverage everywhere, and it seems to be split almost evenly between rapturous claims about AI’s promise and warnings about the apocalypse its use will bring. The reality, of course, is somewhere in between. It’s possible to use ChatGPT as a tool to write more efficiently and effectively, and it’s possible to do so ethically – but doing so takes some careful thought.
That careful thought has to start with understanding how ChatGPT works, and thus what it can do (and what it can’t). ChatGPT is a “large language model”: a statistical model that (to oversimplify a little) is very good at producing English text that sounds like a human wrote it. It does this (again oversimplifying) by taking a “prompt” and knowing what words, in the huge corpus of online writing it has previously digested, usually follow that prompt. Initially, ChatGPT’s prompt is what you ask it; then as it produces text, the prompt for each new word or phrase is the passage that precedes it. If this sounds rather like the “predictive text” feature on your cell phone, that’s because it is: just much, much more powerful.
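The "predictive text" idea can be made concrete with a toy model. The sketch below is a bigram model: for each word in a tiny corpus, it counts which words tend to follow it, then extends a prompt one word at a time by always picking the most common successor. This is an enormous oversimplification (the corpus, function names, and greedy word-picking here are purely illustrative, and ChatGPT's actual model is vastly more sophisticated), but the core loop is the same: predict the next word from what came before, then feed the output back in as the new prompt.

```python
# Toy bigram "predictive text": NOT how ChatGPT is implemented,
# just an illustration of next-word prediction from a corpus.
from collections import Counter, defaultdict

# A tiny stand-in for the "huge corpus of online writing".
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat ate the fish"
).split()

# Count, for each word, which words follow it in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word that most often follows `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

def generate(prompt, n=5):
    """Extend `prompt` by n words, feeding each output back in as context."""
    words = prompt.split()
    for _ in range(n):
        words.append(predict(words[-1]))
    return " ".join(words)

print(generate("the cat"))
```

Note what this toy model shares with its giant cousins: it produces fluent-looking text with no notion of whether that text is true, which is exactly the point developed below.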
Knowing how ChatGPT works matters because it leads to the first of two absolutely crucial insights you must come to before you can use it as an effective and ethical writing tool. That insight: ChatGPT does not know anything. ChatGPT produces plausibly phrased and styled text, but it doesn’t know or care whether that text is true. User after user has been shocked to find that, as a result, ChatGPT “hallucinates” (or invents) supposedly factual statements that aren’t true. In academic writing, that includes misstatements of all kinds, as well as plausible-sounding citations – for statements true and false alike – that don’t actually exist.
Here’s the other absolutely critical insight about ChatGPT: you are responsible for any mistakes it makes. Just as the woodworker, not her chisel, is responsible when a cabinet door won’t close, you are responsible for whatever bears your byline – no matter what tools you used to write it. If your paper includes an imaginary citation, the fact that it came from ChatGPT doesn’t excuse your including it. What comes out of ChatGPT – or any writing tool, for that matter – is only a suggestion, something you can consider using in your manuscript; but you (and your coauthors) must make the decision to use it and then stand behind that decision.
With these two insights behind us, we can put ChatGPT to work. Consider first how to use it effectively. ChatGPT can’t be trusted to generate content, so don’t ask it for a literature review or to critically evaluate an idea. Instead, think of it as an idea generator and as a style assistant. Here are some writing tasks that take advantage of what ChatGPT is designed to do:
- Ask ChatGPT for a list of hypotheses one could test with a set of available data.
- Ask ChatGPT to suggest three alternative “pitches” for the importance argument in a grant proposal.
- Give ChatGPT rough notes and ask it to provide a full-text draft.
- Give ChatGPT a rough draft and ask it to polish the grammar and syntax, revise the structure to improve logical flow, or make it more concise.
- Give ChatGPT an informally written draft, and ask it to rewrite the text more formally, specifying that the result should be in the style of papers in your field. (Be careful, though: ChatGPT is very good at reproducing the writing patterns of past literature, and many of those patterns are bad.)
- Give ChatGPT a formally written draft, and ask it for a more engaging, less formal version. Much of our literature could use this treatment!
- If you’re using methods you’ve written about before, give ChatGPT some of your old Methods text and ask it to paraphrase. Academic writers chafe at the need to avoid “self-plagiarism” in repeated descriptions of similar methods, but it’s always possible to rephrase.
- Give ChatGPT a manuscript, or its Abstract, and ask it to generate a plain-language summary. You can even specify a target reading level.
All these (and more) are tasks that take advantage of ChatGPT’s strength: its ability to say things that sound like things people say. In every case, of course, you’ll treat ChatGPT’s output as suggestions to consider, not as text to be absorbed uncritically into your work. Is ChatGPT’s version better than yours? Sometimes it will be, and that’s great. Is it worse? Often it will be, and you’ll discard it. Or does it suggest an alternative you hadn’t thought of before? That’s valuable too: a new idea you can develop in your own words.
I hope you’ll agree with me that we should use our tools not just effectively, but also ethically. That includes ChatGPT. Because AI writing tools are new, there’s some confusion about what is and isn’t fair game in their use. I think that can be resolved by applying the same ethical standards you’d consider for any other writing tool.

First, take responsibility. Check ChatGPT’s suggestions carefully: it isn’t your coauthor’s, or a peer reviewer’s, responsibility to find and fix the nonsense it may generate. If something unfortunate makes it through, accept the consequences with grace – including prompt retraction of a paper, with apologies, if it should ever come to that. (Fortunately, if you’ve followed the advice above, it won’t come to that.)

Next, don’t misrepresent your work. Some journals require disclosure if a manuscript was written with AI assistance; if they do, disclose it. Yes, it’s a little strange, as the same journals don’t care that you used Word’s spellcheck tool or had grammar advice from your brilliant housemate Aditi; but if they ask, disclose. If you have coauthors or supervisors, it’s probably best to explain your ChatGPT use to them too.

Finally, as for any tool, consider the environmental and social costs. For ChatGPT those include the carbon footprint of the considerable computational power you’re harnessing, the fact that the software was trained on a corpus of human-produced work whose authors aren’t acknowledged or compensated, and the psychological toll on human workers who have been tasked with finding and labelling hate speech, violent threats, and more in that corpus. To be clear, all tools have costs like these; an ethical tool user understands and weighs them.
Does this all sound like ChatGPT might be a difficult tool to master? Perhaps it is; but so is an atomic force microscope, or a bandsaw, or an oboe. With some thought and some practice, you can use ChatGPT to expand your writing skills without letting it lead you astray.