Here’s a fun, quasi-philosophical question with the potential to trigger a major reassessment of some of our core ideals.
The last time you wrote an email, the software or web service you were using probably suggested a few words to complete a sentence or phrase. If you hit the tab key to accept the suggestion, did you actually write that email?
It’s a question we are all going to have to answer soon. And not just answer: we will also have to decide what it means for how we teach and assess subjects that require or rely on creativity.
Infinite Monkeys on Infinite Typewriters
I’m the head of artificial intelligence at Turnitin. I spend most of my days thinking about how machines and algorithms can and should “read” written materials. My team teaches computers how to evaluate and score language to detect plagiarized or inauthentic writing, how to give constructive writing prompts, or how to help instructors streamline and improve the consistency of their grading.
The work that we do has some common lineage with the auto-complete or auto-suggest features we see in our email software these days, and the tools we will likely see in other writing platforms soon. So, to the question: Who is the author of something that includes words, phrases, or entire passages suggested by software?
To start, it may be instructive to distinguish creative or expressive writing from written communication, where the latter is the transactional delivery of information in written form.
Think of a business email confirming an appointment or notifying a supervisor that an employee may be late. Here, clarity of message is paramount and creativity, nuance, and originality may actually be counterproductive. Does it matter if the employee personally types every word of that note? Maybe not. And that means that for a lot of day-to-day written communication, AI-assisted writing can be helpful. If it can sidestep miscommunication and save time by auto-populating clear and concise syntax, the result is largely positive with little downside.
Where it quickly turns more complicated is in academic or creative settings, or even in research, where authenticity and originality are exceptionally important. Even there, it may not matter if, in a screenplay, an AI predictor fills in “me back?” after the phrase “Can you call.” After all, the AI suggested that ending based on what it predicted the writer was going to type. In other words, “me back?” is likely what was intended anyway; that a computer filled it in and the writer approved it probably matters very little.
But what if, after the screenwriter types, “Can you call,” the AI suggests “me by my real name?” And what if, at that moment, the writer thinks, “Well, that’s interesting, let’s see where that goes”? When that happens, whose original idea is it? Who is actually writing?
It is not an entirely philosophical exercise, because AI can already do more than complete phrases based on good guesses. Recent advances in text generation mean that AI can now write entire paragraphs or even pages of material from a single prompt. AI can give you a thousand “original” words on any topic, nearly instantaneously. And while we can probably all agree that something generated entirely by AI does not and should not count as the work of whoever’s name is on the paper, the real and forthcoming challenges are more complicated than that.
Does an AI Deserve an A Grade?
The first issue: How will we know when writing is AI-generated?
Today, AI-written material can already fool human readers, because humans read for meaning and AI’s command of semantics has matched or exceeded human ability. Other AI systems, however, are still fairly good at picking out computer-generated text, because they can be trained to look for the statistical patterns in syntax that computers tend to produce, like repeated verb arrangements and similar adjective usages. As AI gets better at emulating the statistical signature of human writing, though, that won’t always be the case.
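To make the idea of “statistical patterns in syntax” concrete, here is a deliberately toy sketch in Python. It is not Turnitin’s method or any real detector; it only measures one crude signal, how often the same word trigram repeats in a passage, which is the flavor of regularity early detectors looked for. Real systems use far richer syntactic and probabilistic features.

```python
from collections import Counter

def repeated_trigram_rate(text: str) -> float:
    """Fraction of word trigrams that occur more than once.

    A crude proxy for the kind of statistical regularity a
    detector might flag; repetitive, template-like text scores
    higher than varied prose.
    """
    words = text.lower().split()
    trigrams = list(zip(words, words[1:], words[2:]))
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

# Two made-up snippets for illustration only.
varied = "The quick brown fox jumps over the lazy dog near the riverbank."
looped = ("the cat sat on the mat and the cat sat on the mat "
          "again and the cat sat on the mat")

print(repeated_trigram_rate(varied))  # low
print(repeated_trigram_rate(looped))  # noticeably higher
```

The point of the sketch is the limitation it exposes: any single surface statistic like this is easy for a better generator to smooth away, which is exactly why detection gets harder as the models improve.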
The second issue: How do we teach and assess writing skills in an AI-writing world?
What do we do in the more common cases that fall between the extremes, where the AI does more than complete an everyday phrase but less than write a whole passage?
Even if we are good at spotting it, how do we evaluate it? In a classroom, is submitting a paper that is 30 percent AI-assisted cheating? What about one that is 60 percent? Does the author of a novel of which 20 percent is “written” by AI deserve credit or copyright protections? Does the AI engineer deserve some credit?
The point is that sorting out the value and validity of the end product — assuming we can reliably detect it — is going to be a big task. And the consequences of AI-assisted writing will change the way we teach written communication, too.
Is the Future of Writing in Its Past?
I already see a future in which, in business school for example, students are taught not only how to use AI-writing tools but also how to manage and deploy them across broad communities of customers, investors, and employees, where the goal is consistency, clarity, and efficiency rather than personalized style.
Down the hall or across campus, in liberal arts classes, the lessons might be dramatically different. There, instructors who value creativity and originality in style, thinking, and communication might find themselves in pitched battle against these well-intended, but increasingly powerful, technology tools. Some instructors may argue they already are. Unable to stop the spread and improvement of AI writing, might classes that rely on the creative word distance themselves from contemporary technology, or from writing altogether, reverting to hand-written, in-class prose composed under observation, or to oral storytelling?
Such a shift would make the mastery of writing fundamentals such as grammar, syntax, and diction a much more specialized skill than it is today, while skills such as critical thinking, argument construction, and source usage become the center of gravity for what most learners and employers consider “writing.”
Not That Pandora, the Other One
While this future might seem far-fetched, history suggests otherwise. Consider that, not long ago, most companies employed enormous departments of people doing quantitative work almost entirely by hand. Today, a quantitative job is better described as supervising automated computation: dictating high-level objectives and leaving the nuts and bolts of the math entirely to the machine.
It is an inevitability that computer-assisted writing will become a ubiquitous and indispensable part of human communication. Our responsibilities as educators and builders of educational tools require us to reimagine our concepts of creation and assessment, knowing that to do so opens a Pandora’s box of questions about originality, integrity, the shifting role of writing in our society, and the place of increasingly advanced AI in writing education.
Such a shift would turn evolving notions of teaching with technology on their ears. Remember the argument that there is no point in learning cursive handwriting because no one writes by hand? No point in knowing how to write a check because of online banking? No point in learning to add or subtract because of calculators, or in learning any fact because of Google? What if, to incentivize and verify independent thought and expression, you could not use any of those things?
I know, that is a great many questions.
And that’s my point. If these questions are not here already, they are coming, and we would be wise to start considering answers soon. The better AI gets at filling in answers, and the more people accept and use it, the more we will have to reimagine our concepts of creation and assessment, which are two pillars of teaching and learning.
And that’s fine, maybe even good. But it is probably — the AI thought I was going to say “not great” just there — not optional.