If you write seriously you’ve probably had occasion to look up the meaning of a word in a dictionary, and you’ve probably gone to a thesaurus for the word that expresses precisely what you want to say.
If you write with a computer word-processing program you may have used its online dictionaries, thesauruses, and similar aids, and you may have allowed your writing to be corrected by helpful extras that detect misspellings and grammatical errors. You may even have consented to have your prose ‘improved’ by programs that detect weak verbs, excessive use of the passive voice, and so on.
Are you wondering what’s next? Will it ever be possible to instruct a computer to compose meaningful, grammatically correct, and idiomatically proper English on some subject with no further intervention from the human writer?
The answer is yes, sort of, and the development has some implications for genealogists.
Programs designed to write prose are called ‘chatbots’. One of the more successful is ChatGPT, an Artificial Intelligence (AI) tool currently being trialled which, given an initial text prompt, produces prose that continues the prompt.
It is sometimes claimed that chatbots are able to understand human language as it is spoken and written, and on this basis have the ability to compose meaningful prose. This is not so; a chatbot constructs sentences using patterns it has detected in text it has examined, constrained by rules imposed by its developers.
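To see what ‘constructing sentences from patterns’ means at its very simplest, here is a toy sketch. This is emphatically not how ChatGPT itself works internally (real systems are vastly more sophisticated), but it illustrates the principle: the program continues a prompt by choosing words that have followed the previous word in its training text, with no grasp of whether the result is true.

```python
import random

# Toy illustration only: pick the next word by looking at which words
# have followed the current word in some training text. This is pure
# pattern-matching -- the program has no understanding of facts.
training_text = (
    "the cat sat on the mat the dog sat on the rug the cat chased the dog"
).split()

# Build a table: word -> list of words observed following it.
follows = {}
for a, b in zip(training_text, training_text[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, length=8, seed=0):
    """Continue a one-word prompt by repeatedly sampling a word that
    followed the previous word in the training text."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: no word ever followed this one
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output is grammatical-looking word salad drawn entirely from the training text. Scaled up enormously, the same idea produces fluent, authoritative-sounding prose with exactly the same indifference to truth.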
Chatbots seem clever, but their output is unreliable, often misleading, and from time to time egregiously and dangerously false.
A new group on Facebook has been set up to explore the ChatGPT tool from a genealogy perspective [https://www.facebook.com/groups/genealogyandai].
To demonstrate the dangers of using chatbots, my daughter had a ChatGPT chatbot write a biography of Christine Anne Young—me—complete with cited sources.
It produced a biography without a single correct fact. The sources were entirely made up.
[The chatbot was given my unmarried name.] My date and place of birth were wrong, and my education and achievements were given incorrectly. Publications said to be mine were falsely credited to me.
‘Champion de Crespigny’ is reasonably easy to research, for very few people have this surname, so I am confident that ChatGPT is not confusing me with someone who has the same name.
ChatGPT cannot be relied on for research. It is a prose-construction writing aid that exploits language-pattern recognition; it does not understand facts. It produces authoritative-looking text, but that text cannot be trusted, and it is certainly no substitute for real research.
Further reading
- Hughes, Alex. “ChatGPT: Everything You Need to Know About OpenAI’s GPT-3 Tool.” BBC Science Focus Magazine, 16 Jan. 2023, https://www.sciencefocus.com/future-technology/gpt-3/
- Bowman, Emma. “A New AI Chatbot Might Do Your Homework for You. But It’s Still Not an A+ Student.” NPR.org, National Public Radio, 19 Dec. 2022, https://www.npr.org/2022/12/19/1143912956/chatgpt-ai-chatbot-homework-academia