Everyone is talking about ChatGPT, the almost miraculous automated intelligence that does your homework and answers your emails. Phil Campbell explains some of the benefits and risks…
If you’ve ever used the ‘auto-suggest’ feature when writing a message on your mobile phone, then you’re already familiar with the technology behind AI text generation. Suddenly, after years of seemingly futile research, Artificial Intelligence is everywhere. It’s creating artworks; it’s composing music (just say, “Make me a soothing song with a piano background and oboes,” and moments later you’re listening); and most notably, right now the media is full of talk about ChatGPT, the genius-level text generator from OpenAI.
Back for a moment to your phone’s auto-suggest function. It’s right there with auto-correct (which half the time is infuriating), helpfully offering a prediction of the word that’s statistically the most likely to follow on in the sentence you’re writing. Based on your own writing history, sometimes you’ll find you can save yourself some hunt-and-peck typing. When I type “The”, I’m offered three options… “only,” “first” or “new.” Just for fun, I kept choosing from the offered words, and ended up with the nonsensical but almost coherent sentence “The new law will allow police officers to report crimes committed during the past three decades.”
That, more or less, is how ChatGPT works – though with a massive knowledge base that extends across almost every area of human endeavour. In effect, it’s like Google, except that it replies in fully formed sentences and paragraphs, in convincing human style. When I asked ChatGPT to explain itself in simple terms, it said this:
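For the technically curious, the ‘pick the statistically most likely next word’ idea behind auto-suggest can be sketched in a few lines of code. This is only an illustrative toy (the tiny training text and word choices here are invented for the example, and real systems like ChatGPT use vastly larger models), but the principle is the same: count which word tends to follow which, then keep accepting the top suggestion.

```python
from collections import Counter, defaultdict

# A toy 'auto-suggest': for each word in a small training text, count
# which word follows it, then always suggest the most frequent follower.
# (The corpus below is invented purely for illustration.)
corpus = (
    "the new law will allow police officers to report crimes "
    "the new rules will allow teachers to report results "
    "the first law will not allow officers to hide crimes"
).split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def suggest(word):
    """Return the statistically most likely next word, like auto-suggest."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Start from "the" and keep accepting the top suggestion, as in the
# phone experiment above -- fluent-sounding, but nobody 'meant' it.
word, sentence = "the", ["the"]
for _ in range(6):
    word = suggest(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

The output is grammatical but meaningless in exactly the way the phone experiment was: probability and syntax, with no understanding underneath.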
ChatGPT is a computer program that uses advanced artificial intelligence technology to understand and respond to natural language text inputs. It was trained on a large dataset of text from the internet, so it has a wide range of knowledge and can answer questions or generate text on many different topics. Essentially, it is designed to imitate human conversation to make it easier for people to interact with technology.
Sound simple? In reality it’s an absolute game-changer, already threatening to disrupt education, journalism, programming, and a slew of other fields. And, with access freely available through the website chat.openai.com, there are already 100 million registered users. The cat is out of the bag.
Better still, results can be delivered in a variety of styles. Ask for a eulogy in the form of a Shakespearean play, or a description of cardio-thoracic surgery in the style of a Dr Seuss book, and you won’t be disappointed.
But what negative disruption is technology like ChatGPT likely to cause? Here’s a summary from the horse’s mouth.
- Job loss: Automated systems like ChatGPT can replace human workers in customer service, data analysis, and other tasks.
- Misinformation: ChatGPT was trained on a large dataset of information from the internet, which may include false or misleading information. This could lead to the spread of misinformation if not properly monitored and filtered.
- Bias: As a machine learning model, ChatGPT can reflect the biases present in the data it was trained on. This could result in discriminatory or offensive responses if not properly addressed.
- Dependence: Relying too heavily on ChatGPT and similar technologies could lead to a decline in critical thinking skills and a loss of knowledge in certain areas.
- Privacy: The use of ChatGPT and other AI systems raises concerns about data privacy, as large amounts of personal information may be collected and processed by these systems.
In the education sector, opinions are divided. Some schools and universities have moved fast, banning the use of ChatGPT in drafting essays and assignments. Clearly, it’s a risk. Students can simply type the question into the query line, and watch while a unique fully formed answer scrolls down the screen.
Often, the answers are well researched and perfectly formed. In fact, ChatGPT is progressively passing human-based exams. When researchers put ChatGPT through the United States Medical Licensing Exam — a three-part exam that aspiring doctors take between medical school and residency — they reported that ChatGPT “performed at or near the passing threshold for all three exams without any specialised training or reinforcement. Additionally, ChatGPT demonstrated a high level of concordance and insight in its explanations.”
ChatGPT also passed four law school exams at the University of Minnesota. In total, the bot answered 95 multiple choice questions and 12 essay questions that were blind graded by four professors, intermixed with responses from human students. Ultimately, the professors gave ChatGPT a “low but passing grade in all four courses” equivalent to a C+.
Tellingly, a philosophy professor at Furman University caught a student turning in an AI-generated essay when he noticed it contained confidently phrased misinformation. “Word by word it was well-written,” said the professor. Looking more carefully, he noticed the “student” made a claim about the philosopher David Hume that “made no sense” and was “just flatly wrong.” In other words, ChatGPT has all the common sense of the auto-predict function on your mobile phone – it selects the most likely words and ideas based on probability and syntax, without any real underlying ‘intelligence.’ Almost anything can happen.
The accuracy problems are evident when you ask ChatGPT to prepare a brief history of Scots’ Church Melbourne. Again, the prose is almost perfect. But the facts are badly askew, with the chat-bot confusing the first and second versions of the church building, and attributing the current building to the original architect and builder. Somehow, despite repeated instructions, the final output was still artfully incorrect.
Perhaps the biggest problem with ChatGPT – along with other big-data based efforts at generative AI – is the old adage “Garbage In, Garbage Out.” It’s a programming truism from way back, and it’s never been more apt, given the experience of Microsoft’s ‘Tay’ chatbot in 2016 – after a few days of interacting with humans on Twitter, the bot was happily exchanging racist, sexist and hate-filled diatribes. That, on average, is what ‘humanity’ does with technology. Plus, of course, porn. It’s a running battle creating filters and algorithms to prevent our human-worst from staining our technical best. If we don’t stay alert, tools like ChatGPT will take us down a path of misinformation, narrowed perspectives and pre-curated content; ever-decreasing circles of knowledge that will extend all the way down the cultural and intellectual drain.
Meanwhile, Google is moving fast to maintain market share. A new service (with the appropriately humble name “Apprentice Bard”) shares the same skill-set as ChatGPT, and will leverage Google’s massive search-engine advantage. Ultimately, we’ll all enjoy having our search results delivered in plain English prose, courtesy of a personalised research assistant. At the same time Microsoft has announced a huge investment in ChatGPT, with the aim of integrating AI features into flagship programs Word and Outlook. Stay tuned – in the next few months it’s more than likely you’ll be receiving bot-written emails, reading bot-written articles, and consuming auto-generated text whether you like it or not. I asked ChatGPT for a few final words for this article:
In conclusion, ChatGPT is a powerful artificial intelligence technology that has the potential to bring many benefits to various industries and education. However, it is important to also consider the potential negative impacts and take steps to minimise them. As ChatGPT and other AI systems become more prevalent, it will be important for society to have a responsible and ethical approach to their development and deployment. Additionally, continued research and development in the field of AI ethics and accountability is crucial to ensure that these technologies are used for the benefit of all.