Dear Editors,
ChatGPT (OpenAI, San Francisco, CA), an artificial intelligence (AI) chatbot of the generative pre-trained transformer (GPT) family of language models, released on the 30th of November 2022, is, for better or for worse, already changing the academic landscape [1]. Be it by allowing easier access to information or by performing simple or even somewhat complicated tasks, it has already begun to revolutionize how we work. But what exactly is ChatGPT? The best way to introduce ChatGPT is to allow it to do so itself. When asked what it is, ChatGPT replies, “I am ChatGPT, a large language model trained by OpenAI. I am designed to respond to natural language text inputs and generate human-like text as output. My primary function is to assist users with answering questions, providing information, and helping with various other tasks.” ChatGPT has already begun to make research easier by helping the scientific community produce works of decent quality, and even by suggesting references, although these must be followed up, as they are not always genuine. Recent tests have shown that it is also able to generate abstracts that are very difficult to distinguish from real abstracts while avoiding detectable plagiarism [2]. So, is this the end of scientific research as we know it? Probably. Should we be scared? It would seem to depend on your point of view. Some fret that this may be the death knell for academic integrity, given the ease of cheating by having ChatGPT write an essay or sit a virtual exam, with some recent events making the news for exactly this reason [3]. Others, such as Prof David Oppenheimer, point out that cheating in academia has always existed and, as such, those who use the tool to cheat are probably the same students who would have cheated anyway [4].
Some Australian universities have decided to accept that the age of AI has arrived and to try to work with it, hoping eventually to teach students how to use it rather than fight against it [5]. We have found ChatGPT to be extremely useful in helping us collate data and find answers to questions in a way that, although achievable through a simple literature search, is far less arduous with ChatGPT. We would describe the experience as akin to talking to that one extremely knowledgeable professor we all remember from university, who dazzled us with their vast expertise and their ability to lay out years of wisdom succinctly and comprehensibly, and then to elaborate on any point thereafter. However, just like that professor, despite laying out a compelling and generally plausible answer, ChatGPT is occasionally wrong. ChatGPT may also help us overcome our coding deficiencies and expand the scope of our work. A practical example is using ChatGPT to write a simple image classification program in Python, which it can do with relative ease, allowing even a novice to create basic models.
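To give readers a sense of what such ChatGPT-generated code looks like, the following is a minimal sketch of a basic image classifier in Python, comparable to what the chatbot produces when asked. The dataset (scikit-learn's bundled 8 × 8 handwritten-digit images) and the model choice are our illustrative assumptions, not an actual ChatGPT transcript:

```python
# Minimal image-classification sketch: train a support-vector classifier
# on scikit-learn's bundled 8x8 handwritten-digit images. No external
# data or deep-learning framework is required.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 1,797 grayscale 8x8 digit images with labels 0-9
X = digits.images.reshape(len(digits.images), -1)  # flatten each image to 64 features

X_train, X_test, y_train, y_test = train_test_split(
    X, digits.target, test_size=0.25, random_state=0
)

clf = SVC(gamma=0.001)  # support-vector classifier with a small RBF kernel width
clf.fit(X_train, y_train)

accuracy = clf.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.3f}")
```

A novice could run this as-is and then ask ChatGPT follow-up questions, for example how to swap in their own images, which is precisely the kind of iterative assistance we found useful.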
Another use case of ChatGPT for the radiologist is looking up normal criteria or classic signs of certain pathologies. In our experience, it provides fairly reliable information about common pathologies. One pitfall, however, is that even when ChatGPT provides correct information, it may support it with completely fabricated references (Fig. 1). Another example of a basic mistake is ChatGPT stating that a portal venous phase acquisition is performed by injecting contrast into the portal vein and acquiring the images after 10 min to 15 min (Fig. 2). With a slight rephrasing of the question, ChatGPT gets closer to the answer, though it remains imperfect (Fig. 3). ChatGPT is currently far from a perfect model: its answers are hit-and-miss and require follow-up to ensure the veracity of the information provided …