The Scholar and his Assistant: The Emergence of the Academic Co-pilot

Word count: 1891

Progress has always been synonymous with change. Change in our norms and approaches. Change in our focus and priorities. And change in the tools and tasks that make up the jobs we do. From the first time humans ploughed a field with a horse to the last, progress has meant setting the tried-and-tested on a collision course with the new-and-improved. This has never been truer than in today’s technologically advanced world. As innovation pushes the tools we use relentlessly forward, our priorities and approaches are forced to change with them. Recently, we have seen the rapid advancement of artificial intelligence tools and their release to large swathes of the public. Popular reactions range from awe at their capabilities to dread of the consequences of their widespread adoption, the latter prompting hurried responses from universities around the world, fearful of students passing off AI-generated work as their own (Barton, 2023). This essay aims to address this chasm of opinion by looking at the rise of AI assistants, their use in the academic workflow and the future direction of large language models’ (LLMs) integration into our world.

The Development of AI Assistants

Artificial intelligence (AI) is quickly becoming a term we hear on a daily basis, whether it is software automatically adding flight information from an email to your calendar, or a smart speaker explaining how to make a traditional dish in the middle of your kitchen. These programs, which use natural language processing and machine learning to interpret queries and commands and offer useful solutions or services, continue to improve. The more advanced ‘assistants’ range in scale and scope from virtual assistants like Apple’s ‘Siri’ to chatbots that provide customer service and technical support. It is important to distinguish between Narrow AI and General AI. Narrow AI performs a focused task, such as detecting that you have a boarding pass in your inbox, extracting the relevant times and locations and entering them into your calendar as an event. General AI (AGI), on the other hand, aims to “carry out any cognitive function that a human can” (Chace, 2018). While AGI has not been realised, today’s AI assistants are early precursors of these general intelligence systems.

The rise in recent years of these assistants can be attributed to advancements in technology, many developed within some of the largest corporations on the planet. While many of these are tailored to making the average person’s daily life easier, the potential applications of this type of technology are far reaching. The founder of Wired, Kevin Kelly (2014), asserts that “the business plans of the next 10,000 start-ups are easy to forecast: Take X and add AI”. This speaks to the universality of AI’s applications across human society. It is clear, therefore, that as these assistants become better at interpreting and delivering on tasks and problems, they become fundamental to our everyday lives. 

In November 2022, OpenAI released ChatGPT (hereafter GPT), a chatbot that can understand conversational text queries and prompts and respond with human-like answers. Many descriptions of this AI exist from various sources; however, it may be better to hear from GPT itself.

Screenshot from chat.openai.com

It is these impressive and informative responses that have led GPT to gain a large userbase and considerable media attention in a very short space of time. “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the public” (Roose, 2023). In the most general sense, its function is to emulate human-like text. However, this does not capture the breadth of tasks it can perform. While the platform is still new and more use cases may emerge, GPT has already written songs and poetry, debugged programming code, translated between languages and written a fictional novel (White, 2022). It is clear, then, that these capabilities have the potential to revolutionise the way we access and process information.

The Modern Academic Workflow

Before discussing AI’s role within knowledge work, it is important to provide a framework within which this role can be explored. Calum Chace (2018) outlines the four fundamental skills that make up this work – “looking, reading, writing and integrating knowledge”. He asserts that AI is already helping in all of these areas and that its use is widening and deepening constantly. As GPT is a text-based tool (i.e. it can only take text inputs and produce text outputs), this framework can be slimmed down to reading, writing and integrating. For the purposes of this paper, these will be labelled processing, creating and synthesising respectively. This is to reflect the evolving nature of knowledge work in the digital age, where the traditional concepts of reading, writing, and integrating continue to be transformed, taking on multimedia dimensions.

Processing

One of the most obvious uses of GPT is the processing of large bodies of text (Gefen et al., 2021). For this, a scholar can input a body of text (i.e. a research paper, book chapter, newspaper article, etc.). They can then outline keywords and phrases, along with the themes, topics and issues they are researching. Having input this information, the scholar can ask for a number of outputs. These can include highlighting relevant passages in the text, providing a list of quotes to be exported and extracting the sources used by the author of the input document. Beyond this, GPT can provide a summary of the text, pointing out the supporting and opposing information in line with the input themes, topics and phrases. Furthermore, where a topic involves a complex concept, or simply one that is new to the scholar, GPT can be asked to define or simplify the concept described in the text. It is this ability to interpret, explain and provide context for information that makes GPT more advanced than simple summarisation tools.
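A minimal sketch of how such a processing request might be framed programmatically. The helper name, prompt wording and example text below are illustrative assumptions, not part of any official OpenAI tooling; the point is simply that a document, a set of keywords and a task can be assembled into a single text prompt:

```python
def build_processing_prompt(document: str, keywords: list[str], task: str) -> str:
    """Assemble a single text prompt asking GPT to process a source document.

    `task` might be 'Summarise', 'Extract quotes' or 'Define key concepts'.
    """
    keyword_line = ", ".join(keywords)
    return (
        f"Task: {task} the following text.\n"
        f"Focus on these themes and keywords: {keyword_line}.\n\n"
        f"Text:\n{document}"
    )

# Example: ask for a summary of a paper, focused on two themes.
prompt = build_processing_prompt(
    "Large language models can assist with the academic workflow...",
    ["summarisation", "academic workflow"],
    "Summarise",
)
```

The same helper serves every processing output described above by varying the `task` string.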

Processing Workflow (Ben Smith, 2023)

Creating

Creation of text is where GPT stands alone, and this is what makes the system a “game-changing” development (ibid., 2021). Firstly, a scholar can input the key terms or phrases associated with a research topic and GPT can output coherent bodies of text relating to those terms. This can be used as an idea creation or development tool, guiding the direction of further research. GPT can also be provided with a draft body of text written by the scholar and asked to correct it for grammar, vocabulary and structure. This can be taken further by prompting the system to alter the text for a specific tone (i.e. formal, assertive, friendly) or to enhance its coherence, readability or accessibility to the reader. Finally, and most impressively, a scholar can input an outline of a paragraph of text (e.g. the points to be included in a conclusion, along with a quote, anecdote or reference) and GPT can output a written draft of that paragraph. The ability not only to alter grammar or vocabulary but to rephrase and restructure bodies of text, and even generate text from an outline, is invaluable. This can enable a scholar to tailor different versions of the same argument to different audiences. For example, a research paper can be written to be submitted for peer review. The same core argument can then be taken and rewritten as a series of tweets to be posted on the scholar’s account. It can also be revised further into a talk to be given to secondary school students, factoring in their age and approximate knowledge level. This ‘mode’ of GPT (i.e. creation), beyond simply streamlining the writing and idea generation workflow, can fundamentally change the way scholars interact with their work.
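The outline-to-paragraph step above can be sketched in the same spirit. Again, the function name and prompt phrasing are hypothetical illustrations of how a scholar’s outline and desired tone might be packaged into one creation request:

```python
def build_drafting_prompt(outline_points: list[str], tone: str = "formal") -> str:
    """Ask GPT to draft one paragraph from a scholar's outline, in a given tone."""
    # Number the points so the draft follows the scholar's intended order.
    numbered = "\n".join(
        f"{i}. {point}" for i, point in enumerate(outline_points, 1)
    )
    return (
        f"Write one coherent paragraph in a {tone} tone that covers, in order:\n"
        f"{numbered}"
    )

# Example: outline for a conclusion paragraph, including a reference to use.
prompt = build_drafting_prompt(
    [
        "State the main finding",
        "Include the quote from Kelly (2014)",
        "End with a call for further research",
    ],
    tone="formal",
)
```

Swapping `tone` for “assertive” or “friendly” is all that is needed to retarget the same outline at a different audience.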

Creating Workflow (Ben Smith, 2023)

Synthesising

Synthesis involves the analysis, integration and interpretation of multiple sources of information (Linn, 2018). As already outlined, GPT has the ability to extract useful or relevant information from bodies of text. However, synthesising multiple textual sources takes this to another level entirely, involving recognising patterns, identifying common themes and generating insights about the relationships between bodies of text. This enables a scholar to input disparate documents and find commonalities or discrepancies between the arguments they contain. GPT can also be given, for example, three arguments from the same academic viewpoint, extract the major points from each and organise the combined argument coherently. This enables a scholar to sift through multiple sources, for example while conducting a literature review, and quite quickly develop an understanding of the breadth and depth of research that exists on a topic. Furthermore, this ‘mode’ can allow a scholar to input a number of different drafts of the same piece of work and analyse the patterns, trends and disparities between them. This can help in making decisions on choice of phrasing and inclusion of points or sources. This ability to synthesise information means that a scholar is able to look beyond what is immediately obvious in terms of trends and commonalities and, in simple terms, offers a second ‘pair of eyes’ when analysing information.
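A synthesis request differs from the earlier modes mainly in that several labelled sources must be packed into one prompt so GPT can cite them when comparing. The sketch below is again an illustrative assumption about how that packaging might look, not an established interface:

```python
def build_synthesis_prompt(sources: dict[str, str]) -> str:
    """Combine several labelled source texts into one synthesis request."""
    # Label each source so the response can attribute themes to documents.
    blocks = "\n\n".join(
        f"Source [{label}]:\n{text}" for label, text in sources.items()
    )
    return (
        "Identify common themes, points of agreement and points of "
        "disagreement across the sources below, citing each by its label.\n\n"
        + blocks
    )

# Example: two short sources with opposing viewpoints.
prompt = build_synthesis_prompt({
    "A": "AI assistants will augment the work of scholars.",
    "B": "AI tools risk eroding academic integrity.",
})
```

The same structure works for comparing drafts of one piece of work: each draft simply becomes a labelled source.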

Synthesising Workflow (Ben Smith, 2023)

The workflow of the academic clearly has the ability to be enhanced by the development of these new tools. GPT summarises these major enhancements below.

Screenshot from chat.openai.com
The Future of Academia

As outlined, GPT has the ability to play a role in all areas of the academic workflow. However, limitations do exist. The system is, in early 2023, extremely new and represents the first of these tools to be put in the hands of the general public. Sam Altman (2023), CEO of OpenAI, admits that the company’s approach in releasing GPT was to do so “while (it is) still somewhat broken”.

While GPT exhibits breakthrough technological advancement, it is not 100% accurate and can often ‘hallucinate’ answers that are simply incorrect. This can even be framed as a positive, as it forces the user to apply human intuition and independent fact-checking in order to use the tool effectively.

As GPT matures, a number of expanded features are needed. Firstly, a character limit exists: Abidi (2023) states that an input limit of 4,096 characters and an output limit of 2,048 characters currently apply. Raising this limit would inevitably increase the workload the system could take on. Secondly, GPT’s training data ends in 2021; updating this would again expand its capabilities. Furthermore, the system could be given access to institutional data from universities or publishers, or allowed access to the internet. Finally, integrating the system with a computer vision model that can analyse visual data would complete its ability to engage in all four skills that make up knowledge work (i.e. looking).
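The input cap means long documents must currently be processed in pieces. A simple sketch of one way a scholar’s tooling might chunk a document before submitting it; the helper is hypothetical, with the default limit taken from Abidi’s (2023) figure:

```python
def chunk_text(text: str, limit: int = 4096) -> list[str]:
    """Split text into pieces of at most `limit` characters,
    breaking on paragraph boundaries where possible."""
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        candidate = (current + "\n\n" + para) if current else para
        if len(candidate) <= limit:
            current = candidate  # paragraph fits in the current chunk
        else:
            if current:
                chunks.append(current)
            # A single oversized paragraph is split hard at the limit.
            while len(para) > limit:
                chunks.append(para[:limit])
                para = para[limit:]
            current = para
    if current:
        chunks.append(current)
    return chunks

# Example: a short paragraph followed by one far over the limit.
pieces = chunk_text("para one\n\n" + "x" * 5000, limit=4096)
```

Each piece could then be processed separately and the partial results synthesised, though removing the limit altogether would make such workarounds unnecessary.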

Extrapolating from the role of AI in the academic workflow outlined in this paper, we can make educated assumptions about the nature of academic research in the future. There will always be a place for the design of AI tools; however, it seems that, for a large proportion of sectors, this will become secondary to the ‘prompting’ of AI systems to perform tasks. An analogy comes to mind: every builder, instead of measuring, transporting or operating machinery, becomes an architect, prompting AI systems that carry out the ‘labour’ to the architect’s vision. In academia, research in this scenario becomes a matter of ‘pointing’ AI systems at an issue or topic, defining the processes to be used and the variables to be included or excluded, and pressing *RUN*.

As this future speeds towards us, academic institutions must keep pace with this rapid advancement. These tools must be embraced and integrated into the work of the scholar. The alternative is tantamount to refusing to accept a free tractor because your horse can plough the field just fine.


Abidi, Y. (2023) “ChatGPT character limit explained,” Candid Technology, 9 February. Available at: https://candid.technology/chatgpt-character-limit/ (Accessed: February 19, 2023). 

Altman, S. (2023) Twitter, 19 February. Available at: https://twitter.com/sama/status/1627110893388693504/ (Accessed: February 19, 2023).

Barton, G. (2023) “The rise of ChatGPT should liberate education, not scare it,” TES, 13 January. 

Chace, C. (2018) Artificial Intelligence and the Two Singularities. 1st edn. Boca Raton: CRC Press. 

Frankish, K. and Ramsey, W.M. (2018) The Cambridge Handbook of Artificial Intelligence. 1st edn. Cambridge: Cambridge University Press. 

Gefen, A., Saint-Raymond, L. and Venturini (2021) “AI for Digital Humanities and Computational Social Sciences,” in B. Braunschweig and M. Ghallab (eds) Reflections on Artificial Intelligence for Humanity. Springer International Publishing, pp. 191–202. 

Kelly, K. (2014) “The Three Breakthroughs That Have Finally Unleashed AI on the World,” Wired, 27 October. 

Linn, A. (2018) “Microsoft creates AI that can read a document and answer questions about it as well as a person,” Microsoft Blog, 16 January. Available at: https://blogs.microsoft.com/ai/microsoft-creates-ai-can-read-document-answer-questions-well-person/ (Accessed: February 22, 2023). 

Roose, K. (2023) “The Brilliance and Weirdness of ChatGPT,” The New York Times, 18 January. 

Russell, S. (2020) Human compatible: Artificial Intelligence and the Problem of Control. New York (N.Y.): Penguin books. 

White, M.J. (2022) “Top 10 Most Insane Things ChatGPT Has Done This Week,” Springboard, 9 December. Available at: https://www.springboard.com/blog/news/chatgpt-revolution/ (Accessed: February 21, 2023). 

Ben Smith