ChatGPT isn’t a buzzword. It’s much more than a shiny new toy having its hype phase. It is a disruption. And if there is one thing conventional education systems, particularly higher education, like the least, it is disruption.
Why? Because disruption breaks conventions. It finds its way past blanket bans and roadblocks. It demands a rethinking and realignment of the system’s focus and its mechanics. ChatGPT (the GPT stands for Generative Pre-trained Transformer, a form of Generative Artificial Intelligence, or AI) is doing exactly that and more. So, how do education systems respond to disruption? They evolve.
Two announcements on AI in education were made last week. At home, the Higher Education Commission of Pakistan (HEC) announced the formation of a committee to devise an AI policy framework for the country’s higher education institutions to combat the misuse of Generative AI. The framework is meant to provide guidance regarding the “ethical and legal consequences of widespread adoption of ChatGPT and other Generative Artificial Intelligence tools.”
And away from home, the Russell Group (an association of the UK’s top twenty-four public research universities) has published a set of guiding principles for universities to “promote AI Literacy” among students and staff. The objective is to help universities “capitalize on the opportunities technological breakthroughs provide for teaching and learning,” and to ensure the equitable and ethical deployment of AI in education.
The contrast couldn’t be starker: one body is moving to police misuse, the other to build capability. Yet higher education stakeholders in both cases may end up adopting the same strategy, because Digital and AI Literacies are arguably the most robust guardrail against disruptive technologies.
AI is a hard reality not only for the education system but for society at large. The widespread use of ChatGPT and similar applications, the integration of Generative AI models with conventional technologies, and their rapid public adoption offer transformative opportunities. However, they entail many risks too. The concerns educators are voicing (plagiarism, misinformation, shallow thinking) have consequences beyond the academic sphere.
With Generative AI finding its foothold in the public space, we are roughly where we were at the turn of the century, when AltaVista gave the general public the ability to search the internet’s information indexes in 1995, or when Wikipedia made encyclopedias a literal public good in 2001. Plagiarism-checking tools and citation guidelines saved the college term paper then.
However, unbounded access to information and connectivity had far-reaching economic, social, and political consequences. And to take them into account, education systems started paying attention to digital skills and literacy.
The risks of misinformation or cognitive bias may be extensions of those associated with the internet in general, but the challenge now is far greater. To understand these challenges, and why AI Literacy, which builds on Digital Literacies, is the most reliable solution, we first need to understand what artificial intelligence is and how it works. That understanding is itself a foundation of AI Literacy.
Let’s keep things simple and consider ChatGPT’s use in the educational context only. What are educators’ common views on its output and usage? The output may be grammatically correct, read naturally, even sound logical, yet it may rest on ‘made-up’ information and non-existent sources, and it may be biased. Some technologists like to say that ChatGPT, or Generative AI in general, ‘hallucinates’. Why? Because of how it generates output.
As the name suggests, Generative Artificial Intelligence can produce new content in multiple forms: textual, audiovisual, and so on. It uses machine-learning techniques, typically deep neural networks, to trace trends, patterns, and relationships in existing data and create new human-like content in response to an input prompt. To put it simply, it generates realistic content in response to a query, in natural language, by analyzing patterns in a related dataset.
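A toy example makes that mechanism concrete. The Python sketch below builds a bigram model, which is vastly simpler than the transformer networks behind ChatGPT (the corpus, word pairs, and generate function here are illustrative inventions), but it shows the same core idea: the model never looks facts up; it only continues patterns it has seen in its training text.

```python
import random
from collections import defaultdict

# Toy bigram "language model": learn which word tends to follow which,
# then generate new text by sampling from those learned patterns.
# NOTE: this is an illustrative analogy, far simpler than ChatGPT's
# transformer networks; the corpus below is invented for the example.

corpus = (
    "generative ai models learn patterns in data and "
    "generative ai models produce new text from those patterns"
).split()

# For every word, record which words followed it in the training data.
followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start, length=8):
    """Generate text by repeatedly sampling a learned next word."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # no learned continuation: the model simply stops
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("generative"))
# e.g. "generative ai models produce new text from those patterns"
```

Even at this toy scale, the failure mode described next is visible: the output is fluent because it follows learned word patterns, but nothing anchors it to facts, so a small or skewed training text yields confident-sounding text with no basis in reality.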
That’s how ChatGPT can instantly write an essay on a given subject that would take a human hours of sifting through accessible information, synthesizing, and writing up. However, when the data it learned from is biased or limited, it hallucinates. It simply ‘makes up’ content that lacks a valid information base, because the model has no real-world understanding.
Sometimes the output can be totally nonsensical. It may carry a bias for or against an idea, a perspective, or a group because of the biases inherent in the data the model was trained on. Notably, Generative AI models are generally trained on North American datasets, so their output tends to reflect the same views, ideas, and trends.
Research suggests that users prefer ChatGPT to a traditional search engine like Google because the former provides more ‘fun’ responses, and that most users simply rely on whatever it says instead of evaluating and critically analyzing the responses: the problem of shallow thinking. That is why AI Literacy puts up the guardrail. It forces users to think things through. It helps them understand the limitations and biases that AI carries by design, to evaluate what it can and cannot do, and to decide how they should interact with it.
In short, for AI in Education to work and for Education for AI to be efficient, we need AI Education: Digital and AI Literacies. Because AI is not just a technical but a societal phenomenon, we need more than just AI experts. We need AI-literate citizens capable of leveraging these disruptive technologies efficiently and ethically for all the good they can bring. Digital and AI Literacies are fundamental to the human intelligence required to navigate this artificial space.
Zahra Mughis, “AI in education: need for digital literacy,” The News, 2023-07-31.