FYI: The full article is available at Heise Online, and the director’s cut version in German is available on my blog.

Harry Potter probably learned without AI. Nevertheless, he was drawn here by an AI. Wicked!

For most students, a kind of state of emergency breaks out twice a year: the exam phase. Six months of learning are condensed into two weeks packed with assessments, cramming as much as possible at the last minute and putting it down on paper shortly afterwards. Omnipresent artificial intelligence is increasingly influencing how and what is learned, and above all, how knowledge is tested.

ChatGPT and co. have long since arrived at the heart of university life and are shaking up the system: generative AIs not only help us in technical subjects with research and with summarizing studies, but also optimize the structure of CAD models or create images of different types of defects in production facilities.

The technology and its countless tools offer a wide range of possibilities: juggling knowledge and information, trying out new approaches to old challenges, and putting acquired knowledge to use more efficiently.

However, many universities and lecturers remain in a kind of state of shock and reflexively narrow the discussion down to one striking problem: the potential for unfair advantage in (online) examinations, and especially in term papers and theses. The supposed solutions are not very original and merely attempt to prevent what can no longer be prevented: that once-demanding Bachelor’s and Master’s theses are now quickly written by, or with the help of, generative AI.

And that is a terrible shame! Such measures, which aim to preserve the status quo for as long as possible and confine us students to the good old internet and traditional search engines, waste energy and valuable potential. What we students need instead is a university that encourages and supports us in exploring AI for our own purposes. And we need teachers who use their expertise to casually remind us, again and again, of the limits of the technology and its fallibility.

Help for self-help

Regardless of whether lecturers and professors actively plan for tools such as chatbots in their courses, prohibit their use, or still push the same slides across the overhead projector as they did twelve years ago: students will use artificial intelligence one way or another to extend, facilitate and vary their tasks. And this is happening, at least in my experience, across the entire student body: regardless of native language and prior technical experience, the possibilities and the means of access have arrived for everyone, in every subject.

It becomes more difficult to assess whether using a chatbot actually leads to helpful results in a given subject. While ChatGPT confidently explains the principle of emission spectroscopy in experimental physics in several languages, it finds itself on very thin ice when faced with arithmetic problems in higher mathematics. Too thin to answer questions correctly; too thin to serve as the sole learning method and source of knowledge for exams. And while computer algebra systems and simulation programs reliably produce the same output for a given input, chatbots are not exactly known for giving the most consistent answers.
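To make that contrast concrete, here is a minimal sketch in Python: the deterministic half uses the real sympy computer algebra library, while the “chatbot” half is only a toy stand-in with invented completions and probabilities, not any actual model.

```python
import random

import sympy as sp

# Deterministic: a computer algebra system returns the identical
# result for the identical input, run after run.
x = sp.symbols("x")
for _ in range(3):
    print(sp.integrate(x * sp.exp(x), x))  # (x - 1)*exp(x), every time

# Stochastic: a chatbot samples its answer from a probability
# distribution. Toy stand-in with made-up completions and weights:
completions = [
    "The integral is (x - 1)*e^x + C.",  # correct
    "The integral is x*e^x + C.",        # plausible-sounding, wrong
]
for _ in range(3):
    print(random.choices(completions, weights=[0.8, 0.2])[0])
```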

The fact that ChatGPT vehemently answers the question of whether 9.11 or 9.9 is greater with “Of course, 9.11 is greater than 9.9!” not only reveals that language models lack an elementary understanding of numbers (a gap that statistical skill cannot paper over). Worse still, the model constructs a chain of reasoning that dresses the falsehood up with context and thus makes it all the more credible.
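The arithmetic itself is trivial for any deterministic tool; a two-line Python sanity check makes the point:

```python
# 9.11 is 9.110, which is smaller than 9.900.
print(9.11 > 9.9)      # False
print(max(9.11, 9.9))  # 9.9
```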

Universities would actually be the perfect place to examine the new technology critically and in all its facets. Instead, they seem frozen at the moment: either they assume that students don’t use ChatGPT and its ilk at all, or that students have such a good grasp of AI that they need no help in assessing the results. Both assumptions are wrong: many students take what AIs generate at face value, as the Bavarian Research Institute for Digital Transformation showed in an analysis.

Room for experiments

But what will certainly not help us now is a new dedicated subject on dealing with AI, or an “AI driver’s license” that teaches us how to prompt. Far more helpful would be concrete examples within the lectures of each subject area that vividly demonstrate how AI can be used in a targeted manner. AI must be taught in context and in passing. Repeated reminders such as “Careful, a chatbot can’t draw a correct free-body diagram” would immediately help many people.

Critics might now object that students at the highest level of the German education system should be able to assess such risks themselves. But that kind of judgment can only develop if space is opened up for discussion, away from headlines and short videos in which TTS voices promise the “killer prompt for PERFECT stoichiometric calculations” and the moon, all in 15 seconds. And above all, away from the threatening backdrop that some teachers create.

Threats from some lecturers, such as “Anyone who uses ChatGPT in this course will be kicked out”, or demotivating remarks such as “You’ll be unemployed soon anyway thanks to AI, so why bother at all?”, are simply counterproductive. As Prof. Jörn Loviscach rightly pointed out, “technology and materials” are not enough for successful learning. It also takes space for exchange and opportunities for ungraded practice, which is exactly what tutorials at every technical university have provided for decades: clarifying open questions and preparing for exams.

What still counts as performance?

Some universities are already adapting to the new wave of technical possibilities. Particularly in the humanities, which traditionally center on text production, the first universities have already abolished term papers and bachelor’s theses. It is only a matter of time before technical degree courses are forced to follow suit. Autodesk recently integrated an AI assistant directly into its own CAD software, and with text-to-CAD there are now even approaches that decouple entire designs from manual work. So we have to ask ourselves: how can work samples still be assessed fairly in the future? Be it the supporting structure of future civil engineers, the crankshaft of mechanical engineers or the electronic speed controller of electrical engineers.

Probably the simplest answer to this problem is “more exams”, and at the moment much suggests that universities are taking exactly this route. That is very unfortunate, because rarely has there been a better opportunity to interweave old, tried-and-tested formats (written exams, oral exams) with newer ideas (assessed peer teaching, Socratic seminars, group projects with peer review) and deliver real added value for students. It is time to finally play these trump cards, and not to cram a seventh exam into the second semester just because the documentation coursework otherwise spread over half a year has seemingly been rendered pointless by ChatGPT.

Despite all the discussion about technical aspects, the topic of equal opportunities must not be neglected. Free versions of the relevant AI tools may offer a good introduction, but they quickly reach their limits with complex workflows and problems. Universities should therefore themselves provide modern chatbots, AI tools and computing power for working with and training artificial neural networks, just as is already common practice with classic software: we are given access to the latest CAD and EDA suites, licensed for huge sums of money, and computer pools let us use powerful workstations. Only in the world of artificial intelligence do universities seem to want to wait and see what catches on before purchasing licenses for students.

It cannot be that well-off students can buy better chatbots and thus gain an advantage over the less well-off. And since chatbots are used at least as much as the library (if not more!), a portion of our semester fees should go towards access to modern language models.

Neuland, here we come!

Just as we young people greeted the former German Chancellor’s 2013 statement “The internet is new territory for all of us” with a certain smile, we must now actually admit: “(Generative) AI is new territory for all of us.” And before we fall behind, as we did with broadband expansion, it is worth exploring the terrain together. Not to force AI into every subject and topic, but to show how AI can and cannot help us students. We will certainly still need to be able to do trigonometry in the future, but we probably won’t need to write a detailed accompanying text for a technical data sheet ourselves anymore. And if AI can help us understand trigonometry, all the better.