As participants filed into Prof. Felienne Hermans’ workshop on AI & Course Design, they encountered an unusual directive: stow away laptops and mobile phones. Drawing on her extensive teaching experience, Felienne explained that screens often lead to divided attention in the classroom. Instead, she introduced the whimsical concept of the ‘participation panda’, a cue for group collaboration whenever it appeared on screen, joined in her arsenal of attention-grabbing critters by the ‘homework hound’ and ‘exam eagle’.
From this engaging start, Felienne moved through the history of teaching machines and personalized learning, explaining large language models as language and code continuation engines. She was equally frank about their limitations, particularly in creativity, and the profound implications these hold for education.
Turning to practical advice, Felienne shared insights from her own course design work, notably in Python programming, a domain ripe for AI influence. Through anecdotes, she cautioned against overwhelming students with unreasonable expectations, advocating instead for scaffolded learning experiences tailored to diverse skill levels.
The discussion also confronted biases ingrained in both AI and education. Felienne provocatively questioned the prevalent notion that 'good language equals good thinking', exposing the biases inherent in assessment criteria. She further underscored the peril of cultural bias and the racial biases latent in AI algorithms, a stark reminder of the ethical imperative in AI integration.
Throughout the workshop, a central theme emerged: the historical pitfalls of technology in education. Felienne urged educators to tread cautiously, emphasizing that while AI holds promise, it's not a panacea for pedagogical challenges. Instead, it requires a nuanced approach—one that prioritizes collaboration, creativity, and ethical reflection.