A friend of mine, a professor at Concordia Theological Seminary in Fort Wayne, was musing about what artificial intelligence, such as ChatGPT, could do to teaching.
What he was really asking is what it would do to traditional grading of tests and papers.
How can he know if a student actually did the research and wrote the text for an assigned paper? At the root of this question is another: How can he know if his students, those who should be learning essential material, actually learned anything? Can a civil engineering professor pass a student in a course about bridge building (I assume those students take such a class) if AI underlies the student's academic work?
I may never feel safe driving over a bridge again.
My friend sees a possible answer in using the Oxford tutoring method in U.S. colleges. Students are given reading assignments and then discuss the material with their tutor-professor. It is designed to develop a deeper understanding of the subject matter than that accomplished through traditional methods.
Students, as we all learned in those days, knew how to game the academic system with the tried-and-true method of cramming the night before and then immediately forgetting everything once the test was over. Well, it seemed to work in my undergraduate days, but don’t ask me any questions about what I learned in those courses.
What I remember from those days is the late night “philosophical” sessions we preferred to doing serious study. It was the Oxford method without a tutor or any other adult in the room. Looking back at what stuck in my brain, I think I would have benefited from the Oxford method in my classes.
That can’t work here, can it? It certainly sounds more costly than our production line model for schools. Keep the conveyor moving and let’s all hope the final product passes quality assurance inspection at the end. And it certainly requires more structure and discipline in a student’s academic life, something that won’t sit well in our “no homework” brave new world.
I’m being cynical and a tad unfair, but the manufacturing analogy may be closer to the mark than one cares to admit.
One possible negative to the Oxford approach is a de-emphasis on memorization, but that horse has left the public school barn. School curricula have already effectively removed memorization as a teaching method. Ask a third-grader to recite the multiplication tables. So what, you may ask, given that everyone carries around a miniature computer masquerading as a telephone?
Memorization hasn’t been abandoned everywhere, of course. The classical education movement still sees it as foundational to learning. This approach is built on three progressing levels of pedagogy, called the trivium after its medieval antecedent. It structures a student’s academic career around natural learning capabilities, including memorization in the formative years when that is still easy for the youngsters.
There is no doubt that this movement intends to radically reverse the direction of modern education theory and its attendant scorn of our Western cultural heritage.
Classical education has its place, but I am not convinced it is the panacea its evangelists contend. Is there another approach that can work, especially at the college level?
Here’s a thought: What if we combined the Oxford system with the Socratic method? I suspect that does in fact happen over there in Merry Olde England, but perhaps I am romanticizing. I watch too many BBC series.
Meanwhile, we have a more immediate and much more dangerous threat here and now. Where will artificial intelligence take us? And will we have no choice but to be dragged along?
I find it instructive that even some of today’s technology gurus are publicly expressing their concern about where this will go. Can we stop its progress even if we want to? Have we become as impotent as Victor Frankenstein in Mary Shelley’s novel? Will the creature, something that was created, become the master?
Frank Herbert, one of sci-fi's most popular and brilliant authors, dealt with this in his series based on the fictional planet Dune. He set his futuristic universe in a time after what he called the Butlerian Jihad. This jihad was a successful crusade against computers, thinking machines and conscious robots. Mind you, this was written in 1965, long before any personal computing devices were anywhere to be seen except in the fertile minds of visionaries like Herbert.
Why did Herbert see artificial intelligence as a greater threat than nuclear weapons while writing in the midst of the Cold War? His ruling class maintained their “atomics,” but all agreed the AI machines were to be forever banned. Curious, is it not? Or maybe just prescient.
We already are living in George Orwell’s “1984.” Can Frank Herbert’s “Dune” be far behind?
Mark Franke, an adjunct scholar of the Indiana Policy Review and its book reviewer, is formerly an associate vice chancellor at Indiana University-Purdue University Fort Wayne. Send comments to [email protected]