I wrote my last post when I was about a third of the way through Unit 1 in CS101, and before the homework for Unit 1 was posted (which happened last Thursday morning). Now that I’ve finished Unit 1, both content and homework, I thought it was a good time to take stock & post again.
First, I have an issue. The instructor, David Evans, made his videos using some technology I’m not familiar with, but it’s clearly some kind of smartboard- or tablet-and-smartpen combination. The way it appears in the video is like this: he writes on a white surface using this groovy pen, and what he writes shows up in various colors, just like on a smartboard. The weird part is that what he writes seems to be floating above the pen and his hand, rather than appearing below it. That is, not above in the Y axis on the plane of the screen, but hovering above on the Z axis, so that the writing appears to be closer to the viewer’s eye than the pen. It’s kind of distracting. Actually I’ve more or less gotten used to it, and I can tune it out now, but it was very distracting for the first few videos.
This of course is Rule #1 for the use of technology in teaching: Never let the technology overshadow the actual pedagogical purpose. And I wouldn’t go so far as to say that this tech overshadows the content. But it is distracting.
And this raises a question for me, for the videos I make for my courses: handwriting or PowerPoint? Evans has so far made all the videos for CS101 using his own handwriting. (Except the bits in the Python interpreter, of course.) The videos that I’ve seen from Sebastian Thrun’s AI course last semester were all done with handwriting. Khan Academy videos are all Sal Khan’s handwriting. (At least I assume it’s his handwriting.) Obviously there’s a trend here: making educational videos using handwriting. And I can see the advantage of that: it makes it feel personal, like the instructor is writing just for you, like you went to David Evans’ office hours and he’s jotting on his whiteboard while you talk. I get that.
But here’s my problem: my handwriting sucks, and these videos are edited heavily. Why edited? To fast-forward in time, so you don’t have to watch Evans form every single letter. Which brings me to PowerPoint. I made a few videos for my Digital Library course using PowerPoint: I wrote a script and created a slide deck to accompany it, more or less in tandem (the script usually slightly preceded the slides), then I used PowerPoint’s Record Slide Show feature to record the timing of the slides and my narration, and exported that to YouTube. It was super-easy. PowerPoint is definitely the low-bar way of creating videos. And for getting time-consuming stuff done, I tend to prefer low-bar. Good enough is good enough. But I do fear that Calibri is a poor alternative to handwriting. Does using a font make the video feel less personal? Or can the voiceover compensate for a lack of handwriting? If anyone is reading this, I’d welcome your feedback on this weighty issue.
My second issue is that some of the videos feel slightly pedantic, especially the quiz reviews. Evans proceeds through each quiz option in somewhat excruciating detail. Which is, of course, better than the opposite. And I understand why he does it this way: these materials are being prepared for 70,000 (or more) students. As in any course, the instructor has to make things as clear as possible, which often means going slower than more advanced students would prefer. I figure I’ll probably mind this less as the content gets more advanced, and I stop being an advanced student. And, I have to think, this is probably what half the students in my courses feel like a good bit of the time. I need to be careful of that in my classroom teaching. But for making videos, I think Evans makes the right call: better to err on the side of being slightly pedantic than to lose your audience within the span of a 3-minute video.
That’s one gripe and one sort-of gripe. Now the good stuff: I’m finding the course so compelling that I want to work on it all the time. I had a hard time this week stopping once I’d started. I even found myself wanting to work on the course during the day while I was at work, and the one time I actually succumbed to that temptation, it took a student walking into my office to make me veer off.
The downside of the course being so very compelling is that I found myself rushing through it: I would watch video after video, and spend some significant time working on the quizzes and Python exercises. On the one hand, this is good, because, well, education should be compelling. But on the other hand, by Wednesday I found myself concerned that I’d finish Unit 1 too quickly and lose the thread before Unit 2 was posted. In the end, that didn’t happen, as I should have known it would not: the homework was posted on Thursday, and that took me several hours, plus the simple fact of having a real job and a family slowed me down sufficiently.
But this did make me realize that this is no different from “traditional” classroom-based courses, where there are sometimes several days between class sessions. I teach a Monday/Wednesday course this semester, so my students have a 4-day gap between class sessions. Hopefully they’re doing work for the course during those 4 days, but I don’t really have a way to force that to happen. In an asynchronous course like CS101, there’s no way to force it either. I’m a big believer in project-based courses (both of the courses I’m teaching this semester are based around semester-long projects), so I have to assume that my students (most of them, anyway) have their heads in the game during non-class days, otherwise they’d never get their project deliverables finished by the due dates. But CS101 has made me appreciate the value of homework and other small self-assessments, which I tend not to use much in my courses. Something to reconsider for next semester.
And on the subject of self-assessments… the quizzes and Python exercises. These are automatically evaluated. It’s not clear what this looks like on the back end, though I imagine they’re fairly simple algorithms. It seems like it would be quite easy to automate evaluation of a multiple choice quiz. As for the Python, if the value of such-and-such variable (and Evans tells us what to name the important variable) equals the correct value, then the exercise is evaluated as correct. It’s not clear to me, at this stage, if the code that gets to that point is evaluated.
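Out of curiosity, here’s a minimal sketch of how that kind of check might work in Python. To be clear, Udacity’s actual grader isn’t public, so the function, the variable name (“result”), and the expected value are all invented for illustration:

```python
# Minimal sketch of a variable-value autograder. The real grader's
# internals aren't public; "result" and 42 are made-up examples.

def grade_exercise(student_code, var_name, expected):
    """Run the student's code and check one named variable."""
    namespace = {}
    try:
        exec(student_code, namespace)  # run the submission in a fresh namespace
    except Exception:
        return False  # code that raises an error can't be correct
    # Correct iff the variable exists and holds the expected value
    return namespace.get(var_name) == expected

# A submission that computes the right value in a roundabout way
# still passes -- only the final value is checked:
submission = "result = sum(range(5)) + 32"
print(grade_exercise(submission, "result", 42))  # True
```

Note that a grader like this only inspects the final value, not the code that produced it, which mirrors my uncertainty above about whether the code itself is evaluated.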
But my point is, it’s difficult to imagine assignments in Information Science that could be automatically evaluated. I know of some instructors in this field who use multiple choice exams in their courses, though I’m not one of them, and in fact I have a hard time even imagining what a good multiple choice question would look like in the courses I teach. Though maybe that’s a failure of imagination on my part. I can think of one or two assignments for my courses that could be automatically evaluated, and in fact I plan to set one up for the next time I teach my Digital Libraries course. (Assignment: Set up an OAI-compliant metadata repository. I’d have to create a harvester. If the harvester successfully harvests the student’s metadata records, then the student has successfully completed the assignment.) I’ll probably think of more as time goes on. But I have a hard time imagining that I’ll ever come up with enough such assignments to cover a whole course.
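For what it’s worth, here’s a rough sketch of what that harvester check might look like. The student repository URL is hypothetical, and this is just the skeleton of the idea; a real OAI-PMH harvester would also handle resumption tokens, error responses, metadata validation, and so on:

```python
# Sketch of an auto-grading harvester for the repository assignment.
# The student URL below is hypothetical; a real harvester would also
# handle resumptionTokens, OAI error responses, validation, etc.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def count_records(xml_bytes):
    """Count <record> elements in an OAI-PMH ListRecords response."""
    root = ET.fromstring(xml_bytes)
    return len(root.findall(f".//{OAI_NS}record"))

def harvest_check(base_url):
    """Pass iff the student's repository returns at least one record."""
    query = urllib.parse.urlencode(
        {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    )
    with urllib.request.urlopen(f"{base_url}?{query}") as resp:
        return count_records(resp.read()) > 0

# Hypothetical student repository endpoint:
# harvest_check("http://student.example.edu/oai")
```

The appeal of this assignment is exactly that the check is binary and mechanical: either the harvester gets records back or it doesn’t.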
A lack of assignments that can be automatically evaluated means that my courses (any courses in ILS? any courses in the social sciences?) cannot scale to 70,000 or 160,000 students, or whatever. Without the ability to automate evaluation of all, and I mean all, assignments, that scalability is just impossible. Because no automation = a human grading 70,000 assignments. And by “a human” I mean “me.” Now, I don’t expect that 70,000 students are suddenly going to rush out and take my Digital Libraries course online. (I should be so lucky.) But if I make my videos available, & I make whatever assessments I create available, there’s no reason why a student not enrolled in SILS shouldn’t be able to use them. And am I going to evaluate that student’s performance? No, I am not. And so I feel that my teaching — and maybe the entire field of ILS, maybe the social sciences generally — hits a wall fairly quickly, in terms of the scalability of online courses. Some courses can probably be automated better than others. But fundamentally, there will probably always be some courses for which evaluation cannot be fully automated (or automated at all?). And so I feel a bit stuck. Again, maybe this is a failure of imagination on my part. If anyone is reading this, I’d like to hear your thoughts on this.