Synote, video and distance learning

I’ve been a bit quiet on this blog of late, partly because I’ve been devoting my time to two very interesting but concurrent MOOCs. Both are from the University of Southampton and FutureLearn, and both started in the same week. One, Shipwrecks and Submerged Worlds: Maritime Archaeology, was only four weeks long, though, so having completed it, and this week’s work on Web Science: How the Web is Changing the World, I have a little more time to catch up with the blog.

Of course one of the ways in which the web is changing the world is the provision of this sort of education. And for the duration of these courses I keep getting distracted by the learning experience itself. Last time, it was participation on the forums that sparked my interest. This time it’s video. The videos on FutureLearn seem short: three, four, or at most seven minutes long. Contrast this with the ones on the Coursera course I did on statistics, which were 20 to 30 minutes long. Looking at the guidance FutureLearn offers for partners creating course content, the recommendation is no more than ten minutes.

I’d prefer something longer. To be honest, what I really wanted was an audio-only podcast to listen to as I drive for work. My gold standard is In Our Time, the discussion programme hosted by Melvyn Bragg on BBC Radio Four. But that’s by the by. The video content on FutureLearn seems to be the briefest of introductions to concepts, the shallowest of discussions, not a developing and involving narrative (though I don’t recall thinking that with the Portus MOOC, which is interesting).

I guess one of the reasons why they keep the videos short is that they want to get people discussing the subject on the forum quickly. It would be difficult to retain an interesting thought you had during the video if you had to wait 20 minutes for it to end. Then there are the short quizzes, which give participants an opportunity to reflect on what they’ve learned. Coursera had a system where these could be included in the video itself. Indeed, if I recall correctly, you couldn’t continue with the video until you’d had a go at the quiz. FutureLearn treats the quizzes as separate elements, normally towards the end of the week, only occasionally during the week’s content, and always on a separate page. The Coursera system, in a crude way, lets you interact with the video. FutureLearn treats the video as a discrete element.

Don’t get me wrong, I’m not saying I’d prefer longer videos to the text articles that FutureLearn offers. I’m just as happy to learn by reading as by watching. It’s just that I feel the short video format doesn’t use the medium to its full potential. Video has a great ability to compress or expand time, overlay the real with the imaginary, and explore distance, but those abilities need room to breathe.

Last week I was invited to have a look at a technology that might reconcile my desire for longer videos with the didactic need to discuss what we’re watching. Synote is an application developed by Mike Wald at the University of Southampton to make “multimedia resources such as video and audio easier to access, search, manage, and exploit. Learners, teachers and other users can create notes, bookmarks, tags, links, images and text captions synchronised to any part of a recording, such as a lecture.”

Mike and PhD student Yunjia Li showed us a new version of the application, currently in development, with a view to making it usable for MOOC learners as well as others. They showed us how easy it is to play a video through Synote and, while it’s playing, make comments that are timecoded to particular parts of the video; comments can even be attached to particular areas of the screen. Comments can link to other web-based resources, anything with a URI in fact. And as every comment has a URI of its own, you can link from one section of the video to another section of a related video, effectively making your own “mash-up” (although with buffering it won’t be quite as slick as something edited together).
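To make the idea concrete, here is a rough sketch of what a timecoded, addressable comment might look like as a data structure. To be clear, this is my own illustration of the concept, not Synote’s actual schema; every name and URL in it is hypothetical.

```typescript
// A hypothetical shape for a timecoded annotation, sketching the idea rather
// than Synote's real data model. Each comment is anchored to a point (or range)
// in the recording, optionally to a region of the frame, and has a URI of its
// own so that other comments, or other videos, can link to it.
interface VideoAnnotation {
  uri: string;          // the comment's own address, so it can be linked to
  videoUrl: string;     // the recording the comment is attached to
  startSeconds: number; // where in the recording the comment applies
  endSeconds?: number;  // optional: a range rather than a single moment
  region?: { x: number; y: number; width: number; height: number }; // area of the screen, as fractions of the frame
  body: string;         // the comment text, which may itself contain links to other URIs
}

// Example: a comment pinned to a 20-second stretch of a lecture recording,
// highlighting the top-left quarter of the frame. The URLs are placeholders.
const note: VideoAnnotation = {
  uri: "https://example.org/notes/123",
  videoUrl: "https://example.org/lectures/week3.mp4",
  startSeconds: 615,
  endSeconds: 635,
  region: { x: 0, y: 0, width: 0.5, height: 0.5 },
  body: "This diagram is the key point; compare it with https://example.org/notes/98",
};
```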

Adam, a colleague from the University’s Winchester School of Art, was also (virtually) at the meeting, and soon set up a group of his students to help design a better user interface. You can read about their exciting and efficient workshop here.

So as I’ve worked through this week’s content for the Web Science MOOC, I’ve been thinking about Synote and how it might be used. To be honest, the main course content videos seem too short to reward the effort of running them through a different web viewer just to be able to tag your comments to a particular place in the video. And reading the comments, just one commenter (at the time of writing) seems to have felt any need to refer to a particular point in the video. It seems the brevity of the videos might actually contribute to the general nature of the comments.

However, the MOOC has sent us off to the TED website to look at a couple of longer videos there. Often the “See also” links at the end of an article point to videos. These videos are often longer (the TED ones run just under fifteen minutes), and on these videos I think it would be good, from a learning point of view, to be able to tag comments to particular sections. For example, a couple of commenters included links to videos that weren’t part of the “see also” course-related material. They might have preferred to have the ability to point their fellow students to the particularly relevant section of each video. One such video was a TED talk by Daniel Dennett, always a favourite of mine. He quoted a lovely line about five minutes 40 seconds in, about how “‘real magic’ doesn’t exist. Conjuring, the magic that does exist, is not ‘real magic’”. Now I’d like to point you, dear reader, to that moment, but it has taken two lines of text to link you to the video and tell you where to find the bit that I thought was particularly funny. It would have been so much easier if I’d been using Synote.
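As an aside, the web already has a general-purpose way to make that kind of pointer: the W3C Media Fragments syntax lets you append a time offset to a media URI. A minimal sketch, using a placeholder address rather than the real TED URL:

```typescript
// Build a link that jumps to a moment in a video using the W3C Media
// Fragments syntax (#t=seconds). The URL is a placeholder, not the actual
// address of the Dennett talk.
const talkUrl = "https://example.org/talks/dennett.mp4";
const quoteAt = 5 * 60 + 40; // the "real magic" line, about 5:40 in

const linkToQuote = `${talkUrl}#t=${quoteAt}`;
console.log(linkToQuote); // https://example.org/talks/dennett.mp4#t=340
```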

So, imagine a MOOC assignment that said “watch these through Synote and share/mash up the bits that are most relevant to what we’ve been discussing”. Imagine participants setting up a Synote playlist of all the bits of TED talks most relevant to the subject they are discussing. Imagine, in the Daniel Dennett talk above where he asks the audience to spot changes in a series of short videos, participants actually being able to mark exactly where on screen and in which frame they first noticed the change.

All of these are things that Synote is capable of.