01 August 2010

Dependability and reliability: are they important?

This post started out as a slightly bad-tempered comment on the lack of reliability in the Learning Management System (LMS) used by my current employer. The details of the immediate problem are indicative of a larger issue.

In my first draft, I wrote: "Here's the thing about educational technologies. They must be robust and reliable."

Is this true, though? Some parts of the system have to be utterly reliable – making sure that enrolled students have dependable access to the online environment is one aspect that must be taken seriously. There are some other tools that need to be equally trustworthy. My list, based on current online teaching practice in this university, would include the tools that manage content (documents, video clips, audio clips, and images, for example), the discussion tool, the gradebook, and the assignment submission tool. Other tools are less widely used, or support activity that isn't essential to course completion, or are being used experimentally. It may be acceptable for the tools in these categories to be less reliable, although I'm not convinced that this is true.

If an academic is toying with curriculum innovation, is it good enough for him (or her) to be using a technically unpredictable tool for that innovation? I argue that when the institution provides an inadequate tool, it puts curriculum innovation at risk. If I'm using a faulty tool to implement a teaching innovation that doesn't work, do I blame the design of the innovation or the tool I used? What if I can't tell which is to blame? The difficulty is that the development of reliable online teaching tools is expensive, and too often, we don't get past the "proof-of-concept" stage.

In order to move curriculum innovation beyond experimentation, the institution must provide the right instruments – highly robust and utterly reliable applications. If we are to encourage curriculum innovation, we have to place before our teaching academics an array of tried and tested tools that gives them options for variety in the way they design their teaching innovations. Exploration of new tools should not need to include any time working out how to ensure that they function properly. If I want someone to find the joy in working with wood, providing them with a faulty hammer or a blunt chisel isn't a good idea. If I want them to be creative and construct amazing wooden things, I need to make sure they have the best tools available.

The range of tools provided should, ideally, exceed the range used by any one academic and certainly surpass the assortment used by the majority. All of them must work properly, almost all of the time.

Imagine my frustration, then, when even basic system functionality is flawed. The university where I currently work has chosen Moodle as its LMS (a replacement for WebCT, which was turned off a couple of months ago). In moving to Moodle, those maintaining the backend, including the data feed from the Student Records System (SRS), have discovered that they need to rebuild the mechanisms that populate the class lists. With me so far? Describing in detail the systems that ensure that currently enrolled students have access to the relevant Moodle sites is beyond my technical knowledge – and interest, to be frank.

I know that there is a difference between a true integration between the SRS and the LMS and the LDAP data transfer currently in place, but that's about the extent of my knowledge – and well beyond my sphere of interest. However, it is clear even to me that students are dropping into some kind of chasm between the SRS and the LMS. Who are these students? How many are there? How do we retrieve them?
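Even for someone with my limited interest in the plumbing, the check we seem to be missing is easy to describe. Here is a rough, purely illustrative sketch in Python of a reconciliation that compares an SRS enrolment export against a Moodle class list export and reports anyone who has fallen into the gap. The file names and field names are invented for the example; I have no idea what our actual feed looks like.

```python
# Illustrative only: compare a (hypothetical) SRS enrolment export with a
# (hypothetical) Moodle class list export and report the students who appear
# in one but not the other.

import csv

def load_student_ids(path, id_field):
    """Read a CSV export and return the set of student IDs it contains."""
    with open(path, newline="") as f:
        return {row[id_field].strip()
                for row in csv.DictReader(f)
                if row[id_field].strip()}

srs_ids = load_student_ids("srs_enrolments.csv", "student_id")    # hypothetical SRS export
lms_ids = load_student_ids("moodle_class_list.csv", "idnumber")   # hypothetical Moodle export

missing_from_lms = srs_ids - lms_ids   # enrolled, but invisible in Moodle
unexpected_in_lms = lms_ids - srs_ids  # in Moodle, but not (or no longer) enrolled

print(f"{len(missing_from_lms)} enrolled students have no Moodle access:")
for student_id in sorted(missing_from_lms):
    print(f"  {student_id}")

print(f"{len(unexpected_in_lms)} Moodle users have no matching SRS enrolment.")
```

Even a crude report like this would at least answer "who" and "how many", which is more than we can say at the moment.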

Most importantly, how many staff and students will decide not to devote time to online teaching and learning because their initial experience is of faulty or badly implemented technology? Once they move away from online learning, how do we get them back? What needs to be in place to make sure we don't lose them in the first place?

On the other hand, there are the real risk takers – those who are happy to experiment with newly developed and not fully formed tools. This group would be frustrated if they were restricted only to the tried and tested. It seems, then, that we need an online learning system that provides for three clearly labelled sets of tools.

1. the essentials: The tools in this category will depend on the way the institution uses the online learning environment, e.g. to deliver distance courses or to support a blended model of teaching. Functionality for the tools in this category must be the most reliable, with a very, very low failure rate.

2. the exploratory: Reliability for this set of tools is slightly less important than for those in the first category, but is still pretty high.

3. the experimental: Control of this set of tools should sit with the technology innovators and risk-takers on staff, and students required to use them should be warned to expect problems.


That way, the people who sit in the bulge – the pragmatists and the conservatives – don't have to spend too much time thinking about the technology and are able to focus on ways to use it. The enthusiasts can play with the experimental, and the visionaries can show us the way forward with the exploratory – and we all know what to expect from the system and the tools in play.
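To make the labelling concrete, here is a purely illustrative sketch of how the three tiers and their reliability promises might be written down. The tool names and uptime targets are invented for the example and aren't a claim about any particular LMS.

```python
# Illustrative only: one way an institution might record which tier each tool
# sits in, and what availability it is promising for that tier.

TIER_TARGETS = {
    "essential":    0.999,   # must almost never fail
    "exploratory":  0.99,    # still dependable, with slightly more slack
    "experimental": None,    # no promise; use at your own risk
}

TOOL_TIERS = {
    "content":               "essential",
    "discussion":            "essential",
    "gradebook":             "essential",
    "assignment_submission": "essential",
    "wiki":                  "exploratory",
    "virtual_world_plugin":  "experimental",
}

for tool, tier in sorted(TOOL_TIERS.items()):
    target = TIER_TARGETS[tier]
    label = f"{target:.1%} uptime" if target is not None else "no guarantee"
    print(f"{tool:24} {tier:13} {label}")
```

The exact numbers matter less than the fact that they are written down and visible to everyone using the system.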

Innovation is risky. An institution that manages innovation well will also be managing expectations and perceptions – and putting enough money into ICT to ensure that at least the essentials are utterly reliable.

Once that happens, I, for one, will be slightly less grumpy.
