Two articles in the online edition of The Australian's Higher Education supplement caught my attention today.
The first reports on comments from academics teaching in the business disciplines in Australian universities (along with others who have an opinion about TEQSA and the ALTC disciplines setting standards project). I won't repeat the content of the article – you can read it for yourself.
A number of issues are highlighted by the article: the challenges ahead for TEQSA, the tussle between RTOs and the higher education sector (particularly universities and especially the Group of Eight), the question of an agreed definition of quality and how this might be measured, and the seeming disconnect between the implementation of recommendations from the Bradley and Cutler Reports.
Universities don't come out of this one looking good. The RTOs and the VET sector seem very willing to demonstrate that they have carefully aligned curricula with assessment strategies that test how well students have achieved clearly articulated learning outcomes. Universities, and in particular the Group of Eight universities, seem inclined towards the view that no one has any right to ask them to demonstrate how they measure academic achievement. If TEQSA is funded appropriately, and universities are required to participate in its compliance checks, many academic staff will be running to catch up with their colleagues teaching in other parts of the sector. In some ways, that is a bit unfair. Australian university lecturers, by and large, don't know much about teaching. They know about giving lectures, about running tutorial-based discussion groups, and about assessing how well students have mastered the knowledge of the discipline, but most of them have no theoretical foundation to their teaching practice. Not their fault – they were hired because they know lots about their disciplines.
Maybe we should start by making it compulsory for all universities to enrol their teaching academics in proper teacher education programs – except, of course, those who already have a teaching qualification. At least then they would have the knowledge necessary to engage in a conversation about curriculum design and teaching strategies other than those invented in medieval Europe. We would, of course, have to acknowledge that this would take time away from research and current teaching activities – so we'll need to hire more academics to help with workloads, especially if we want the country to maintain research outputs and we intend to send more Australians to university. (And before you say anything, let me tell you that it's no good suggesting that Australian academics work harder or longer hours ... most of the good ones already put in 50-, 60-, or 70-hour weeks.)
Oh, yes, I suppose we could create two categories of university academic – the research academic and the teaching only academic. This, in my view, is a dreadful idea. It's the research that gives depth to teaching in universities. Many of the very best of our teachers are outstanding researchers as well.
So it's all a bit tricky.
The second article reports on moves by professions to require international students to pass English language tests upon graduation. Here's the thing: most Australian universities will enrol international students for whom English is a second (third, fourth) language if they achieve 6.5 on the IELTS test – somewhere between competent and good. This score does not guarantee that students are fluent in the language. That puts lecturers in a difficult position. It would be wrong to penalise students for poor English language skills unless fluency is a designated learning outcome for their chosen program of study – and because language skills are rarely articulated among program learning outcomes, lecturers don't penalise them. That means that students who learn the declarative knowledge (foundational knowledge; knowing what, or knowing about stuff in the discipline) and functioning knowledge (skills; knowing how to apply declarative knowledge to complete discipline-specific tasks) required by their course, and who have sufficiently good English language skills to convey their understanding to the lecturer, pass the course. That gives us graduates – in accounting or nursing, for example – who can't get jobs because employers expect a higher standard of English language skills.
That's all a bit tricky too.
Maybe the new government will have the answers.
17 August 2010
01 August 2010
Dependability and reliability: are they important?
This post originally started out as a slightly bad-tempered comment on the lack of reliability in the Learning Management System (LMS) used by my current employer. The detail of the current problem is indicative of the larger issue.
In my first draft, I wrote: "Here's the thing about educational technologies. They must be robust and reliable."
Is this true, though? Some parts of the system have to be utterly reliable – and making sure that enrolled students have dependable access to the online environment is one aspect that must be taken seriously. There are some other tools that need to be equally trustworthy. My list, based on current online teaching practice in this university, would include the tools that manage content (documents, video clips, audio clips, and images, for example), the discussion tool, the gradebook, and the assignment submission tool. Other tools are less widely used, or support activity that isn't essential to course completion, or are being used experimentally. It may be acceptable to provide less reliable tools in these categories, although I'm not convinced that this is true.
If an academic is toying with curriculum innovation, is it good enough for him (or her) to be using a technically unpredictable tool for that innovation? I argue that when the institution provides an inadequate tool, it puts curriculum innovation at risk. If I'm using a faulty tool to implement a teaching innovation that doesn't work, do I blame the design of the innovation or the tool I used? What if I can't tell which is to blame? The difficulty is that the development of reliable online teaching tools is expensive, and too often, we don't get past the "proof-of-concept" stage.
In order to move curriculum innovation beyond experimentation, the institution must provide the right instruments – highly robust and utterly reliable applications. If we are to encourage curriculum innovation, we have to place before our teaching academics an array of tried and tested tools that gives them options for variety in the way they design their teaching innovations. Exploration of new tools should not need to include any time working out how to ensure that they function properly. If I want someone to find the joy in working with wood, providing them with a faulty hammer or a blunt chisel isn't a good idea. If I want them to be creative and construct amazing wooden things, I need to make sure they have the best tools available.
The range of tools provided should, ideally, exceed the range used by any one academic and certainly surpass the assortment used by the majority. All of them must work properly, almost all of the time.
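What does "almost all of the time" mean in practice? Here's a back-of-the-envelope sketch – the uptime figures are purely illustrative, not anyone's actual service targets:

    # Rough arithmetic only: what different uptime levels allow, in hours
    # of downtime per year. None of these figures is an institutional target.
    HOURS_PER_YEAR = 365 * 24  # 8760

    for uptime in (0.99, 0.999, 0.9999):
        downtime = HOURS_PER_YEAR * (1 - uptime)
        print(f"{uptime:.2%} uptime -> about {downtime:.1f} hours down per year")

Even "99% reliable" permits more than three and a half days of outage a year – and outages have a way of coinciding with assignment deadlines.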
Imagine my frustration, then, when even basic system functionality is flawed. My current university has chosen Moodle as its LMS (a replacement for WebCT, which was turned off a couple of months ago). In moving to Moodle, those maintaining the backend, including the data feed from the Student Records System (SRS), have discovered that they need to rebuild the mechanisms that populate the class lists. With me so far? Describing in detail the systems that ensure that currently enrolled students have access to the relevant Moodle sites is beyond my technical knowledge – and interest, to be frank.
I know that there is a difference between true integration of the SRS with the LMS and the LDAP data transfer currently in place, but that's about the extent of my knowledge. However, it is clear even to me that students are dropping into some kind of chasm between the SRS and the LMS. Who are these students? How many are there? How do we retrieve them?
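Those first two questions, at least, can be answered without understanding the plumbing. A minimal sketch, assuming each system can export its enrolments as a CSV file – the file names and the student_id column are hypothetical, not the real feeds:

    # Minimal sketch: diff an SRS enrolment export against a Moodle export
    # to find the students who have fallen into the chasm. File names and
    # the 'student_id' column are assumptions for illustration.
    import csv

    def enrolled_ids(path):
        with open(path, newline="") as f:
            return {row["student_id"] for row in csv.DictReader(f)}

    srs = enrolled_ids("srs_enrolments.csv")        # enrolled according to the SRS
    moodle = enrolled_ids("moodle_enrolments.csv")  # visible to Moodle

    missing = srs - moodle
    print(f"{len(missing)} enrolled students have no Moodle access")
    for student_id in sorted(missing):
        print(student_id)

A report like that, run daily during the migration, would at least tell us who is stuck in the chasm and how big the chasm is.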
Most importantly, how many staff and students will decide not to devote time to online teaching and learning because their initial experience is of faulty or badly implemented technology? Once they move away from online learning, how do we get them back? What needs to be in place to make sure we don't lose them in the first place?
On the other hand, there are the real risk takers – those who are happy to experiment with newly developed and not fully formed tools. This group would be frustrated if they were restricted only to the tried and tested. It seems, then, that we need an online learning system that provides for three clearly labelled sets of tools (one way of writing the labels down is sketched after the list):
1. the essentials: The tools in this category will depend on the way the institution uses the online learning environment, e.g. to deliver distance courses or to support a blended model of teaching. The tools in this category must be the most reliable, with a very, very low failure rate.
2. the exploratory: Reliability for this set of tools is slightly less important than for those in the first category, but is still pretty high.
3. the experimental: Control of this set of tools should sit with the technology innovators and risk-takers on staff, and students required to use them should be warned to expect problems.
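To make the labelling concrete, here is one way it might be written down – a hypothetical configuration in which the essential tools come from my list earlier in this post, while the other tool names and the uptime targets are invented for illustration:

    # Hypothetical tiering of LMS tools. The essential tools are the ones
    # listed earlier in this post; everything else here is invented.
    TOOL_TIERS = {
        "essentials": {
            "target_uptime": 0.999,  # must almost never fail in teaching periods
            "tools": ["content", "discussion", "gradebook", "assignment_submission"],
        },
        "exploratory": {
            "target_uptime": 0.99,   # still dependable; the occasional hiccup is tolerable
            "tools": ["wiki", "peer_review"],
        },
        "experimental": {
            "target_uptime": None,   # no promise at all; users opt in, eyes open
            "tools": ["virtual_world_plugin"],
        },
    }

The value of writing it down is that the label travels with the tool: administrators, academics, and students all know what the institution is promising about each one.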
That way, the people who sit in the bulge – the pragmatists and the conservatives – don't have to spend too much time thinking about the technology and are able to focus on ways to use it. The enthusiasts can play with the experimental, and the visionaries can show us the way forward with the exploratory – and we all know what to expect from the system and the tools in play.
Innovation is risky. An institution that manages innovation well will also be managing expectations and perceptions – and putting enough money into ICT to ensure that at least the essentials are utterly reliable.
Once that happens, I, for one, will be slightly less grumpy.