Learning Technology jottings at Goldsmiths

Thoughts and deeds

Archive for the ‘assessment’ Category

Abstract for eLearning 2.0 Conference, Brunel University


My keynote abstract for the eLearning 2.0 Conference, Brunel University, 6th-7th July 2011

Higher education is about to lurch into liberalisation. Institutions are now required to ask “What is my unique selling point?” In arts, humanities and social sciences learning, there is particular emphasis on ideas and communication, and often trenchant opposition to acquisitive or behavioural models of education. This presentation will compare established norms of higher learning with some nascent, reincarnated or ‘Big Society’ alternatives, including massive open online courses (MOOCs), online tuition-free universities, and those which elevate learning above accreditation. It will consider some principles of university learning and teaching, including original thinking, critical thinking, creative friction, commitment to a community of inquiry, the concept of scaffolding, and focus particularly on the constraints of assessment and accreditation. Returning university teachers to the centre of the institution, it will ask what university teachers contribute to learning that nobody else can and, with a focus on wikis and blogs, under what circumstances teachers might use these technologies to support this vision of learning.

I’m looking forward to the conference – I hope I can live up to the luminary presentations of previous years – and to catching up with some people I haven’t seen for ages.

Written by Mira Vogel

June 30, 2011 at 16:59

Posted in assessment, cck11, event

The infidelity of rubrics for assessing online discussions


Brief summary of a very interesting paper:

Elliott, B. (2010) A review of rubrics for assessing online discussions. International Computer Assisted Assessment Conference, 20th–21st July 2010, University of Southampton. [17 pages, PDF format]

  • Educational benefits of asynchronous online discussion include: integration; elaboration; communication of outside experiences and material; self-reflection; experience of technologies; time management.
  • Drawbacks include: disembodiment and absence of cues; volume of messages; different demands from face-to-face, and so differently inhibiting.
  • Are assessment processes up to the job? Sadler’s idea of ‘fidelity’, e.g. effort as an ‘input variable’ which therefore shouldn’t fall within the definition of academic achievement.
  • Online collaborative work environments have new affordances. Worries about the ongoing relevance of the assessment process.
  • The study: a literature review yielded 20 rubrics; these were examined with respect to type of rubric, scoring, and type of criteria used within the rubric; there was little consistency of terminology or expression – 128 separate rubric criteria were identified, reduced to 33, and grouped into 10 categories
  • The most commonly occurring criteria related to:
    1. participation
    2. academic discourse
    3. etiquette
    4. learning objectives
    5. critical thinking.
  • Fidelity? More than half of the rubrics made no reference to the learning objectives. Two of the above criteria relate to non-academic competencies, such as participation (the most common category)
  • Conclusions (caution due to small sample): the majority of rubrics for assessing online discussion exhibit low fidelity; none took account of students’ final level of understanding; none took account of the unique potential of the online environment.
  • Recommendations:
    • Express the rubric as criteria
    • Include a holistic assessment of learners’ final level of understanding or competency
    • Make criteria valid measures of course objectives
    • Accordingly, criteria should not reward effort or participation
    • Keep criteria clear, consistent and free of bias
    • Recognise the unique affordances of online writing

A few observations – we have an Employability Strategy now, and with it a growing recognition of the kinds of ‘non-technical skills’ (a.k.a. ‘life skills’ or ‘soft skills’) which Sadler has contentiously called “non-achievements” in academic terms. That said, even if these were enshrined in the learning objectives, the balance in the rubrics identified by Elliott would still be skewed in favour of these ‘soft skills’, at the expense of credit for meeting academic learning objectives.

This is another part of the ongoing double standard (laden phrase, but intended in the most neutral sense) with which society in general tends to approach online environments. We need to talk about this.

Written by Mira Vogel

January 11, 2011 at 14:40

Posted in assessment

JISC guide on assessment in a digital age


The ‘Effective Assessment in a Digital Age’ guide is a product of JISC’s E-Learning Programme.

Received by email:

“Most of us have had formal or informal feedback throughout our lives. The way in which we have been assessed very likely has had a fundamental effect on our learning and career progression. Assessment is one of the most important parts of learning and teaching, and whether institutions get this right or wrong has a huge impact on students’ lives and careers.

JISC’s new guide, Effective Assessment in a Digital Age, demonstrates how technology can significantly improve the experience of assessment and feedback. As many higher education institutions are reviewing their assessment strategies, JISC is looking at the transformative effects of technology that increase learner autonomy, enhance the quality of the assessment experience and improve teaching efficiency.

“Why do we still insist that students, who mostly use technologies such as laptops and mobile phones when researching their assignments, sit down with pen and paper and write long essays when they are assessed?” asks Ros Smith, the author of the guide. “This one-size-fits-all view of assessment still dominates. Perhaps instead we should be thinking much more creatively and be inspired by what technology can do. There are huge benefits to be gained, for example, in giving students choice over assignment formats, allowing them either to write a 5,000-word essay on a topic or to put together a video or audio piece that explores different points of view. Students disadvantaged by traditional written assessments will clearly benefit from this approach, but everyone gains if the use of different media prompts deeper thought around the topic.”

In addition, educational researchers since the 1990s have increasingly argued that assessment should be used to support learning rather than just test and certify achievement. This has shifted the emphasis from the teacher to the learner, as David Nicol, Professor of Higher Education at the University of Strathclyde, explains: “We tend to think of feedback as something a teacher provides, but if students are to become independent lifelong learners, they have to become better at judging their own work. If you really want to improve learning, get students to give one another feedback. Giving feedback is cognitively more demanding than receiving feedback. That way, you can accelerate learning.”

Technology provides ways of enabling students to monitor the standards of their own work. The technology can be designed for the purpose (such as on-screen assessment delivery systems or originality-checking software) or adopted from a pool of widely available generic and often open source software and familiar hardware (such as digital cameras or handheld devices). Sarah Davies, JISC e-Learning Programme Manager, says: “Technologies such as voting systems, online discussion forums, wikis and blogs allow practitioners to monitor levels of understanding and thus make better use of face-to-face contact time. Delivery of feedback through digital audio and video, or screen-capture software, may also save time and improve learners’ engagement with feedback.”

Effective Assessment in a Digital Age outlines some of the key benefits:
• better dialogue and communication that can overcome distance and time constraints
• immediate and learner-led assessment through interactive online tests and tools in the hand (such as voting devices and internet-connected mobile phones)
• authenticity through online simulations and video technologies, and risk-free rehearsal of real-world skills in professional and vocational education
• fast and easy processing and transferring of data
• improved thinking and ownership through peer assessment, collection of evidence and reflection on achievements in e-portfolios
• making visible skills and learning processes that were previously difficult to measure
• a personal quality to feedback, even in large-group contexts.

Links
For accessible Word and PDF versions of Effective Assessment in a Digital Age and full versions of the publication’s case studies, visit: http://www.jisc.ac.uk/digiassess
For details of online resources associated with this publication, visit: www.jisc.ac.uk/assessresource
For information about the JISC e-Learning programme, visit: http://www.jisc.ac.uk/elearningprogramme”

Written by Mira Vogel

September 6, 2010 at 18:22

Posted in assessment


Workshop: enriching feedback with audio and graphical media


The hands-on Goldsmiths Learning Enhancement Unit workshop Enriching Feedback with Audio and Graphical Media took place last week. A detailed referenced handout, plus examples and supporting materials, can be found on learn.gold.

From one participant:

“I found the workshop very comprehensive. It touched all aspects of feedback and how to deliver it. The overview of new tools to deliver feedback was eye opening. I didn’t know some even existed. Overall, I was very pleased I attended. I found the workshop very organised and well-presented.”

Would you like us to organise a repeat? If so, let us know.

Written by Mira Vogel

March 3, 2010 at 14:57

Posted in assessment, event, GLEU

Demonstrating our PRS system at a departmental Away-Day


In the 15 minutes I had, I thought I could show JISC’s 5-minute case-study video from Strathclyde, distribute some unallocated clickers to the 40 participants in the meantime, and then move on to a series of survey questions about:

  • Engagement of students in lectures
  • What most interested them about the PRS
  • Plus an MCQ about what GLEU stands for (to demonstrate showing the correct answer).

I wouldn’t show them a grid of responses, or allocate identities to the clickers on a roster, but I would show a bar chart of the results after each question.

The questions (approximately):

  1. Do you feel that students are engaged during your lectures?
    • Always
    • Usually
    • Sometimes
    • Rarely
    • I’m not prepared to answer that question!
  2. What aspect of PRS most interests you?
    • Diagnostic formative assessment at intervals during lectures to check understanding.
    • To keep students critically engaged throughout lectures
    • Opinion polls
    • As a stimulus for group discussion
    • Another aspect – ask me
  3. What does ‘GLEU’ stand for?
    • A number of options
    • Goldsmiths Learning Enhancement Unit [CORRECT]

I set up early – other presenters were going to switch between my laptop and another presenter’s Mac – then went away and came back.

A number of issues came up, outlined below with some possible resolutions:

  1. For the first survey question the bar chart was empty. The IR receiver was no longer responding to a port check – it wasn’t communicating the clicks to the software. Since I was the last session of a long morning, there was little float time. I had checked this in the morning, but I think the receiver was disconnected while I was away. This should have been OK – systems should be robust enough to cope with this kind of thing – but ours is not the newest. After a number of attempted remedies, I restarted. This worked, and a few people were interested enough to come back from lunch and have a look. So, if you are sharing kit, do a port check before starting the presentation, and if there’s a problem, try restarting first. I also need to check we have the most recent driver installed, so that disconnecting doesn’t flummox the whole thing.
  2. Some participants wanted confirmation that their click had registered, and of which response they had chosen. I had decided not to show the grid because it obscures some of the slide. It is possible to arrange the slide so that the grid can be positioned alongside, or to make the questions available (for reference) on a different screen or in a different way. I think confirmation of choice might only be important if the clickers were allocated to individual students and the responses counted towards something. But AP did mention that he gets students in class to write down their responses, because otherwise, with more complicated questions, they often forget what they initially responded. There is feedback on how many people have responded, which can be seen in the top row of PRS controls.
  3. JM has used clickers as a student at the University of Colorado, where they were a compulsory purchase and used in summative assessment which took place during lectures (5–10% of the final mark), and to register attendance. This was very motivating: students did the reading and turned up for sessions. To do: collect some research evidence on the effects on a) engagement, b) pre-session reading, c) attendance, d) other uses.
  4. To issue each student with their own (loaned) registered clicker, or not? If the principal concern is to keep students engaged, then maybe it is enough to do things anonymously. However, it may be helpful for students to think about the correct answer, or the other options, in relation to what their own answer was. If the clickers are being used for assessment, then each could be allocated by student number. This would preserve anonymity and allow a single gradebook to be presented to all. Is there a Moodle plugin to make Interwrite talk to the learn.gold gradebook? (A rough sketch of the kind of mapping involved follows this list.)
  5. Distributing and collecting the clickers. The best scenario is that they are issued to students at the beginning of the year, perhaps via the library. But if they cannot be issued, or there aren’t enough to go round, then perhaps it could be workable to delegate handing them out and collecting them each session, or to get students to replace them in the case themselves. Some thought is needed to streamline this. It is impractical (and probably unnecessary?) to store the clickers in number order, but it’s likely that a few will be lost each time unless there is some way to count them in and out. So I think that distributing and collecting them each time will be a challenge.
  6. Battery changes. There need to be some spare clickers handy, and some spare batteries too, during sessions. If we are not issuing clickers to individual students long term, and if there’s to be a bulk battery change, I think that students/users can do this (with batteries we supply). It’s not possible to use rechargeables on an institutional scale, but we can recycle with BatteryBack. The batteries last a long time.
  7. The kit is heavy and bulky. Departmental laptops can have the software, and it can be installed on teaching pool room machines. Loaning the clickers out long term to individual students is the most convenient option, but otherwise we can make them available in bulk in a carry-case and allow some to be kept in a department along with the IR receiver(s).
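
On the gradebook question in point 4: even without a plugin, the bridge between clicker responses and a gradebook is essentially a join between a clicker-to-student roster and the response export, keyed on clicker ID. The sketch below is only illustrative and rests on assumptions – the file names, column headings and the very idea of a CSV export are hypothetical, not documented features of Interwrite PRS or learn.gold.

    # Illustrative only: join a (hypothetical) PRS response export to a
    # clicker-to-student roster and write a CSV suitable for a gradebook upload.
    # File names and column headings are assumptions, not real Interwrite output.
    import csv

    def build_gradebook(responses_path, roster_path, out_path):
        # roster.csv: clicker_id,student_number
        with open(roster_path, newline="") as f:
            roster = {row["clicker_id"]: row["student_number"]
                      for row in csv.DictReader(f)}

        # responses.csv: clicker_id,question,score
        totals = {}
        with open(responses_path, newline="") as f:
            for row in csv.DictReader(f):
                student = roster.get(row["clicker_id"])
                if student is None:
                    continue  # unregistered clicker stays anonymous
                totals[student] = totals.get(student, 0) + int(row["score"])

        # One row per student number, no names, so anonymity is preserved
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["student_number", "clicker_total"])
            for student, total in sorted(totals.items()):
                writer.writerow([student, total])

    # Expects the two CSVs alongside the script
    build_gradebook("responses.csv", "roster.csv", "gradebook_upload.csv")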

I think it would be ideal if a tutor for a given course piloted PRS, and let us know what the opportunities and issues are. We would offer solid support for a pilot like this.

Written by Mira Vogel

June 18, 2009 at 14:59

Posted in assessment, PRS, psychology

Giving feedback to students by audio and screen capture


As frequently mentioned on this blog, students across the sector perceive grave shortcomings when it comes to feedback. Alongside this, there is near-total consensus that assessments and assignments should always be formative even where they are summative. So, any new intervention which could improve the way feedback is given is worth consideration.

Yesterday at the London School of Economics’ Teaching Day, I was lucky enough to attend the session on ‘Talking to your students using audio feedback’, led by Steve Bond and Matt Lingard.

The abstract for the session:

Talking to your students using audio feedback

This seminar will present examples of use of audio feedback from universities around the UK. Participants will have the opportunity to discuss in small groups how they might use these techniques in their own teaching. It will also provide practical advice on how to get started with the use of audio.

This revived an idea brewing for a while, which was to try this at Goldsmiths.

Why audio feedback?

  • The Sounds Good project, based at Leeds, found that 90% of 1,200 students preferred audio feedback
  • It was the personal aspect which was most appreciated – the nuance and warmth of tone. The feedback was felt to be generally richer. This is very promising for larger cohorts where feelings of impersonality can prevail.
  • Also appreciated was the increase in the amount of feedback – staff can give more in the same time because speaking is quicker than writing.

How?

  • It’s very straightforward to do this on a desktop or laptop. Sometimes there is a decent integrated microphone on your machine, or you can use an inexpensive one, e.g. the mic on an existing web-conferencing headset. CELT can lend you a mic if you need one. The software is free.
  • Tutors say that it takes about 12 goes to hone and optimise the process, but after that it’s quick and easy
  • Because the humanity of the feedback is one of the things that’s valued, there is no need to script what you are going to say – some brief notes are sufficient and umms and ahs are not a problem
  • Because context is often important, Steve and Matt had hit on Jing, a free screen-capture tool. This has the added benefit of the tutor being able to talk to the piece of work they have marked, and to use gestures or highlights as well as speaking. It is possible to scroll through the work on-screen and talk it through bit by bit, in context.
  • The feedback is saved as a file and can be uploaded to each student’s private space in learn.gold’s Assignment tool.

Any caveats?

  • Of course
  • Not all students can hear – a few may need or prefer text feedback
  • If you are one of those tutors who is fortunate enough to have time to give ample written feedback, and to discuss this with students, then audio / screen capture feedback may well feel like a step back. It’s more relevant for tutors who can’t.
  • The feedback is not searchable or easily skimmable in the way that text feedback is. Depending on how long your feedback is, this may or may not be a problem. You could provide your outline plan to the student, and on it make a note of the timings of when you started talking about a given section.

Find out more

Worth investigating further? Contact Mira or John at celt@gold.ac.uk.

Written by Mira Vogel

June 10, 2009 at 11:21

Posted in assessment, audio

Formative e-assessment – 28 Apr 09 at the IoE


Update (6 May 09) – the presentations are now available.

This event was held at the Institute of Education’s Centre for Excellence in Work-Based Learning for Educational Professionals on 28th April 2009, to disseminate and discuss the findings of the Formative E-Assessment (FEASST) project.

FEASST was funded by JISC. The project elicited cases of real-world assessment practice from academics and abstracted these into patterns – a state somewhere between anecdote and grand theory which other tutors could use in their own practice (it did not seek to make recommendations). The project report is available, and the presentations will soon be (hopefully linked from the project site).

I came to this event with feedback foremost in mind. The crux of designing formative assessment is designing for good feedback – feedback which is positioned and made in such a way that it can change learning for the better. Across the sector (Goldsmiths included), respondents to the National Student Survey tell us they want more and better feedback on the work they submit. From the Times Higher:

Graham Gibbs, visiting professor at Oxford Brookes University, says assessment is teachers’ main lever “to change the way students study and get them to put effort into the right things”.

Ever since the National Student Survey was launched in 2005, students have consistently given the lowest scores to the assessment and feedback they receive. The National Union of Students says this means there is some way to go before the sector does indeed “get it right”.

The National Union of Students continues to campaign in this area. Their principles of effective assessment begin (my emphases):

“1. Should be for learning, not simply of learning.”

Throughout the day, this view of assessment was confirmed.

On the meaning of ‘formative’

It’s very easy to set assignments and collect learners’ work, but much more of a challenge to do something with it.

Assessment for learning requires designs which afford moments of contingency and scope for modification in such a way that the gap between what the learner knows and what they need to know can be closed; which focus on each learner’s trajectory; and the output of which is in some way measurable and comparable. If there is no scope for feeding back into and modifying the learning experience, then the assessment can’t be said to be formative.

The project report and presentations provide some definitions – it was felt to be important to nail down what ‘formative’ means here. Black and Wiliam (2009) conceptualise formative assessment as five key strategies:

  1. Engineering effective classroom discussion, questions, and learning tasks that elicit evidence of learning;
  2. Providing feedback that moves learners forward
  3. Clarifying and sharing learning intentions and criteria for success
  4. Activating students as owners of their learning
  5. Activating students as resources for one another

Roles – tutors, learners and peer learners

Diana Laurillard’s Conversational Framework, which underpinned the concept of formative assessment in the project, includes the roles of tutor, learner and peer learner.

We can think about feedback in terms of tutors’ pedagogical development, or in terms of peer learners’ metacognitive development, or in terms of an automated system.

What emerged strongly was the push towards peer learning and peer assessment, and towards automation. But to what extent can peers, who are themselves learners, fulfil the criteria outlined above by Black and Wiliam? The Soft-Scaffolding case study may provide insights.

What also emerged strongly was the importance of embodying sound disciplinary and pedagogical understanding, so that feedback can be grounded in the difference between what a given student knows and what they are required to know, and can facilitate progression in this direction. Giving good feedback is rarely intuitive. Indeed, some studies have found that feedback can negatively affect learning. If feedback is given too much, too early, or too frequently, students can grope towards the correct (or favoured) response but may not be able to improve on this performance in subsequent challenges. Poor feedback may fossilise error. Some formulations of feedback can discourage students.

Wiliam mentioned that in schools the speed of learning of children with the best teachers is up to four times that of children with the worst teachers.

There was a question about capturing the tacit knowledge experts bring to bear on their marking and feedback. Wiliam mentioned ‘judgement policy capture’ (if I heard this correctly), which involves analysing experts as they mark, e.g., submitted work which is boring but whose grammar and syntax are perfect, and the other way round, and seeing how much difference this makes.

On the role of the ‘e’ in e-assessment

And what does the ‘e’ mean here – for Goldsmiths? The FEASST project identified the following attributes of the ‘e’:

  • speed – timeliness of response, allowing the next iteration of problem solving (particularly in objective assessments where feedback can be automated) to begin more quickly.
  • storage capacity – e.g. of feedback snippets, illustrations
  • processing – automation, adaptivity to individual learners, scalability

I’d also add:

  • contextualising feedback – one example of this is the ability, in most word-processing software, to comment directly onto work in such a way that comments appear in context rather than disjointedly at the end or in a separate document.
  • categorisation – related to storage, the ability of tutors to excerpt, highlight or ‘tag’ examples of phenomena (e.g. pitfalls, strong analysis) in a given piece of work, or across submissions in a given assignment
  • searchability – students can search comments as well as tutors searching for illustrations and examples

However, beyond a delivery mechanism, the potential of e-formative assessment is uncertain.

Assessment case studies and patterns incorporating ‘e’

FEASST collected cases of real-world assessment practice from academics and abstracted these into patterns, intended for other tutors to use in their own practice. A number of the presentations illustrated the power of the ‘e’. The ‘Try Once Refine Once’ pattern was based on the case of Spanish language learners translating 40 assessments; without the automation, tutors would have been burdened with marking 10,000 sentences during (if I heard correctly) a single term. The Feedback On Feedback pattern was based on the Open University’s Open Mentor – a system which captured tutor feedback and graphically represented it as different types (positive, negative, questions, answers). The Narrative Spaces pattern was intended to promote the exploration of mathematical ideas in (seemingly antithetical) narrative form, and online environments clearly afforded this narrative construction in different media. The Audiofiles case was a project which had investigated the effects of providing feedback in the form of audio files returned to students via the VLE, and found some effects – including that comments were richer, longer, more personalised and more emphatic (on audio feedback see also this study).

So, in some cases the ‘e’ enabled the scaling up of assessment, creating valuable opportunities for practice with high-quality feedback and minimising the risk of learners simply fossilising errors through repetition. In other cases, it provided environments for externalising thinking in different media. And in others the focus was on the tutor’s pedagogical development, using sophisticated natural language recognition to analyse tutors’ feedback comments and represent them in categories, along with suggestions about how to shift from less to more effective types.
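
As a toy illustration of that categorisation step – and emphatically not how Open Mentor actually works – the fragment below sorts a handful of free-text tutor comments into rough types and counts them. The keyword lists and sample comments are invented for the example.

    # Toy sketch: sort tutor feedback comments into rough types
    # (positive, negative, question), loosely in the spirit of Open Mentor.
    # Keyword lists are made up for illustration; the real system is far
    # more sophisticated.
    from collections import Counter

    POSITIVE = ("well done", "good", "convincing", "strong")
    NEGATIVE = ("unclear", "weak", "missing", "incorrect")

    def categorise(comment):
        text = comment.lower()
        if "?" in text:
            return "question"
        if any(word in text for word in POSITIVE):
            return "positive"
        if any(word in text for word in NEGATIVE):
            return "negative"
        return "other"

    comments = [
        "Well done – a clear introduction.",
        "The argument in section 2 is unclear.",
        "Have you considered the counter-evidence?",
    ]
    print(Counter(categorise(c) for c in comments))
    # e.g. Counter({'positive': 1, 'negative': 1, 'question': 1})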

To summarise, the ‘e’ has far more to offer than right, wrong and a grade.

On feedback

Below are ideas and findings from several of the presentations. The project literature review (presented by Daly) was particularly rich in feedback findings. Of particular relevance to non-objective, qualitative feedback is the work of Shute (2008).

In his presentation, Wiliam mentioned a literature review on feedback which had identified c. 5,000 research studies; 131 of these studied the effects of feedback, and in around 50 of those cases feedback made learners perform worse. So a good question to ask about feedback is: what kind of response does it trigger in the learner? Feedback needs to anatomise quality; it needs to focus not on the individual but on the task.

It’s often the case that feedback comes too late to be relevant – particularly to those learners who are focussed on their summative assessments (the ones “that count”). Feedback needs to be a medical rather than a post-mortem – looking through the windscreen rather than through the rear-view mirror.

Some theory about feedback from Wiliam’s presentation. Grades – even high grades – don’t communicate what is good about a piece of work so that this can be repeated later, nor what could be improved. Receiving a grade alone exacerbates the normative effects on self-image in the proportion of learners who regard ability as fixed – such learners feel an impulse to compare their own grade with those of their peers, and to evaluate themselves comparatively in this way.

The key, then, is to promote a view of learning as incremental, and to do so by providing feedback which fosters cognitive engagement in learning – it causes thinking. This leads to activation of attention along the growth pathway rather than the well-being pathway – it promotes mastery orientation rather than performance orientation. Read Boekaerts on self-regulated learning and the difference between the growth pathway and the well-being pathway in learning.

As outlined in the project report, good feedback needs to:

  • alert learners to areas of weakness
  • diagnose the causes and dynamics of the weaknesses
  • make suggestions about opportunities to improve learning
  • address socio-emotive factors in communicating the above

More

Looking back at this post, I’ve omitted some of the day – most notably Diana Laurillard’s mapping of the patterns into her extended Conversational Framework. This presentation is definitely worth following up.

Some e-assessment links from the FEASST project:

References

Black and Wiliam (2009) Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5–31.

Boekaerts, Zeidner and Pintrich (eds) (2000) Handbook of Self-Regulation: Research, Theory and Application. Elsevier.

Shute (2008) Focus on formative feedback. Review of Educational Research, 78, 153–189.

Written by Mira Vogel

April 30, 2009 at 17:20