Learning Technology jottings at Goldsmiths


Formative e-assessment – 28 Apr 09 at the IoE


Update (6 May 09) – the presentations are now available.

This event was held at the Institute of Education's Centre for Excellence in Work-Based Learning for Educational Professionals on 28th April 09, to disseminate and discuss the findings of the Formative E-Assessment (FEASST) project.

FEASST was funded by JISC. The project elicited cases of real-world assessment practice from academics and abstracted these into patterns – a state somewhere between anecdote and grand theory which other tutors could use in their own practice (it did not seek to make recommendations). The project report is available, and the presentations soon will be (hopefully linked from the project site).

I came to this event with feedback foremost in mind. The crux of designing formative assessment is designing for good feedback – feedback which is positioned and framed in such a way that it can change learning for the better. Across the sector (Goldsmiths included), respondents to the National Student Survey tell us they want more and better feedback on the work they submit. From the Times Higher:

Graham Gibbs, visiting professor at Oxford Brookes University, says assessment is teachers’ main lever “to change the way students study and get them to put effort into the right things”.

Ever since the National Student Survey was launched in 2005, students have consistently given the lowest scores to the assessment and feedback they receive. The National Union of Students says this means there is some way to go before the sector does indeed “get it right”.

The National Union of Students continues to campaign in this area. Their principles of effective assessment begin (my emphases):

“1. Should be *for* learning, not simply *of* learning.”

Throughout the day, this view of assessment was confirmed.

On the meaning of ‘formative’

It’s very easy to set assignments and collect learners’ work, but much more of a challenge to do something with it.

Assessment for learning requires designs which afford moments of contingency and scope for modification in such a way that the gap between what the learner knows and what they need to know can be closed; which focus on each learner’s trajectory; and the output of which is in some way measurable and comparable. If there is no scope for feeding back into and modifying the learning experience, then the assessment can’t be said to be formative.

The project report and presentations provide some definitions – it was felt important to nail down what ‘formative’ means here. Black and Wiliam (2009) conceptualise formative assessment as five key strategies:

  1. Engineering effective classroom discussion, questions, and learning tasks that elicit evidence of learning;
  2. Providing feedback that moves learners forward;
  3. Clarifying and sharing learning intentions and criteria for success;
  4. Activating students as owners of their learning;
  5. Activating students as resources for one another.

Roles – tutors, learners and peer learners

Diana Laurillard’s Conversational Framework, which underpinned the concept of formative assessment in the project, includes the roles of tutor, learner and peer learner.

We can think about feedback in terms of tutors’ pedagogical development, or in terms of peer learners’ metacognitive development, or in terms of an automated system.

What emerged strongly was the push towards peer learning and peer assessment, and towards automation. But to what extent can peers, who are themselves learners, fulfil the criteria outlined above by Black and Wiliam? The Soft-Scaffolding case study may provide insights.

What also emerged strongly was the importance of embodying sound disciplinary and pedagogical understanding, so that feedback is grounded in the difference between what a given student knows and what they are required to know, and facilitates progression in that direction. Giving good feedback is rarely intuitive; indeed, some studies have found that feedback can negatively affect learning. If feedback is too copious, too early or too frequent, students can grope towards the correct (or favoured) response but may be unable to improve on that performance in subsequent challenges. Poor feedback may fossilise error, and some formulations of feedback can discourage students.

Wiliam mentioned that in schools, children taught by the best teachers learn up to four times as fast as those taught by the worst.

There was a question about capturing the tacit knowledge experts bring to bear on their marking and feedback. Wiliam mentioned ‘judgement policy capture’ (if I heard this correctly), which involves analysing how experts mark – for example, submitted work which is boring but whose grammar and syntax are perfect, and the other way round – and seeing how much difference each makes.

On the role of the ‘e’ in e-assessment

And what does the ‘e’ mean here – for Goldsmiths? The FEASST project identified the following attributes of the ‘e’:

  • speed – timeliness of response, allowing the next iteration of problem solving to begin more quickly (particularly in objective assessments, where feedback can be automated – a minimal sketch follows this list)
  • storage capacity – e.g. of feedback snippets, illustrations
  • processing – automation, adaptivity to individual learners, scalability
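
As a concrete illustration of the first and third of these, here is a minimal sketch of an auto-marked objective item. It is entirely hypothetical – the question, the stored hint and the mark() function are my own invention rather than anything FEASST built – but it shows how automation makes feedback immediate, so the next attempt can begin straight away:

```python
# Hypothetical auto-marked item -- my own sketch, not a FEASST system.
# The point: feedback is instant ('speed') and automated ('processing'),
# so the next iteration of practice can begin straight away.

QUESTION = {
    "prompt": "Translate into Spanish: 'the red house'",
    "answer": "la casa roja",
    # A stored response to one known misconception (gender agreement)
    "hints": {"la casa rojo": "Adjectives agree in gender: 'casa' is feminine."},
}

def mark(response: str) -> str:
    """Return immediate, targeted feedback rather than a bare right/wrong."""
    normalised = response.strip().lower()
    if normalised == QUESTION["answer"]:
        return "Correct."
    # A stored hint for a known misconception beats a bare 'wrong'
    return QUESTION["hints"].get(
        normalised, "Not quite -- check noun-adjective agreement."
    )

print(mark("la casa rojo"))  # prints the gender-agreement hint, instantly
```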

I’d also add:

  • contextualising feedback – one example is the ability, in most word-processing software, to comment directly onto work so that comments appear in context rather than disjointedly at the end or in a separate document
  • categorisation – related to storage, the ability of tutors to excerpt, highlight or ‘tag’ examples of phenomena (e.g. pitfalls, strong analysis) in a given piece of work, or across submissions in a given assignment
  • searchability – students can search comments, and tutors can search for illustrations and examples (a toy sketch of such a tagged, searchable store follows this list)
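
To make the last two of these concrete, here is a toy sketch of a tagged, searchable feedback store. The names (FeedbackSnippet, FeedbackStore) and fields are my own hypothetical choices, not an existing tool:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackSnippet:
    """One tutor comment, anchored to a location in a submission."""
    student: str
    assignment: str
    location: str          # e.g. "paragraph 3" -- keeps the comment in context
    text: str
    tags: set = field(default_factory=set)   # e.g. {"strong-analysis"}

class FeedbackStore:
    """Toy in-memory store illustrating categorisation and searchability."""
    def __init__(self):
        self.snippets = []

    def add(self, snippet: FeedbackSnippet) -> None:
        self.snippets.append(snippet)

    def by_tag(self, tag: str) -> list:
        # Tutor view: every example of a phenomenon, across submissions
        return [s for s in self.snippets if tag in s.tags]

    def search(self, term: str) -> list:
        # Student view: full-text search over the comments they received
        return [s for s in self.snippets if term.lower() in s.text.lower()]

store = FeedbackStore()
store.add(FeedbackSnippet("A. Student", "Essay 1", "paragraph 3",
                          "Strong use of primary sources here.",
                          {"strong-analysis"}))
print(len(store.by_tag("strong-analysis")))   # -> 1
```

The design point is simply that the same stored comments serve two audiences: tutors retrieving examples by tag, and students searching the feedback they have received.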

However, beyond acting as a delivery mechanism, how much the ‘e’ can transform formative assessment remains uncertain.

Assessment case studies and patterns incorporating ‘e’

As noted above, FEASST abstracted its collected cases into patterns intended for other tutors to use in their own practice, and a number of the presentations illustrated the power of the ‘e’. The ‘Try Once Refine Once’ pattern was based on the case of Spanish language learners translating 40 assessments; without automation, tutors would have been burdened with marking 10,000 sentences during (if I heard correctly) a single term. The ‘Feedback On Feedback’ pattern was based on the Open University’s Open Mentor – a system which captured tutor feedback and graphically represented it as different types (positive, negative, questions, answers). The ‘Narrative Spaces’ pattern was intended to promote the exploration of mathematical ideas in (seemingly antithetical) narrative form, and online environments clearly afforded this narrative construction in different media. The Audiofiles case was a project which investigated the effects of returning feedback to students as audio files via the VLE, and found some effects – comments were richer, longer, more personalised and more emphatic (on audio feedback see also this study).

So in some cases the ‘e’ enabled assessment to be scaled up, creating valuable opportunities for practice with high-quality feedback and minimising the risk of learners simply fossilising errors through repetition. In other cases it provided environments for externalising thinking in different media. And in others the focus was on the tutor’s pedagogical development, using sophisticated natural language recognition to analyse tutors’ feedback comments, represent them in categories, and suggest how to shift from less to more effective types.
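
Open Mentor’s actual analysis is far more sophisticated than anything I could reproduce here, but a toy rule-based version conveys the idea of bucketing tutor comments into the four types mentioned above. The cue words and the fallback rule are my own guesses, not Open Mentor’s real rules:

```python
from collections import Counter

# Crude illustrative cues -- NOT Open Mentor's real rules.
CUES = {
    "positive": ("well done", "good", "excellent", "strong"),
    "negative": ("unclear", "weak", "missing", "incorrect"),
}

def classify(comment: str) -> str:
    """Assign a tutor comment to one of the four types named above."""
    text = comment.lower().strip()
    if text.endswith("?"):
        return "question"
    for category, cues in CUES.items():
        if any(cue in text for cue in cues):
            return category
    return "answer"   # crude default bucket for plain statements

def profile(comments: list) -> Counter:
    """The distribution a chart like Open Mentor's would be drawn from."""
    return Counter(classify(c) for c in comments)

print(profile([
    "Well done - a strong opening paragraph.",
    "Your argument here is unclear.",
    "Have you considered the counter-evidence?",
]))   # Counter with one 'positive', one 'negative', one 'question'
```

A real system would, as noted, use natural language processing rather than keyword matching; the point is only the shape of the output – a profile of comment types that can be charted and discussed with the tutor.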

To summarise, the ‘e’ has far more to offer than right, wrong and a grade.

On feedback

Below are ideas and findings from several of the presentations. The project literature review (presented by Daly) was particularly rich in feedback findings. Of particular relevance to non-objective, qualitative feedback is the work of Shute (2008).

In his presentation, Wiliam mentioned a literature review on feedback which had identified c. 5,000 research studies; 131 of these examined the effects of feedback, and in around 50 of them – nearly two in five – feedback made learners perform worse. So a good question to ask of any feedback is: what kind of response does it trigger in the learner? Feedback needs to anatomise quality, and to focus not on the individual but on the task.

It’s often the case that feedback comes too late to be relevant – particularly to those learners who are focussed on their summative assessments (the ones “that count”). Feedback needs to be a medical rather than a post-mortem – looking through the windscreen rather than through the rear-view mirror.

Some theory about feedback from Wiliam’s presentation: grades – even high grades – don’t communicate what is good about a piece of work so that it can be repeated later, nor what could be improved. Receiving a grade alone exacerbates the normative effects to do with self-image among the proportion of learners who regard ability as fixed – such learners feel an impulse to compare their own grade with their peers’, and to evaluate themselves comparatively in this way.

The key, then, is to promote a view of learning as incremental, and to do so by providing feedback which fosters cognitive engagement in learning – it causes thinking. This activates attention along the growth pathway rather than the well-being pathway – it promotes mastery orientation rather than performance orientation. Read Boekaerts on self-regulated learning and the difference between the growth and well-being pathways in learning.

As outlined in the project report, good feedback needs to:

  • alert learners to areas of weakness
  • diagnose the causes and dynamics of the weaknesses
  • make suggestions about opportunities to improve learning
  • address socio-emotive factors in communicating the above

More

Looking back at this post, I’ve omitted some of the day – most notably Diana Laurillard’s mapping of the patterns into her extended Conversational Framework. This presentation is definitely worth following up.


References

Black, P. and Wiliam, D. (2009) ‘Developing the theory of formative assessment’. Educational Assessment, Evaluation and Accountability, 21(1), 5–31.

Boekaerts, M., Zeidner, M. and Pintrich, P. (eds) (2000) Handbook of Self-Regulation: Research, Theory and Applications. Elsevier.

Shute, V. (2008) ‘Focus on formative feedback’. Review of Educational Research, 78(1), 153–189.

Written by Mira Vogel

April 30, 2009 at 17:20