
FORMATIVE ASSESSMENT

Formative evaluation can draw on observation, in-depth interviews, surveys, focus groups, analysis, reports, and dialogue with participants.

Here are a number of different technologies that can be used to provide formative assessment and feedback opportunities. They should never be divorced from pedagogic concerns or 'V.A.T.': what is the Value Added by Technology? Otherwise we will simply mechanise bad practice and disengage the student. These are listed below.

**Standard formative assessment and feedback**
 * Multiple choice questions (MCQs) and 'objective' questions with feedback
 * Tutor whole-group and individualised feedback on a specific task through a blog or forum
 * Feedback on tasked online treasure hunts or short webquests across tutor-chosen resources
 * Feedback on a Pebble Pad action plan or structured thought
 * Opportunities to review initial blog posts alongside current ones, moving away from a deficit model of what students do not know

**Moving beyond the standard formative assessment and feedback**
 * Formative assessment using voting systems in class (e.g. Turning Point); students' responses can be anonymised for sensitive subjects
 * Regular feedback on a structured writing frame template in a Pebble Pad webfolio, or via a Word document's comment functions
 * Tutor and group discussion with feedback on a multimedia blog using Web 2.0 resources, e.g. a YouTube clip or a link to a contentious Wikipedia entry

**Higher-level formative assessment and feedback**
 * Work submitted and returned with individual feedback and annotations on scripts
 * Peer-to-peer online formative feedback
 * Feedback on student-authored audio podcasts or Learning Autobiographies made with Windows Movie Maker
 * Level 3 students returning to their Level 1 Pebble Pad webfolio structured writing templates and reflecting on their 'learning journey'

 So formative assessment opportunities are an essential intervention in the learning process because they give:
 * learners the chance to assess and reassess themselves on what they know
 * teachers a chance to provide extra teaching, tasks and examples
 * learners an opportunity to apply their understanding
 * teachers a chance to focus questions and give students an understanding of what is important





Assessment Principles: Some possible candidates
Below are three sets of principles that might be used to guide the design of assessment in higher or further education. The first set, of which there are 11, has informed the work of the Reengineering Assessment Practices (REAP) project (**[|www.reap.ac.uk]**). The second set is a more comprehensive list developed at the University of Strathclyde by the Assessment Working Group, which has been tasked with reformulating the policy and practice of assessment across the institution. The final set of seven was proposed in the US by Chickering and Gamson (1991), based on their review of good undergraduate education. These principles are a starting point in trying to understand the relationship between the theory and practice of assessment.

Assessment design should:

**"empower"**
 * 1) Engage students actively in identifying or formulating criteria
 * 2) Facilitate opportunities for self-assessment and reflection
 * 3) Deliver feedback that helps students self-correct
 * 4) Provide opportunities for feedback dialogue (peer and tutor-student)
 * 5) Encourage positive motivational beliefs and self-esteem
 * 6) Provide opportunities to apply what is learned in new tasks
 * 7) Yield information that teachers can use to help shape teaching

**"engage"**
 * 1) Capture sufficient study time and effort in and out of class
 * 2) Distribute students’ effort evenly across topics and weeks
 * 3) Engage students in deep, not just shallow, learning activity
 * 4) Communicate clear and high expectations to students

Adapted from **[|Nicol and Macfarlane-Dick (2006)]** and **[|Gibbs and Simpson (2004)]**. These are principles of ‘//good assessment design for the development of learner self-regulation//’. The first seven are about using assessment tasks to develop learner independence or learner self-regulation ("empowerment"). The final four principles are about using assessment tasks to promote time on task and productive learning ("engagement"). Balancing the "engagement" and "empowerment" principles is important in the early years of study. A paper describing the application of these principles to two case studies can be found **[|here]**.

The REAP project: A wider set of principles
The twelve formative assessment principles in the table below were developed through the REAP project. They provide guidance for teachers interested in improving the quality of the learning experience of students in higher education. These principles are based on recent research on assessment (Yorke, 1987; Nicol & Macfarlane-Dick, 2004, 2006, in press; Boud, 2000; Knight, 2002; Knight and Yorke, 2003), the QAA guidelines on assessment of student learning (2006) and published studies of university policies and practices that are associated with high levels of student success (Kuh, Kinzie, Schuh and Whitt, 2003; Tinto, 1991). Overall, this research suggests that independent and lifelong learning, and the academic and social dimensions of learning, can be enhanced when formative assessment practices are designed using the ideas expressed in Table 1. For each principle, a //key question// is provided that teachers might use to think about, and review, formative assessment practices in their courses or programmes.

**Table 1: Principles of good formative assessment and feedback**

 * 1) **Help clarify what good performance is (goals, criteria, standards).** //To what extent do students in your course have opportunities to engage actively with goals, criteria and standards, before, during and after an assessment task?//
 * 2) **Encourage ‘time and effort’ on challenging learning tasks.** //To what extent do your assessment tasks encourage regular study in and out of class and deep rather than surface learning?//
 * 3) **Deliver high quality feedback information that helps learners self-correct.** //What kind of teacher feedback do you provide, and in what ways does it help students self-assess and self-correct?//
 * 4) **Provide opportunities to act on feedback (to close any gap between current and desired performance).** //To what extent is feedback attended to and acted upon by students in your course, and if so, in what ways?//
 * 5) **Ensure that summative assessment has a positive impact on learning.** //To what extent are your summative and formative assessments aligned and supportive of the development of valued qualities, skills and understanding?//
 * 6) **Encourage interaction and dialogue around learning (peer and teacher-student).** //What opportunities are there for feedback dialogue (peer and/or tutor-student) around assessment tasks in your course?//
 * 7) **Facilitate the development of self-assessment and reflection in learning.** //To what extent are there formal opportunities for reflection, self-assessment or peer assessment in your course?//
 * 8) **Give choice in the topic, method, criteria, weighting or timing of assessments.** //To what extent do students have choice in the topics, methods, criteria, weighting and/or timing of learning and assessment tasks in your course?//
 * 9) **Involve students in decision-making about assessment policy and practice.** //To what extent are students in your course kept informed or engaged in consultations regarding assessment decisions?//
 * 10) **Support the development of learning communities.** //To what extent do your assessments and feedback processes help support the development of learning communities?//
 * 11) **Encourage positive motivational beliefs and self-esteem.** //To what extent do your assessments and feedback processes activate your students’ motivation to learn and be successful?//
 * 12) **Provide information to teachers that can be used to help shape the teaching.** //To what extent do your assessments and feedback processes inform and shape your teaching?//

Good practice in undergraduate education:
Chickering and Gamson (1991)
 * 1) Encourages contact between students and faculty
 * 2) Develops reciprocity and cooperation among students
 * 3) Uses active learning techniques
 * 4) Gives prompt feedback
 * 5) Emphasizes time on task
 * 6) Communicates high expectations
 * 7) Respects diverse talents and ways of learning

Chickering and Gamson (1991), //Applying the Seven Principles for Good Practice in Undergraduate Education//, San Francisco: Jossey-Bass. A useful document with examples of applications of these principles can be found at:
 * [|__http://www.csuhayward.edu/wasc/pdfs/End%20Note.pdf__]

By ‘assessment’, we mean “the process of gathering information about students or a program”. And by ‘assessment FOR learning’, we refer to formative assessment, which is “designed to provide direction for improvement and/or adjustment to a program for individual students or for a whole class”. The following material is therefore for teachers who want to know how to gather information about students’ learning, and how best to communicate such information to students so that they improve and achieve success in learning.

Interpretations of errors differ across cultures: “For Americans, errors tend to be interpreted as an indication of failure in learning the lesson. For Chinese and Japanese, they are an index of what still needs to be learned.” (Stigler and Stevenson, American Educator, Spring 1991). Such different interpretations result in a variety of reactions to the display of errors, which in turn have implications for how teachers use errors as an effective means of instruction. Since my learners are all Asian, I find that they respond positively to error correction. The attitude they generally exhibit is one of quiet acceptance of their mistakes and a willingness to do better next time. If given a chance to modify, add to or remove errors from assignments, my Japanese students will willingly do so 100% of the time. Based on observation, they also exhibit a better understanding of the material modified, added to or changed: proof that, in this case, such feedback contributes to their learning and is not merely seen as a way to improve a grade.

The use of rubrics to define tasks and to communicate learning outcomes to students is something I have come to rely on heavily in the classroom over the years. From experience, I find it an objective means of assessing students’ work. Over the course of a task or project, the rubric becomes a tool to guide and direct a student’s progress. It gives a clear picture of what ‘success’ looks like. If changes have to be made along the way, it specifies which areas to work on, thus contributing to student confidence and motivation to work harder towards achievement of the task or project. Another positive aspect of using rubrics is as a means of communicating expectations between school and home. At the start of a project or task, I ask students to show the rubric to their parents and have it signed by them. This way, accountability rests on all three: the teacher, the students and the parents.

According to Covington (1992), “the process of engaging in self-assessment increases students’ commitment to achieving educational goals.” As to how to do this, Rick Stiggins contends that students should be involved in all three processes in assessment, namely: (a) in the construction of assessments and in the development of the criteria for success; (b) in keeping records of their own growth and achievement through such strategies as portfolios; and (c) in communicating their achievement through such vehicles as student-involved parent conferences. In my Writing for College class, students were asked, in small groups, to come up with what “good writing” looks like. Their work was then put up on posters in the classroom and constantly referred to while working on tasks, or during the design and development of a rubric.

Stiggins (Student-Involved Classroom Assessment, 2001) contends, “those (students) who experience success gain the confidence needed to risk trying… (while those) who experience failure lose confidence in themselves, stop trying and… fail even more frequently. Confidence, therefore, is the key to student success in all learning situations.” By motivation here, we mean intrinsic motivation: the kind that comes from students gaining confidence in themselves by knowing what is expected of them, what evidence is required to show success, and what kinds of activities and/or tasks will get them there.

In order for all this to happen, the teacher has yet another responsibility: giving effective feedback. Just recently, a discussion with a former student shed more light on what “quality” feedback means. She could not have emphasized its value in learning more than when she said that she wished she were back in my class, where she was told ways to improve her skills. She gets tons of work now, but her teacher gives no feedback on which areas she needs to improve, or how. Butler and Neuman (1995), Cameron and Pierce (1994) and Kluger and DeNisi (1996) make a case for the use of descriptive, criterion-based feedback as opposed to scoring or letter grades without clear criteria. They further add that “feedback that cues the individual to direct attention to self (praise, effort, etc.) rather than to the quality of the task appears to have a negative effect on learning. Many studies speak to effective teachers praising less than average.”

“There are three general sources of assessment evidence gathered in classrooms: observations of learning, products students create, and conversations – discussing learning with students. When evidence is collected from three different sources over time, trends and patterns become apparent… This process is called triangulation” (Davies, Anne, Making Classroom Assessment Work, 2000).

There are two important opportunities I find helpful for improving my assessment practice at my school. One involves the chance to “grade” students’ writing with other language teachers. The purpose is to assess our students using the 6+1 Traits rubric and see where we teachers need to work more to improve students’ writing skills. As it involves teachers comparing the grades we give to the same written piece on each trait, it gives me a chance to adjust the way I grade (where necessary), based on discussion of other teachers’ perspectives. This happens when there are big differences in the grades we assign to any particular trait. The other opportunity I have to improve my assessment practice is the chance to present an assessment task to a small group of colleagues at school. The other teachers “work” on the task and give feedback afterwards on areas such as: whether the students were properly prepared for the task, given the language skills and assumed prior knowledge of the target group; whether the directions were clear; and whether the task addressed the standards and benchmarks it was designed for. From these discussions, I have the chance to make changes to the task where necessary. It also allows an avenue to see other teachers’ perspectives without fear of ridicule, focusing only on how to make assessment better for students. According to Schmoker (2001), as quoted by Ken O’Connor, “When teachers collaboratively review assessment data for the purpose of improving practice to reach measurable achievement goals, something magical happens.”
**7 Ways to Assess Effectively FOR Learning** [|by Hedda Tan]
 * 1. **Know your students well and how they interpret errors.**
 * 2. **Be clear with your expectations, whether they are products, performances or any other evidence of learning.**
 * 3. **Involve students in the assessment process.**
 * 4. **Be a motivator.**
 * 5. **Give effective feedback. Praise less and describe expected results more.**
 * 6. **Base your feedback on a triangulation of evidence.**
 * 7. **Be a reflective practitioner who is involved in professional dialogue.**

A key aspect of Web 2.0 is describing content through folksonomies and evaluating content via user-rating. A folksonomy is a user-defined taxonomy – a taxonomy where the user defines the categories by making up a series of tags to describe information. User-ratings permit individuals to ‘score’ a piece of information (which might be a web page or an individual post in a blog). Tags are used for searching, and there is normally a mechanism for displaying the most highly rated contributions. These features of Web 2.0 can be used in an assessment context. Tagging can be used to archive and retrieve assessment material (either from your own archive or from a shared archive). User-ratings can be used to identify the most popular sources of information or assessment items.

||~ Web service ||~ Example ||~ Possible uses ||~ Formative ||~ Summative ||~ Self ||~ Peer ||~ Group ||
|| Personal portal || Netvibes || Evidence organisation || x || x || || || ||
|| Calendaring || Google calendar || Assessment scheduling || || x || || || x ||
|| E-mail || Google mail || Communication with assessor; evidence storage || x || x || || || ||
|| Search engine || Live search || Evidence discovery || x || x || || || ||
|| RSS || Bloglines || Evidence discovery || x || x || || || ||
|| Newsgroups/forums || Google groups || Evidence discovery; peer support; reflection || x || x || || || x ||
|| Social bookmarking || Furl || Evidence collection || x || x || || || ||
|| Blogs || Wordpress || Reflection; log book/diary || || x || x || x || x ||
|| Online storage || Box.net || Evidence storage || x || x || || || ||
|| Photo storage || Flickr || Evidence storage || x || x || || || ||
|| Wiki || Pbwiki || Collaborative working; group work; projects || x || x || || x || x ||
|| Instant messaging || Live messenger || Authenticating evidence || x || x || || x || x ||
|| VOIP (incl. video) || Skype || Authenticating evidence; oral assessment || x || x || || x || x ||
|| Word processing || Google docs and spreadsheets || Collaborative working; group work; projects || || x || || || x ||
|| Spreadsheets || Google docs and spreadsheets || Result calculation and reporting; collaborative working || || x || || || ||

**Table 1: How Web 2.0 can be used for assessment**
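The tagging and rating mechanisms described above can be sketched as a small data structure. The following is a minimal illustration only; the class, method and item names are hypothetical and do not correspond to any real Web 2.0 service:

```python
from collections import defaultdict

class EvidenceArchive:
    """Minimal folksonomy-style archive: items carry free-form user tags
    and user ratings, as described in the text. Names are illustrative."""

    def __init__(self):
        self.items = {}                     # item id -> set of tags
        self.ratings = defaultdict(list)    # item id -> list of scores
        self.tag_index = defaultdict(set)   # tag -> set of item ids

    def add(self, item_id, tags):
        """Archive a piece of evidence under user-chosen tags."""
        self.items[item_id] = set(tags)
        for tag in tags:
            self.tag_index[tag].add(item_id)

    def rate(self, item_id, score):
        """Record a user rating (e.g. 1-5) for an archived item."""
        self.ratings[item_id].append(score)

    def find(self, tag):
        """Retrieve all items carrying a given tag (search by tag)."""
        return sorted(self.tag_index[tag])

    def top_rated(self, n=3):
        """Return the n items with the highest mean rating."""
        mean = lambda xs: sum(xs) / len(xs)
        return sorted(self.ratings,
                      key=lambda i: mean(self.ratings[i]),
                      reverse=True)[:n]

archive = EvidenceArchive()
archive.add("essay-draft", ["assessment", "writing"])
archive.add("field-photos", ["assessment", "evidence"])
archive.rate("essay-draft", 4)
archive.rate("field-photos", 5)
print(archive.find("assessment"))   # both items share this tag
print(archive.top_rated(1))         # the most highly rated item
```

A shared archive would simply let many users call `add` and `rate` on the same store; the tag index then behaves like a folksonomy, with categories emerging from whatever tags users invent.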

**Evidence cycle**
Collectively, the suite of Web 2.0 services provides a rich environment for finding, capturing, describing, organising and sharing evidence for assessment purposes. Web 2.0 services can be considered under these headings.

||~ Step in evidence cycle ||~ Web service ||
|| Evidence creation/discovery || Live search, Bloglines, Google groups, Wikipedia, Answers.com, Google docs and spreadsheets ||
|| Evidence capture || Furl, del.icio.us, Clipmarks, Google mail, Flickr ||
|| Evidence organisation || Box.net, Netvibes, Flickr, Blogger ||
|| Evidence sharing || Furl, Clipmarks, Box.net ||

**Table 2: Evidence cycle**

For example, when undertaking an assessment, a student could use Live Search to search the world wide web for relevant information, subscribe to a number of RSS feeds using Bloglines to monitor appropriate websites, and check Wikipedia for appropriate articles. Relevant web pages could be saved using Furl, or parts of web pages could be grabbed using Clipmarks. Google docs and spreadsheets could be used to pull this information together into an initial report, which can be stored online using Box.net. The whole project can be coordinated using a dedicated home page created with Netvibes, which would include RSS feeds, calendars, instant messaging, e-mail and a range of additional ‘gadgets’ relevant to the assessment task. Throughout this process, students can learn from one another by sharing their discoveries through services such as Furl and Clipmarks, which permit students to subscribe to one another’s archives, or rate archived material to identify the most relevant information.
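The four-step cycle can also be sketched as a simple ordered pipeline, with each piece of evidence recording which service carried it through each stage. This is an illustrative sketch only; the class and field names are hypothetical:

```python
from dataclasses import dataclass, field

# The four steps of the evidence cycle, in order.
STAGES = ["creation/discovery", "capture", "organisation", "sharing"]

@dataclass
class Evidence:
    """A piece of assessment evidence moving through the cycle."""
    title: str
    stage: int = 0                           # index into STAGES
    history: list = field(default_factory=list)

    def advance(self, service):
        """Move to the next stage, noting which service was used."""
        self.history.append((STAGES[self.stage], service))
        self.stage = min(self.stage + 1, len(STAGES) - 1)

# Following the worked example in the text:
item = Evidence("relevant web page")
item.advance("Live search")   # discovered via a web search
item.advance("Furl")          # captured/saved for later use
item.advance("Netvibes")      # organised on a dedicated home page
print(item.history)           # the audit trail of the cycle so far
```

An audit trail like `history` is one way an assessor could later see how a piece of evidence was found and handled, which also bears on the authenticity question discussed in the next section.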

**Validity and authentication**
Web 2.0 can also be used to aid validity and authentication. Validity relates to the effectiveness of the assessment in actually measuring what it intends to measure. An important aspect of validity is the realism of the activities that learners are asked to do – the more realistic the activity, the more valid the assessment. Web 2.0 can improve realism by permitting learners to use real-life tools to perform real-life activities and create authentic artefacts. Learners will already use a range of Web 2.0 technologies in their everyday lives (such as Flickr and Gmail) – so the same tools used for assessment purposes will be natural and authentic – and encourage the use of existing artefacts (which may already reside in these archives) for assessment purposes.

Authenticity relates to the ownership of the evidence – whether it is actually produced by the learner, or by someone else. The inherent intimacy of Web 2.0 will give the assessor an insight into the mind of the learner that is often not possible in a conventional learning environment. The learner’s e-mail messages, forum contributions and blog posts will give a clear indication of the state of the learner’s current knowledge and skills – which will alert an assessor if their submitted work suddenly jumps in quality. More formally, technologies such as Skype permit remote oral questioning of learners to verify that the learner actually understands what s/he has submitted – which will give a good indication that they actually produced it.

**Web 2.0 versus VLEs**
Most of the facilities provided by Web 2.0 are also provided in Virtual Learning Environments (VLEs) such as Moodle. There are pros and cons to using one in preference to the other.

||~ Advantages of VLEs over Web 2.0 ||~ Advantages of Web 2.0 over VLEs ||
|| VLEs provide a consistent user experience. VLEs provide the same tools to all learners. VLEs are not likely to go bust. VLEs provide more control to teachers. || Web 2.0 provides a rich range of services that are continually improving. Web 2.0 provides choice to learners in the tools that they use. Web 2.0 uses the same tools that learners already use. Web 2.0 services are individually better than equivalent VLE services. Web 2.0 is rapidly evolving. ||

**Table 3: Web 2.0 versus VLEs**

The main advantage of a VLE is also its greatest weakness: the consistency of the user experience. Whatever a learner does within a VLE, the look-and-feel will be similar. This consistency is comfortable for learners and (especially) teachers, who only need to learn one system, but bland for learners, since each tool will be inferior to an equivalent Web 2.0 offering. For example, even the most accomplished VLE e-mail system is no match for Gmail. Web 2.0 will become Web 3.0. The evolution of online services will continue, and it is a moot point whether VLEs can keep pace with these developments.

**Challenges**
The biggest advantage of Assessment 2.0 is its familiarity to learners. Assessment 2.0 is Life 1.0 to most young people: it’s what most learners use in their everyday lives. It’s the education system that’s different, with our use of proprietary VLEs and commercial assessment systems. Assessment 2.0 presents challenges to the current system; some are well known (such as the problem of plagiarism) and some less so (such as assessing group work). Web 2.0 (and, by implication, Assessment 2.0) is inherently collaborative, but existing assessment systems are inherently individualistic. Previous attempts at assessing an individual’s contribution to group work have had mixed success, so it remains a challenge for the educational system to come up with a rubric for recognising group work and rewarding individual contributions. Educationalists predict that informal learning will become much more important in the future, and Web 2.0 will be used to capture much of that learning (in the form of MySpace websites, blog postings, photo archives and forum contributions). Assessment must evolve to recognise and appraise these resources.