by Brent Phillips
Master of Education course EDU5P00
Dr. Len Popp, Brock University
July 25, 1996
Ontario’s elementary classrooms of the 1990s are far different communities from the classrooms of a generation ago. Society is demanding accountability from the educational system: parents want proof that their children are learning; Boards of Education want proof that their teachers are effective; the Ontario Ministry of Education wants proof that its policies are sound; taxpayers want proof that their dollars are not being wasted (Raphael, 1993). Teachers are expected to guide students’ learning in new directions with fewer resources, while students are expected to become critical thinkers who are able to collaborate with their peers to solve complex problems affecting themselves and society (Ontario Ministry of Education and Training, 1995a). Clearly, education in Ontario is in the midst of sweeping change, while the central goal of education - student growth - remains constant.
The world has changed enormously in recent history, necessitating modifications to previous curricula and classroom practice. A shift in emphasis has resulted in students needing skills that will enable them to succeed in our constantly changing society (Ontario Ministry of Education and Training, 1995a). As curricula have changed, so have assessment methods.
This paper will examine various assessment styles to determine an appropriate framework from which to draw specific assessment techniques. After analyzing different assessment methods, recommendations will be made on how to best design and implement a new test to measure the computer literacy of elementary students.
The Role of Assessment
In order to improve something, change must occur. The process by which change may occur consists of three essential stages. First, an assessment of the current system needs to be undertaken. Data which reflect the present conditions in the system need to be collected. When talking about assessment, a distinction must be made between assessment that is done for accountability or the improvement of programmes, and individual student assessment (Earl, 1994; NCSET, 1992, as cited in Linn, 1993). Second, the collected data need to be evaluated. At this stage the data are analyzed to examine what is working well within the system and what could be improved, and decisions are made on which aspects of the system need to be changed. The third stage involves the implementation of those decisions, which occurs after the data have been evaluated and change is desired. When one examines this process, it becomes apparent that accurate assessment of the existing system is the critical starting point from which change will eventually occur. It is therefore imperative that the tools we use to assess student learning reflect the student’s actual level of performance. It follows that any large-scale assessment attempt at the Ministry level should reflect the actual level of performance of the province’s students.
It may seem obvious to state that assessment must be reflective of a student’s ability, but the question remains of how best to measure that ability. To help answer that question, it is important to examine the differences between product-focused assessment and process-focused assessment.
Product-focused Assessment
In product-focused assessment, what the student produces is what is assessed. Little, if any, attention is paid to how the student produced the end product. In effect, this type of assessment is an all-or-nothing proposition. Examples of this type of assessment include the standardized exams that all Grade 13 students in Ontario wrote until 1968 (Raphael, 1993), the S.A.T. exams in the United States, which universities and colleges still use to help determine admissions, Royal Conservatory of Music performance exams, and the S.R.A. Reading Lab series, which was used to assess elementary reading comprehension. There is evidence that teachers will spend less time engaging students in higher-order thinking skills and will “teach to the test” when standardized testing is used for accountability and performance measures (Herman & Golan, 1991, as cited in United States Department of Education, 1996a). Unless a test is created which completely and comprehensively assesses all aspects of a student’s performance in a way which provides the teacher with accurate data on what the student is doing and why he is doing it, product-focused testing alone will not suffice. As of this writing, no such test exists.
Process-focused Assessment
Whereas product-focused assessment examines the final result of a student’s performance, process-focused assessment examines the entire performance. It is closely related to performance-based assessment, which can be defined as “testing methods that require students to create an answer or product that demonstrates knowledge or skills” (United States Department of Education, 1996b). Metaphorically speaking, it is not just the destination, but the journey, which is of importance. Studies have shown that teachers spend more time teaching students problem-solving and critical thinking skills when using performance-based assessment than they do when using more traditional methods of assessment (United States Department of Education, 1996c). Process-focused assessment goes even further by asking the questions how and why: why did the student choose to organize information in a certain manner; why did the student choose to do what he did next; how did he feel about the results of his decision; why did he feel that way? Process-focused assessment seeks to collect information which will help evaluators discover why the student is engaged in certain actions. One does not exclusively assess achievement at various stages of learning; one also assesses how the subject has progressed to each new level of discovery or performance. The central tenet which differentiates process-focused assessment from product-focused assessment is that the former seeks to discover how a student is able to perform a task, while the latter examines only what the student is able to produce.
The Common Curriculum
As of September 1, 1996, educators in Ontario will be required to use The Common Curriculum: Grades 1-9 (Ontario Ministry of Education and Training, 1995a) to guide their planning and instruction in order to have each student achieve designated outcomes and essential learnings within specified time periods. It identifies specific learning outcomes that every student in Ontario should achieve by Grades 3, 6, and 9. By the end of Grade 9, students should be able to demonstrate success in ten learning outcomes that the Ontario Ministry of Education and Training identifies as “essential” (Figure 1).
Figure 1. Ten Essential Learnings for Students by the end of Grade 9
• solve problems and make responsible decisions using critical and creative thinking
• use technology effectively
• demonstrate an understanding of the world as a set of related systems
• apply the skills needed to work and get along with other people
• participate as responsible citizens in the life of local, national, and global communities
• explore educational and career opportunities
• apply aesthetic judgement in everyday life
• make wise and safe choices for healthy living
• use the skills of learning to learn more effectively
These outcomes are so broadly worded that assessing them is difficult. Fortunately, the tools needed to gather accurate information on a student’s level of achievement on these outcomes are available to teachers in the form of process-focused assessment techniques. Product-focused assessment methods are neither sufficient nor appropriate for the common curriculum objectives. A closer look at the wording of these “Ten Essential Outcomes” reveals very process-oriented action words. Words such as “demonstrate”, “apply”, “communicate”, and “explore” paint a picture of an active learner in a symbiotic relationship with those around him. It is within this context that assessment must be carried out.
The Justification for a Province-wide Ministry Assessment of Computer Technology Skills in Elementary Students
The Ontario Ministry of Education and Training has embarked upon designing and implementing assessment tools for each of the four programme areas: The Arts; Language; Mathematics, Science and Technology; and Personal and Social Studies: Self and Society. It is hoped that these assessments will account for all aspects of the common curriculum and be able to give a clear indication of growth patterns in Ontario’s students. Proficiency in speaking, writing, mathematics, scientific inquiry, and the understanding of one’s role in society are all necessary to succeed in the world today, and Ministry assessments will attempt to help track how Ontario is succeeding in producing students with these skills.
By dividing the common curriculum into four separate programme areas, and assessing them as such, the Ontario Ministry of Education and Training has effectively narrowed the range of knowledge, skills, and values it will assess. By doing so, it has neglected to signify the importance of an area critical for functioning in today’s society - computer technology. It is incomprehensible that information technology skills are addressed by only 15.85% (29 of 183) of the specific outcomes set forth by the common curriculum (Ontario Ministry of Education and Training, 1995a). In fact, the words “computers” or “technology” are mentioned specifically in just 7.10% (13 of 183) of the outcomes! For an area of such societal importance, this lack of emphasis on computer literacy in the document guiding the instruction of today’s children is disappointing.
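The proportions cited above are straightforward arithmetic on the outcome counts given in the text (29 and 13 of 183 specific outcomes); a minimal check:

```python
# Share of Common Curriculum outcomes touching information technology,
# using the counts cited in the text.
TOTAL_OUTCOMES = 183
IT_RELATED = 29          # outcomes addressing information technology skills
EXPLICIT_MENTIONS = 13   # outcomes naming "computers" or "technology"

def pct(part: int, whole: int) -> str:
    """Return part/whole as a percentage string with two decimals."""
    return f"{100 * part / whole:.2f}%"

print(pct(IT_RELATED, TOTAL_OUTCOMES))        # 15.85%
print(pct(EXPLICIT_MENTIONS, TOTAL_OUTCOMES)) # 7.10%
```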
The magnitude of the importance of computer literacy in today’s society is illustrated by Logan (1995), who states that, “the use of computers to communicate and process information represents a new language with which educators will have to deal....” (p.64). He explains how computing is a “fifth language”, joining speech, writing, mathematics, and science on the evolutionary chain of languages. He further postulates that as the information load from a new language increases to a level that is unsupportable by that language, yet another new language emerges that will enable better organization and additional storage of the increased amount of information being produced by society (Logan, 1995). In order to succeed in this changing world, students will need to learn this new language of computing.
The need for an assessment of computer literacy in Ontario’s students, in addition to the four programme areas of the common curriculum, becomes justified when one understands its importance in today’s society.
Considerations for an Assessment Tool for Elementary Computer Literacy
There are numerous considerations that need to be taken into account before embarking on designing and implementing a new assessment tool. Care must be taken to consider student diversity, economic conditions, political realities, the role of the teacher, communication, and the interpretation and use of any new assessment tool (Earl, 1994). Before designing a specific assessment for computer literacy at the elementary level, one must take into consideration the applicability of various process-focused assessment techniques in the context of computer literacy, factors that will affect the validity of collected data, the generalizability of any new assessment tool, the rate of technological progress, economic factors, and its relevance to classroom programme and instruction.
1. Applicability of various performance-focused assessment tools.
a. Student self-assessment
Recommendation: Self-assessment should not be part of a large-scale assessment undertaking.
The Common Curriculum emphasizes that students must take an active part in the assessment of their own work and that of their peers (Ontario Ministry of Education and Training, 1995a, 1995b). This method of assessment should be an important component of any classroom assessment because it can demonstrate the thought process that a child has engaged in during learning and reveal information to the teacher which can help with programming. For a relatively anonymous collection of data such as a province-wide assessment, however, student self-assessment would not serve as an accurate reflection of ability. Most students at the elementary level do not possess the skills necessary to assess their performance accurately.
b. Portfolios
Recommendation: Portfolios should not be used for assessing computer literacy in a large-scale assessment due to the sheer sample size and the time required to establish valid portfolios.
Portfolios, when done properly, can track a student’s progress quite accurately. Unfortunately, portfolios are often used as collection boxes for product only. While this is fine for demonstrating whether or not a child is progressing, it fails to help explain why the child is or is not progressing.
c. Journals
Recommendation: Journals are an effective method of assessment for the classroom teacher, but would not be appropriate for a large-scale assessment.
Journals are an effective way of having a student explain what he or she has accomplished, learned, taught to others, or attempted. Journals can be used effectively to assess the values that students attach to their learning. They are reflective tools, but they require sufficient writing skills to be successful.
d. Teacher observation
Recommendation: Teacher observation should be the key component of a large-scale ministry assessment.
Teacher observation remains one of the most frequent and reliable methods of student assessment at the elementary level (Ontario Ministry of Education and Training, 1995a; Halton Board of Education, 1992). Through observation, a teacher can note the actions, or words, of a student and record them for future evaluation. It can be conducted with or without students being aware that they are being observed. Teachers can use guides, checklists, or sets of questions to keep their observations focused (Ontario Ministry of Education and Training, 1995a).
e. Conferences and interviews
Recommendation: Interviews should be part of a large-scale ministry assessment.
Conferences provide the teacher with a time to discuss ideas and explore the student’s thinking process and understanding relating to concepts that are being learned. Interviews consist of planned questions to which the student answers verbally. Teachers can structure student interviews to enable assessment of specific learnings.
2. Factors which will affect the validity of collected data.
a. Assessment results must reflect students’ actual level of achievement
It would be pointless to spend the time, money, and energy collecting information which, due to the design or inappropriate choice of the assessment tool, does not measure the actual range of students’ abilities. Care must be taken to ensure that any potential province-wide assessment tool does not degenerate into a standardized measure of student achievement because, “they work against upward mobility for many children whose real achievement is not reflected in these tests” (Meaghan & Casas, 1995, p.5). One must not lose sight of the fact that accurate assessment should be of benefit to the students as well as the bureaucrats.
b. Assessment results need to be reported clearly and accurately
In cases where data are to be collected by one person, then passed on for evaluation by another, it is imperative that the assessment is readable and understandable. In large-scale testing, where thousands of assessments may need to be evaluated, judges may not have the time to seek clarification of ambiguous data. Before any large-scale assessment based on process-focused techniques is undertaken, judges need to be confident that each teacher’s observations are comparable to how another teacher would have assessed the same student in the same situation. The Ministry of Education would have to implement guidelines detailing effective observation techniques in advance of the actual assessment. Teachers would have to be made aware of what processes to observe and which learnings to assess.
3. Generalizability of any new assessment tool
The quality of the technology that is available to students varies so greatly between individual schools and between Boards that preparing a provincial assessment tool may seem an impossible task. The challenge lies in creating a tool that will be open-ended enough to allow for the variability of resources between classes, yet allow even the most technologically up-to-date classroom a chance to use its computers to their fullest potential. The assessment must be structured in such a way that it will be possible to measure a student’s ability and report it within the context of the resources available.
4. Rate of technological progress
Computer technology is advancing at a dizzying rate. New innovations and applications in computing are continuously pushing technology - and users - to the limit. Computers labelled as cutting edge two years ago are now being described as merely adequate. Computers from five years ago are almost obsolete in terms of product and software support. Any assessment tool that is created must be treated as being continually “in development”. There is no way to know what skills will be valued two years from now, let alone five years from now. Any assessment of computer literacy must ensure that students and teachers report not only the skills that can be demonstrated, but also the values that the students attach to what they are accomplishing with technology. The Ministry must also be willing to spend the time and the money to keep the assessment tool current.
5. Assessment must be economical, both in terms of time and money
There is general agreement that process-focused assessment is a more accurate method of assessment, yet many teachers still use product-focused measures to help them evaluate student learning. There could be several reasons why product is sometimes the only element still being used in assessing students: it is easier to evaluate; it gives a specific score which can be compared more easily to past and future results; it is cheaper to administer on a large-scale basis; it can be assessed more quickly. These are the realities that must be balanced with the ideal. Computer literacy assessment needs to be undertaken in a manner which attends to cost and time. Proposed assessment tools will be deemed inoperable if the cost of conducting them is too great.
6. Assessment needs to be relevant
One must not lose sight of the fact that the reason for assessment is to monitor how instructional practices are affecting learning and student growth. Teachers report using assessment both to monitor the effectiveness of their instruction and to program for students (Halton Board of Education, 1992). They are accountable for their instruction in the classroom, and a good assessment tool will be one whose results will help a teacher improve. For this to happen, educators must “value the assessment as an accurate reflection of what students know and can do” (United States Department of Education, 1996c). Assessment will not lead to change if the teacher does not believe that the information is reflective of what actually happens in the classroom.
A New Tool for the Assessment of the Computer Literacy of Elementary Students in Ontario
Creating a valid assessment tool that reflects student ability, is cost-effective, produces results which are relevant to the classroom teacher, and gives the Ontario Ministry of Education and Training data on how well students across the province are performing in the area of computer literacy appears possible. The Ontario Ministry of Education and Training would be wise to adhere to its own recommendation that, “assessment must involve the use of a wide variety of methods so that the evaluations of students’ achievement is as accurate as possible” (1995a, p.21). Checklists, interviews, observation, and open-ended assignments would be useful methods of collecting data on computer literacy.
Listed below are criteria which this author believes should be a part of any attempt to create a province-wide assessment of elementary computer literacy, based on the analysis of various assessment techniques.
The assessment for elementary computer literacy should include:
1. An open-ended assignment which can be produced with the computer technology available to the student.
It should be possible to complete the project with any level of computer availability. It could be done exclusively with technology, or with minimal computer resources. The goal would be to have the student use information technology skills as much as possible, given the resources available to complete the task. The student would be assessed throughout the entire assignment, with the collected data focused on the “why’s” and “how’s” of the process. The key to this component would be teacher observation. The teacher would observe the student at various stages during the task and record observations of how the student is working through the problem. As well, a checklist of computer skills - such as turning the computer on/off, printing a document, using the mouse to navigate menus, keyboarding, creating art using a drawing programme, using a database, and using a word processor - should be designed and provided so the teacher can note the specific skill level of the student using the technology. This would allow for student creativity in completing the task, while at the same time allowing the teacher to assess the specific skill level of the student. There would be two components to the teacher observation section: general comments, and specific abilities recognition.
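As an illustration only, such an observation record might pair the skills checklist with free-form general comments. The skill names below are those suggested in the paragraph above; the record structure itself is hypothetical, not a Ministry format:

```python
# Hypothetical teacher-observation record for one student.
# Skill names come from the checklist suggested in the text;
# the data structure is an illustrative sketch only.
from dataclasses import dataclass, field

SKILLS = [
    "turn the computer on/off",
    "print a document",
    "use the mouse to navigate menus",
    "keyboarding",
    "create art with a drawing programme",
    "use a database",
    "use a word processor",
]

@dataclass
class ObservationRecord:
    student: str
    skills_observed: dict = field(default_factory=dict)   # skill -> demonstrated?
    general_comments: list = field(default_factory=list)  # free-form notes

    def note_skill(self, skill: str, demonstrated: bool) -> None:
        """Record whether a checklist skill was demonstrated during the task."""
        if skill not in SKILLS:
            raise ValueError(f"unknown skill: {skill}")
        self.skills_observed[skill] = demonstrated

# Example use during an observation session:
record = ObservationRecord(student="A. Student")
record.note_skill("print a document", True)
record.general_comments.append("Chose the drawing programme for the title page.")
```

Keeping the checklist separate from the general comments mirrors the two observation components described above.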
2. A structured interview with the student.
The interview would follow the completion of the task. The focus of the interview would be a discussion of the student’s performance. The teacher would seek clarification of or explanations for behaviours observed during the task. Whereas the teacher observations would involve conjecture as to the reasons for a student’s actions, the interview could provide validation of those suppositions. The main goal of the interview would be to determine why the student performed the way he did.
The second part of the interview would include specific questions, provided by the Ministry of Education, relating to the student’s general use of technology. These would be questions which are not specific to the student’s just-completed task. There would be questions to assess the value the student places on computers (e.g., Why do you like doing projects using the computer?), and the knowledge the student has about computers (e.g., How do you turn off the computer properly?).
In addition, specific outcomes that the elementary student should be able to achieve need to be provided on the assessment tool for use during both the interview and the initial observation process, so that the teacher can watch for demonstrations of key learnings. The identification of the level of success would serve as a base for further development.
3. A commitment to re-evaluate the assessment tool before each round of data collection.
Any large-scale assessment process for computer literacy would need to undergo continuous re-evaluation to ensure that it is structured so that it can assess students’ abilities using the most current technology available in the classroom.
If it is indeed possible to implement a province-wide assessment of information technology skills at the elementary level, it could provide us with data which could be used to dispel the public view that our schools are stagnant “fact factories” which are resistant to change at all costs. In many classrooms, teachers are learning about technology along with their students, and in many cases it is difficult to tell who is the actual instructor - the student or the teacher. Traditionally, parents have been very concerned with reading, writing, and math skills because those are the “essential” tools that people need in order to be successful. By conducting a regular assessment of information technology skills, the government would be demonstrating that it considers computer literacy as important for students’ futures as reading, writing, and math.
An assessment of this magnitude could also serve to identify demographic trends in computer literacy that warrant further examination. It may be discovered, for example, that specific Boards of Education are producing students with exceptional computer skills. Collaboration between Boards of Education could lead to more teachers having access to the strategies that have produced the most effective student learning.
Because this proposed model of assessment relies heavily on in-class observation of students, further investigation may be needed to identify the reliability of teacher observation in this context. There needs to be an examination into whether teachers have the skills and knowledge to effectively judge students using technology.
As well, an actual cost analysis should be conducted to determine the economic feasibility of attempting such an assessment. It may be found that this model proves too expensive to administer, in which case modifications would have to be made to the existing proposal.
There is no way of knowing what specific technology skills elementary students of today will need in the future. What is known is that computers have become integrated into most facets of our society and have begun to be integrated into most subject areas in the classroom. Just as language is a part of mathematics and science, and mathematics and science are parts of language, so too has technology become integrated into many aspects of learning. As educators, we need to know that our students are effectively learning not only the traditional skills of reading, writing, and arithmetic, but the new “basic” - computer literacy. The assessment of student achievement in information technology will help Ontario’s teachers, students, and Ministry of Education determine the effectiveness of existing technological programmes.
| Product-based Assessment | Process-based Assessment |
|---|---|
| passive learning | active learning |
| focus on individual achievement | focus on individual’s role in group/society/environment |
| teacher delivers curriculum through instruction | teacher ensures each child learns curriculum |
| assessment often irrelevant to learning | assessment relevant to learning |
| lower-level, simple | more difficult and complex |
| final product | entire learning process |
| does not address individual learning styles | can be tailored to meet individual needs |
| tangible results are valued | explanations and demonstrations of learning are valued |
| less costly to administer on a large scale | more costly to administer on a large scale |
| less time required | more time required |
References
Earl, L.M. (1995). Assessment and accountability in education in Ontario. Canadian Journal of Education, 20(1), 45-55.
Halton Board of Education. (1992). Student assessment and evaluation - a review of current practices. [Research Summary]. Burlington, ON: Author.
Linn, R.L. (1993). Educational assessment: Expanded expectations and challenges. Educational Evaluation and Policy Analysis, 15(1), 1-16.
Logan, R.K. (1995). The fifth language. Toronto: Stoddart Publishing Co. Limited.
Meaghan, D., & Casas, F. (1995, October/November). Let’s look before we leap: Standardized testing and special education students. Canadian Education Association Newsletter, 5.
Ontario Ministry of Education and Training. (1995a). The common curriculum: Grades 1-9. Toronto: Queen’s Printer for Ontario.
Ontario Ministry of Education and Training. (1995b). The common curriculum: Provincial Standards Mathematics, Grades 1-9. Toronto: Queen’s Printer for Ontario.
Raphael, D. (1993). Accountability and educational philosophy: Paradigms and conflict in Ontario education. Canadian Journal of Education, 18(1), 29-45.
United States Department of Education. (1996a, Spring). Assessment requirements under Title I of the Elementary and Secondary Education Act. Improving America’s Schools: A Newsletter on Issues in School Reform [On-line]. Available WWW: http://inet.ed.gov/pubs/IASA/newsletters/assess/pt2.html
United States Department of Education. (1996b, Spring). What are promising ways to assess student learning? Improving America’s Schools: A Newsletter on Issues in School Reform [On-line]. Available WWW: http://inet.ed.gov/pubs/IASA/newsletters/assess/pt3.html
United States Department of Education. (1996c, Spring). What the research says about student assessment. Improving America’s Schools: A Newsletter on Issues in School Reform [On-line]. Available