Running head: EVALUATION PLAN

Evaluation Plan for a Course Dealing with People First Language

Janet Bowen

ED7505/Final Project/Capella University

Dr. Sonja Irlbeck

March 12, 2005

Introduction

Many individuals with disabilities depend on service providers and support staff in order to lead somewhat independent lives within the community. Additionally, individuals with disabilities interact with members of the community, often on a daily basis. An issue not often addressed is the effect of language on individuals. One way to address this issue is an online course that explores it and provides information on People First Language and disabilities. Once training has occurred, it must be evaluated to determine whether it is effective and worth the effort put into it. This paper discusses some ways of evaluating the course.

The audience is, according to Mager (1997), a key factor when choosing a delivery method, and for this population the issues that need addressing include accessibility. The audience has a variety of abilities and learning styles. While the majority of the course is geared toward helping support staff understand some of the issues surrounding individuals with disabilities, it is hoped that some individuals with disabilities will also take advantage of this learning module to gain a better perspective on their own disabilities and those of others. Mager (1997) also lists the development of objectives, content, and materials for the course as prerequisites for choosing the delivery method.

This course is a short online training module dealing with People First Language. It provides information about the use of People First Language, information about various disabilities, and an explanation of how self-esteem is affected by language usage. The course is for support staff who work in residential facilities, independent living facilities, day programs, and other settings where support staff are present. The majority of individuals currently working in these facilities have limited education, with the average level being a high school diploma or equivalent. The course will be evaluated using a variety of evaluation tools.

The development of the evaluation tools is based on best practice for evaluation. The four levels that Kirkpatrick (1998) discusses are included, and the evaluation process addresses each of them along with Return on Investment (ROI). Other forms of evaluation discussed are the skills, knowledge, attitudes, and organization (SKA) presented by Mager (1997), as well as an evaluation of the delivery method, also suggested by Mager (1997). An evaluation of a training or learning module needs to be understandable to the evaluator in order to ensure appropriate, accurate answers. Blunt (2005) developed an evaluation comparable to Kirkpatrick's (1998) Level 4 evaluation using descriptive adjectives.

For this, Blunt encourages the learners to help develop the evaluation by asking them for the descriptive adjectives. Involving the learners in the development creates buy-in that translates into more responses. Kirkpatrick (1998) holds that a one hundred percent response rate is needed to achieve a good evaluation. This is a difficult task, so it is suggested that incentives be offered to encourage participation (Indiana University, 2004). In order to develop proper evaluation tools, the purpose of the evaluation has to be decided first. Once it is decided, development of the evaluation follows a prescribed manner to evaluate the specific items.

Methods

 

Deciding what is to be evaluated is the first step of the development process, and as development of the course continues, evaluation tools should be included. Whether this is a valid course is determined through evaluation at different levels. Management looks at these evaluations in order to judge whether there is a need for the training and whether the effort is worth the cost. These steps are all part of sound instructional design.

Each of the following tools looks at a specific area of the training. From these evaluations, stakeholders can determine whether the cost of providing this training is worth the investment. The goal of these evaluations is to show the training's worth; if a problem is found, adjustments can be made. Ultimately, the analysis of the information gained from the evaluations must be accurate, or the development fails (Clark, 1995).

Tools

 

One item examined is whether the objectives are being met by the delivery. To do this, the skills, knowledge, attitudes, and organization (SKA) are evaluated against the desired outcomes and goals of the training. The tool chosen to evaluate the SKA is found below. Keeping the description of the course in mind, these items tell whether the goals and outcomes are appropriate. The rationale behind the development of this tool is to see how each item applies to this course specifically.

The evaluator uses the following tool to judge whether each item answers the goal described, and is encouraged to add any comments that clarify the response to the question. All the information gathered with this evaluation tool will help determine whether the development of the course is appropriate.

Questions that require a subjective view, or a judgment based on the evaluator's perceptions, will vary by evaluator. These influences show up especially in questions dealing with presentation or perceived expressiveness (Arbuckle & Williams, 2003). Because of this added influence, an objective view of the results must be taken in order to make use of the information. Questions that ask whether something "met expectations" will be subjective, and the answers will vary.

 

For questions that are quantitative in nature, a more objective evaluation will occur. Reeves (1996) and Leonard (n.d.) both discuss the need to eliminate some of the subjective influences from the evaluation. The wording of a question can determine whether the answer is influenced by perception. Questions that ask, for example, whether the information was understandable have a subjective element, but looking at the results as a whole will show whether the information is presented in an appropriate manner. The outcomes should not vary much across evaluators.

Table 1:
SKA Checklist

Skill (each item is answered Yes or No, with space for an explanation):

1. Are the skills taught appropriate for the outcome?
2. Is the prior knowledge or prerequisite skill present, or does it need to be taught?
3. Are these skills relevant to the learner's environment?
4. Can these skills be used in the workplace?
5. Is there follow-up training or re-evaluation of skill performance within a set period?
6. Does this address the target audience?
7. Are the evaluation and assessment for this skill appropriate and in the correct form?


Table 1 cont:
SKA Checklist

Knowledge (each item is answered Yes or No, with space for an explanation):

1. Is this appropriate knowledge for the desired outcome?
2. Does this knowledge build on prior or general knowledge available to all participants?
3. Is the knowledge relevant to the learner's environment?
4. How will this help the learner transfer skills to the work environment?
5. Is this current and valid information?
6. Is all the relevant information for this course available within the course?
7. Are the assessments assessing the correct knowledge?

Attitudes (each item is answered Yes or No, with space for an explanation):

1. Who is presenting the information/learning module?
2. What are the biases of the content?
3. Do the biases affect how the learner understands the information/content?
4. Is the content learner centered or instructor/instruction centered?
5. What is the purpose of the learning module?
6. What reasons are being used to justify this learning module?
7. What learner characteristics are being taken into consideration?


Table 1 cont:
SKA Checklist

Organization (each item is answered Yes or No, with space for an explanation):

1. Is the organization of the material done in a logical manner for the learner?
2. Does the organization of the content build on previous knowledge or learning?
3. Is access to the information, help, and other items consistent throughout the training?
4. Is the organization of the information easy to understand, and does it make sense?
5. Does the organization encourage learning?
6. Is there built-in feedback for the learner?
7. Is the feedback timely and appropriate to the learner and the course?
8. Are clearly stated objectives and goals for the course present?


Mager (1997) encourages the development team to analyze the delivery method at the beginning of developing the course. The delivery method does not determine the content, but it does influence how the content is presented (Belanger & Jordan, 2000). The next tool evaluates the delivery method proposed for the course on People First Language, which is web-based training (WBT), or Internet delivery. This tool evaluates the course, content, and materials in relationship to the delivery method.

The evaluator is asked to provide information including the course name, the date of evaluation, and the course instructor. The evaluator is also given the following instructions: Please mark the answer appropriate for each question. Answer any additional questions that are part of an item. Please add any comments at the end of the evaluation. From this evaluation, the team can decide whether there are other options for delivering the course or whether an alternative means is needed.

Table 2:
Delivery Method Evaluation (each item is answered Yes, No, or Unsure)

Objectives
- Does the delivery method allow the objectives to be met? If not, what is needed?
- Does the delivery method hinder some of the objectives? If yes, how?
- Does the delivery method allow for feedback on the objectives?
- Does the delivery method allow for clear assessment of the objectives?
- Does the delivery method allow practice to meet the objectives?

Learners
- Is the delivery method accessible to all the learners?
- Is the delivery method compatible with adaptive equipment (e.g., screen readers, CCTV, aides, word prediction, scanners)?
- Is the delivery method appropriate for the learners (e.g., learners who are blind, deaf, physically impaired, or learning disabled)?
- Is the delivery method able to accommodate a variety of learning styles (e.g., auditory, visual, kinesthetic)?
- Does the delivery method allow the learner to interact with the instructor and classmates?

Content
- Does the delivery method allow the content to be presented in different ways?
- Does the delivery method help reinforce the content?
- Does the delivery method allow content and materials to be used without revision or loss?
- Does the delivery method allow the content to be updated or modified?
- Does the delivery method allow for adding outside information to the content?


Table 2 cont:
Delivery Method Evaluation (each item is answered Yes, No, or Unsure)

Cost
- Is this delivery method cost efficient?
- Is there another delivery method that will meet most of the criteria?
- Is there a way to deliver the training that meets the needs and is less costly?

Presentation
- Does the delivery method allow for inclusion of different media in the presentation?
- Does the delivery method help in the selection of media for the presentation?


The previous evaluations have looked at the delivery method and at how the course will meet its objectives. These are part of the analysis portion of instructional design (IEEE, 2002). Once the need has been analyzed, development of the course occurs. After development, the course is implemented, but evaluation does not stop there (Mager, 1997). Continued evaluation needs to occur so that the course offering remains current and keeps meeting the needs of the organization (Kirkpatrick, 1998). While different evaluations can be performed, Kirkpatrick (1998) offers four levels for evaluating the course after the learner has taken it.

Kirkpatrick's Levels of Evaluation

 

Kirkpatrick's four levels of evaluation are Level 1 (reaction), Level 2 (learning), Level 3 (transfer), and Level 4 (results) (Winfrey, 1999). Each of these levels looks at a specific item within the course, and each is important in creating and maintaining a viable learning module. In addition to these evaluations, a fifth evaluation is used: return on investment (ROI), an indicator mainly used to justify the implementation of a training course.

The next section of the paper looks at these forms of evaluation. The first is Level 1, reaction. This evaluation targets the reaction of the learners to the course, the materials, and the instructor (Kirkpatrick, 1998). Winfrey (1999) states that the outcome of this level of evaluation has an impact on the outcome of the Level 2 evaluation. From this level, information is gained on how well the student liked the course (Kirkpatrick, 1998). The tool below is an example of a Level 1 evaluation for the People First Language course.

The evaluator is instructed to fill out the form in the following manner: Please take the time to evaluate the training module in which you just participated. Circle the number that best describes how you feel about each statement; 5 is the most satisfactory and 1 is the least satisfactory. Please add any comments that you feel will help improve this course. In addition to the instructions, the evaluator is asked to provide the course title, the instructor, and the facility where the course was presented. Table 3 below shows the form.
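As a supplement to the form, the following minimal sketch (in Python, with hypothetical item names and ratings) shows one way the circled ratings could be tallied once the forms are collected, so the designer can see the average satisfaction for each item. It illustrates only the summarizing step and is not part of the instrument itself.

```python
# Minimal sketch: averaging Level 1 (reaction) ratings per item.
# Item labels and ratings below are hypothetical sample responses on the 1-5 scale.
from statistics import mean

responses = [
    {"knew_material": 5, "encouraged_participation": 4, "timely_feedback": 3},
    {"knew_material": 4, "encouraged_participation": 5, "timely_feedback": 2},
    {"knew_material": 5, "encouraged_participation": 4, "timely_feedback": 3},
]

# Average each item across all learners; low averages flag areas to revise.
for item in responses[0]:
    avg = mean(r[item] for r in responses)
    print(f"{item}: average satisfaction {avg:.1f} out of 5")
```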

Table 3:
Level 1 Reaction evaluation

Instructor Evaluation
(Circle a satisfaction level from 5 to 1 for each item; space is provided for comments.)

The instructor...
1. ...knew the information well enough to teach it.
2. ...encouraged me to participate.
3. ...provided timely and relevant feedback to me.
4. ...answered my questions/concerns in a timely manner.
5. ...encouraged discussion on the subject matter.
6. ...stated expectations clearly.
7. ...led the course in a way that met my expectations.

Table 3 cont:
Level 1 Reaction evaluation

Content Evaluation
(Circle a satisfaction level from 5 to 1 for each item; space is provided for comments.)

1. The content was easy to understand.
2. I can use the information at work.
3. I was able to learn something I did not know before.
4. I gained a better insight into the issues presented.
5. I understood the information the way it was presented.
6. Expectations and outcomes were clearly stated.
7. The training met the expectations that I had.

Facility Evaluation
(Circle a satisfaction level from 5 to 1 for each item; space is provided for comments.)

The facility...
1. ...was easily accessible.
2. ...had the equipment needed to access the training program.
3. ...had a help desk/technician for problems.
4. ...was comfortable and encouraged learning.

Further Comments

Please complete the following statements so that we have a better idea of how to change this course to help other learners:

1. I liked this about the course: _______________________________________________________________________
2. I did not like this about the course: _______________________________________________________________________
3. I will use the information I learned at work by: _______________________________________________________________________


From this evaluation, the designer can change some of the presentation if necessary. The designer also gains information about things outside of his or her control, such as the instructor's abilities. The next step in the evaluation process is a Level 2 evaluation of learning. The learning evaluation defines learning as an observable change in behavior. In order to perform this type of evaluation, it is optimal that both a pretest and a posttest occur.

The next table is a Level 2 evaluation designed for the course discussed in this paper. Both of the following surveys are given at the beginning of the course and again at the end. This gives information to both the learners and the instructor, gauging what has been learned and where gaps in knowledge remain. Included with the evaluation are instructions with a rationale for why the learner would want to do the assessment. The instructions are as follows:

Please answer the following questions by circling the correct answer. Then answer the "Why?" question with a short explanation. All the answers are confidential and will not be shared with others. This will help the instructor provide a good course. It will also help you see what you have learned.
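Because the same survey is given before and after the course, the learning gain can be estimated by comparing the two sets of answers. The following minimal sketch (Python; the question labels, "desired" answers, and learner responses are invented for illustration) shows how that comparison might be scored; the survey itself appears in Table 4 below.

```python
# Minimal sketch: comparing pre- and post-course answers on the attitude survey.
# The desired answers and the learner's responses are hypothetical examples.
desired = {"words_affect_people": "yes", "handicapped_preferred": "no",
           "careful_what_i_say": "yes", "person_in_wheelchair": "yes"}

pre_answers  = {"words_affect_people": "no",  "handicapped_preferred": "yes",
                "careful_what_i_say": "yes", "person_in_wheelchair": "no"}
post_answers = {"words_affect_people": "yes", "handicapped_preferred": "no",
                "careful_what_i_say": "yes", "person_in_wheelchair": "yes"}

def score(answers):
    """Count how many questions match the desired (People First) response."""
    return sum(answers[q] == desired[q] for q in desired)

pre, post = score(pre_answers), score(post_answers)
print(f"Pretest: {pre}/{len(desired)}  Posttest: {post}/{len(desired)}  Gain: {post - pre}")
```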

Table 4:
Level 2 Learning evaluation part 1

Self Attitude Survey
(Yes/No questions: circle Yes or No, then answer "Why?" with a short explanation.)

1. The words I choose affect other people.
2. It is better to refer to a person as handicapped than disabled.
4. I have to be careful about what I say about a person.
5. It is better to say "the person in the wheelchair" rather than "the wheelchair-bound person."


In addition to this evaluation form, there is a second form that is more of a self-assessment, looking at what the learner already knows about the subject. It also has instructions, which follow.

Please answer these questions with a short answer. If you are unsure or do not know the answer, check the "Do not know" column. Like the attitude survey, this helps the instructor provide a good course.

Table 5:
Level 2 Learning evaluation part 2

What I already know
(Short-answer questions; columns are provided for the answer and for "Do not know.")

1. People First language means...
2. We should use People First language because...
3. Some disabilities are...
4. Some of the jobs that people who work in the field of disability rights have are...
5. Some organizations that support individuals with disabilities are...
6. The first thing I see when I meet a person is...
7. When I meet a person using crutches, the first thing I notice is...


The next level of evaluation looks at the transfer of the learning to the workplace. According to Winfrey (1999), some view this level as the truest assessment of the training's effectiveness. If the learning that took place does not transfer into the work environment, it is not necessarily worthless, but it is an added burden on a learner who already faces many stresses on the job. This evaluation looks at the training and its transfer to the situations in which People First Language has the greatest impact.

The following survey will be sent to the trainees, their supervisors, and, depending on their employment status, the individuals they provide support to. To achieve a less biased evaluation, service case managers may also receive the survey, in order to see whether the behavior changes are apparent. The case managers see the interactions from a broader view and may be more impartial observers. Table 6 shows the survey.
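Since the transfer survey goes to several rater groups, it can help to tally the ratings separately by group and compare them; a large gap between self-ratings and observer ratings is itself useful information. The minimal sketch below (Python; the group names, the numeric mapping of the "Prepared" scale, and the ratings are all invented for illustration) shows one way that comparison could be made for a single task.

```python
# Minimal sketch: comparing Level 3 (transfer) ratings across rater groups.
# "Prepared" scale mapped to numbers: Poorly = 1, Somewhat = 2, Very well = 3.
# All group names and ratings below are hypothetical.
from collections import defaultdict
from statistics import mean

ratings = [
    ("trainee", 3), ("trainee", 3),
    ("supervisor", 2), ("supervisor", 3),
    ("case_manager", 2), ("consumer", 2),
]

by_group = defaultdict(list)
for group, value in ratings:
    by_group[group].append(value)

# A large gap between trainee self-ratings and observer ratings suggests bias.
for group, values in by_group.items():
    print(f"{group}: mean preparedness {mean(values):.1f} (n={len(values)})")
```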


Table 6:
Level 3 Transfer evaluation

For each specific task below, the rater answers three questions:
- Prepared: How well did the course prepare the person to perform this task? (Poorly / Somewhat / Very well)
- Use: How often does this knowledge or skill get used on the job? (Seldom / Sometimes / Very often)
- Importance: How important is this skill or knowledge to the job? (Not at all / Somewhat / Very much)

Specific tasks:
- Uses People First Language in interactions with consumers
- Is more considerate of consumers' feelings and rights
- Has a better understanding of the disabilities that the consumers have
- Can help the consumers to lead independent lives
- Is providing opportunities for the consumers to explore their potential
- Gives consumers opportunities to express their rights as individuals
- Is able to perform duties better by using People First Language

 

 

These evaluations have looked at the learning and at how it is used in the workplace. The Level 3 evaluation is usually administered a short period after the training occurred. The next level of evaluation is done up to six months after the training. It provides information that managers and those in power consider important, such as increased production, increased quality, and decreased cost (Winfrey, 1999).

This assessment will be done at two separate times. The first is an analysis to determine the need for the training; the second will be done six months after the initial training to see if there has been improvement. The comparison between the first and the second will indicate whether a benefit was achieved from this training. If possible, a third assessment would be done a year after the training to see whether there was retention. Kirkpatrick (1998) suggests that time be allowed for the results to be achieved, which is why there is such a gap between the training and the first evaluation after it. Since some of the individuals completing the evaluation may be considered low literate, part of the evaluation will employ Blunt's instrument, which uses descriptive adjectives to gain a better perspective on the results.
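Because the Level 4 measures are taken before the training and again at six months (and possibly at one year), the results read most easily side by side. The minimal sketch below (Python; the result names and the counts of "Agree"/"Strongly agree" responses are assumptions, not actual data) shows how the baseline and six-month figures for each organizational result might be compared. Tables 7 and 8 show the instruments themselves.

```python
# Minimal sketch: comparing Level 4 (results) measures at baseline and at six months.
# Counts of "Agree"/"Strongly agree" responses per organizational result (hypothetical).
baseline  = {"uses_people_first_language": 4, "attendance_improved": 6,
             "professional_interactions": 5}
six_month = {"uses_people_first_language": 9, "attendance_improved": 7,
             "professional_interactions": 8}

for result in baseline:
    change = six_month[result] - baseline[result]
    direction = "improved" if change > 0 else "no improvement"
    print(f"{result}: {baseline[result]} -> {six_month[result]} ({direction}, {change:+d})")
```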


Table 7:
Level 4 Results evaluation part 1

Each organizational result is rated: Strongly disagree / Disagree / Don't know / Agree / Strongly agree. For each result, any of the following indicators (but not limited to these) may be observed.

Uses People First Language daily on the job
- The person comes before the disability when discussing a consumer
- References to consumers encourage empowerment of the consumer
- Supportive rather than destructive comments are used in interactions

Uses knowledge about disabilities on the job daily
- General limitations are based on the disability's actual limitations
- Approaches to interactions are based on the disability's influences
- Activity suggestions are influenced by known disability limitations

Attendance improved
- Attendance is consistent with the schedule
- Willing to help out in emergencies
- On time and stays for the full shift

Professional interactions improved
- A professional demeanor is present
- Interactions with all individuals are professional
- Documentation reflects professionalism

Interactions with consumers improved
- Open communication is occurring
- Reasonable explanations are given when requested by consumers
- Consideration of the consumers' needs is present

Interactions with service providers improved
- Professional interactions occur when interacting with service providers
- Consumers' needs are addressed professionally with service providers
- Concerns about consumer issues are professionally addressed

More job satisfaction present
- Fewer absences because of stress-related health issues
- Employees appear to enjoy their jobs
- Less conflict between employees, supervisors, and consumers

Attitudes have changed positively
- Empowerment of the employees
- Empowerment of the consumers
- Problem-solving issues have diminished

 

Table 8:
Level 4 Results evaluation part 2

Each organizational result is rated by selecting from the descriptive adjectives listed:

The job: Like / Bored / Interesting / Dead-end / Helpful / Unsuccessful
The work environment: Interesting / Uncomfortable / Engaging / Dangerous / Surprising / Worthless
The interactions with consumers: Interesting / Uncomfortable / Educational / Boring / Worthwhile / Unsuccessful
The interactions with supervisors/other co-workers: Cooperative / Unsuccessful / Friendly / Tense / Thoughtful / Uninformative
The interactions with consumers' family members: Friendly / Tense / Informative / Difficult / Cooperative / Unresponsive

 

 

Kirkpatrick's levels address much of the information that is needed to assess whether a course is viable. Taking this information one step further, calculating a return on investment (ROI) puts a dollar amount on these items. Soft data elements are more difficult to quantify; these include things like the work environment, job satisfaction, improvement in professional and personal skills, improvement in consumers' self-esteem, greater ability for consumers to be self-determining, and better relations with community and family members. It is difficult to assign a cost to such items (U.S. Department of Labor, 2004).

Hard data revolves around the actual wages that must be paid for employees to participate in the training. The training can be scheduled at a time that limits the need to replace a number of employees in the support staff position. Additionally, the cost of developing the training can be determined using a proof-of-concept document that should provide the total cost of development. This hard data can be quantified easily. The actual cost of training new employees has been documented over the years by the organization and is another hard element that can be quantified.

An ROI devised for the People First Language course shows total benefits from the training of $35,600.00 against a total cost of $11,676.00. This gives a net potential benefit of $23,924.00 and a net potential ROI of 204.9%. The data for quality of interactions, productivity, and reduced absenteeism are based on the small sample of historical data that was available. While revision of the course material is a possible cost factor, it was not figured into the ROI at this time, since the resources used are fluid in nature and update automatically.
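These figures follow the standard ROI arithmetic: net benefit is total benefits minus total cost, and the ROI percentage is the net benefit divided by the total cost. The short sketch below simply reproduces that calculation with the numbers reported above.

```python
# Minimal sketch: reproducing the ROI arithmetic reported for the course.
total_benefits = 35_600.00   # total benefits attributed to the training
total_cost     = 11_676.00   # total cost of developing and delivering the training

net_benefit = total_benefits - total_cost         # $23,924.00
roi_percent = (net_benefit / total_cost) * 100    # approximately 204.9%

print(f"Net potential benefit: ${net_benefit:,.2f}")
print(f"Net potential ROI: {roi_percent:.1f}%")
```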

Conclusion

Using all of these evaluations in the instructional design process helps the instructional designer create a training course that is beneficial for all participants. Examining each item from the perspective of what the evaluations look at gives the designer an idea of what is working and what needs work. The data collected must be analyzed properly, or all the hard work done in developing the tools goes to waste.

Looking at each of these tools indicates a variety of things. It is a way of ensuring the validity of the course. After a course has been developed and deployed, it is important to revisit it regularly to validate its content and organization. This is important for a simple reason: information changes. As new studies are done, new information is developed, and for the course to remain relevant to the learner, its information needs to stay current. Providing content that is correct and current is of prime importance for a learning module.

Instructional design, if done properly, requires that ongoing evaluation occur before, during, and after development. Using tools that look at a variety of issues gives a better idea of the overall effectiveness of the training. Carr (2002) discusses how these tools play into the overall evaluation and which of the two areas they evaluate. Making sure that the proper tool is used is important for assessing the information in a manner that is useful to the design and development of the training course.

Evaluation helps establish the soft data about the workplace, showing whether or not improvement occurs. When employees are satisfied with their work environment, less time is lost to absenteeism. There is also greater satisfaction among the consumers, which encourages them to become more self-sufficient. The ripple effect from improved self-esteem and improved professional and personal skills can be seen as the consumers make more self-determining choices (Nerney, 2005).

A workplace that is considered safe increases accountability, making employees willing to accept responsibility for how the organization is seen. An improved work environment reduces conflict between employees, supervisors, and consumers (Unlimited Coaching Solutions, 2004). It also empowers employees, giving them the self-confidence to let the consumers become more empowered (Nerney, 2005).

While evaluations like Kirkpatrick's four levels can show the effectiveness of training, designers need to be aware of other influences that can affect the outcomes the evaluations show. Factors other than training that could affect the actual or projected improvements include management changes, policy changes, wage changes, and advancement within the organization's hierarchy. Any one of them, or all of them together, can change the workplace environment, and that change can affect things like absenteeism, job satisfaction, and corporate loyalty. Awareness of these influences on the outcomes is important to the design.

These influences can be positive or negative. The training could have been very effective as delivered, yet negative influences within the organization may still show continued losses. When looking at the ROI, management needs to take these external factors into consideration as well. All of these items help in determining the worth of the training.

It is important to remember that evaluations, if improperly done, will not aid the design. In addition, the information gained from the evaluations must be analyzed; if it is not analyzed properly, the design can become flawed. Remembering what the evaluations are looking at, and adjusting them to gain the proper information, is important.

Evaluations aid in developing the overall design, and making good use of them is one of the tools a designer has to ensure a good product. Kirkpatrick's levels look at different aspects of the learning module, and the ROI places a dollar amount on the results of those levels of evaluation. All of these together give a picture of the value the training has. The analysis of the delivery method and the SKAs determines the need for the training in the first place, as well as the method of delivery.

 


References

Arbuckle, J. & Williams, B. D. (November 2003). Students' perceptions of expressiveness: age and gender effect on teacher evaluations. Sex roles: A Journal of Research. [Online] Retrieved January 18, 2005 from http://www.findarticles.com/p/articles/mi_m2294/is_9-10_49/ai_110813272

Belanger, F., & Jordan, D. (2000). Evaluation and implementation of distance learning: Technologies, tools, and techniques. Hershey, PA: Idea Group Publishing.

Belderain-Caputo, Y. (2004) ROI for ED 7505. Retrieved from the Capella site Unit 8 Discussion February 22, 2005

Blunt, A. (2005, February). A Blunt instrument for use by low-literate participants in summative and formative evaluations of adult education and development programs. Adult Education Quarterly, 55(2), 129-149. Retrieved February 14, 2005, from EBSCOhost via the Capella University Library.

Carr, W. F. (2002) Designing an effective training evaluation process. Retrieved January 2, 2005 from http://www.ispi.org/pdf/suggestedReading/Carr.pdf

Clark, D. (1995) Developing instruction or instructional design. Retrieved January 2, 2005 from http://www.nwlink.com/%7Edonclark/hrd/learning/development.html

IEEE. (2002) Reference guide for instructional design and development. Retrieved January 2, 2005 from http://www.ieee.org/organizations/eab/tutorials/refguide/mms01.htm.

Imperial Consulting Corporation. (n.d.) Return on investment worksheet. Retrieved February 25, 2005 from http://www.imperialcorp.com/roi.html  

Indiana University Advanced College Project (2004) Advanced College Project end-of-course evaluation. Retrieved January 17, 2005 from http://www.indiana.edu/~acp/pdf/teachereval.pdf.

Kirkpatrick, D. L. (1998). Evaluating training programs: The four levels. (2nd. ed.). San Francisco: Berrett-Koehler.

Mager, R. (1997). Making instruction work. (2nd ed.). Atlanta: The Center for Effective Performance.

Miner, N. (1998) Level 2 Evaluation Primer. Retrieved January 25, 2005 from http://www.trainingdr.com/articles/level2eval.htm

Nerney, T. (2005). Communicating self-determination: Freedom, authority, support and responsibility. Center for Self-Determination. [Online] Retrieved February 14, 2005 from http://www.self-determination.com/publications/tools.html

Reeves, T. (February 21, 1996) Educational paradigms. University of Georgia list-serv. [Online] Retrieved January 18, 2005 from http://www.educationau.edu.au/archives/cp/REFS/reeves_paradigms.htm.

Unlimited Coaching Solutions Inc. (2002-2004) Value of training. Retrieved February 14, 2005 from http://www.unlimitedcoaching.com/value_of_training.html

U.S. Department of Labor: Bureau of Labor Statistics. (February 27, 2004) Cost estimator. Retrieved February 22, 2005 from http://www.bls.gov/oco/ocos006.htm

U.S. Department of Transportation. (n. d.) Training evaluation guide. Retrieved February 7, 2005 from http://dothr.ost.dot.gov/HR_Programs/Learning___Development/LDFramework/EVLGUIDE.pdf

Wang, G. (2003) Value learning: Measurement journey. Educational Technology. Vol. 43. Iss. 1. p. 32 [Online] Retrieved February 14, 2005 from http://www.jmu.edu/wdc/news/wang_article.pdf

Winfrey, E.C. (1999). Kirkpatrick's Four Levels of Evaluation. In  B. Hoffman (Ed.), Encyclopedia of Educational Technology. Retrieved January 25, 2005, from http://coe.sdsu.edu/eet/Articles/k4levels/start.htm