Darren Bircher 20160650
Human Computer Interaction
CO2702
Assignment Part 4
29/04/2007
Users Chosen:
We didn't want to target any particular age range, because we wanted results spread across a variety of ages. The most popular age range among our testers was the 19-25 category, as we had access to a large number of university students. We specifically targeted users with glasses because the display was relatively small compared to some of our previous designs, and we wanted to learn whether that made a considerable difference. Fortunately for us, none of our 30 testers commented that the screen wasn't large enough, or that the display/text wasn't clear enough for them to read. We also targeted users with poor motor abilities and reduced hand/eye coordination, to see whether the positioning of the buttons was efficient. We tried to avoid users with no form of disability, as we assumed they would find the device easy enough to use, which wouldn't give us accurate information on how effective it is for disabled/restricted users. Although testing the elderly would have told us more about how clear the layout/display was, we felt it too was unnecessary, because they didn't have the same desire to keep healthy and in shape as the younger generations. Once we had selected the appropriate age ranges, specific user characteristics and so on, we set out to find users to test.
In conclusion, it was very important for us to ask many different people from a variety of age groups, as each individual will have different methods for testing the device and may attempt tasks we had not thought of, as well as complete the given tasks in different ways. Not everyone will use the device in the same way; this is especially helpful for fault finding, as well as for gaining feedback from their opinions of the technology.
Tasks Selected:
To learn how efficient our device is, we had to test as many functions and features as possible. We created a simple check list to use during testing: one for the individual to complete when we used the indirect approach, and another for the direct approach. Both check lists contained the same criteria. The tasks we selected varied in difficulty: some were easy to complete while others were more challenging. They ranged from turning the device on and off, to navigating through the device, using sound, and using the touch-screen tools (e.g. the pen and Message Ease). Initially we had planned to test just the navigation; however, we decided against this because we wanted to find out how functional the device is as a whole. The tasks we provided for the users are listed below:
We could have asked the users to complete more tasks, but after consideration we decided against this because we wanted to see whether they would discover any other features for themselves. Otherwise they might simply have worked through the check list, completing the tasks one by one, without looking for anything extra.
I have included three examples of the
direct and indirect check lists in the appendices.
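To make the check-list idea concrete, here is a minimal sketch (my own illustration, not taken from the appendices) of how the same task criteria could back both the direct and indirect check lists; the task names are based on the examples given above, not the full list.

```python
# Minimal sketch of a shared check list; task names are illustrative,
# drawn from the examples in this report rather than the appendix lists.
TASKS = [
    "Turn the device on",
    "Navigate through the device",
    "Use the sound features",
    "Enter text with the touch-screen pen",
    "Enter text with Message Ease",
    "Turn the device off",
]

def blank_checklist(approach):
    """One check list per user; both approaches share the same criteria."""
    return {"approach": approach, "completed": {task: False for task in TASKS}}

direct = blank_checklist("direct")
indirect = blank_checklist("indirect")
direct["completed"]["Turn the device on"] = True  # tick a task off as done
```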
Evaluation Techniques:
In order to gain as much feedback about our device as possible, we decided to use a variety of evaluation techniques. As we had 30 people testing the device, we felt that interviewing every person would be too time consuming, and that we could only ask a small number of questions if people were to remain willing to participate. The answers had to contain as much information as possible, so we felt it best to leave the questions open (e.g. finishing a question with 'why?' or asking the user to explain). We decided to interview 10 of the 30 users and to keep each interview to approximately five minutes. The interviews were conducted with the users both before and after the testing: a couple of questions were asked beforehand to determine whether the user had any disabilities and what their expectations of the device were, and several questions were asked afterwards to see whether their disabilities had affected their testing experience.
We also created a questionnaire that each individual needed to complete. The questionnaires contained 12 questions, split into different sections (e.g. open-ended, scale, yes or no). They were specifically designed to get as much information from the user as possible while remaining simple to complete. I have attached three example questionnaires in the appendices.
As a group we decided to conduct
observations of the users testing the prototype. We split the users into two
groups so that we could complete both direct and indirect observations.
For the direct observations, one member of our group participated in testing the prototype with the user, offering assistance where necessary. We had our own check list that we updated once the user had completed each stage. We felt this method would benefit our research because we provided a natural environment for the users (one which resembled a workplace) and let them complete the tasks in their own time. Despite this, we did find that our presence could be disturbing for the users, because in a natural environment they wouldn't have a moderator standing over their shoulder as they used a nutritional PDA. We also encouraged the users to "think aloud", which gave us important insight into how the system worked for them. The interaction between the user and our group member allowed us to gain feedback, through conversation, as each task was performed and completed.
We found that the indirect observations made the users feel the most relaxed. We simply gave them the check list and asked them to mark down each task as they completed it. Some users did ask what they should do if they needed assistance with any of the tasks (because we hadn't created a complete user manual); we provided them with a handout of the presentation (from part 3 of the assignment) which showed them how to use the device. We also provided notepads so that each user could keep a mini diary of what they found easy or, more importantly, difficult. We did consider using technology such as video/audio logging and eye-tracking devices, but decided against this because we wanted to keep the environment as natural and as realistic as possible.
So in total, our evaluation methods included:
- Interviews with 10 of the 30 users.
- A questionnaire for all 30 users.
- Direct observations (co-operative, think aloud).
- Indirect observations (diaries, manuals).
The contents of the appendices include:
- 3 x questionnaires
- 3 x interviews
- 3 x direct observation check lists
- 3 x indirect observation check lists
All of the attachments mentioned above were completed by users. Other members of Team Tantastic have the rest of the documents.
Results:
As mentioned earlier, examples of textual reports are provided in the appendices in the form of questionnaires, interviews and observation check lists. Before we created these questions, check lists and so on, we had to consider how we would extract the results. In the next part of this report I will cover the results of the questionnaire and of the observations. I will not cover the answers to the interviews, because the questions were open-ended and so could have a very large variety of answers.
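As an illustration of how the scale answers could be extracted, below is a minimal sketch (my own, not the method we actually used for this report) that tallies one Likert-style question; the example answers are hypothetical.

```python
from collections import Counter

# The five points of the agreement scale used on the questionnaires.
SCALE = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

def tally(responses):
    """Return (point, count, percentage) for each scale point, in order."""
    counts = Counter(responses)
    total = len(responses)
    return [(p, counts[p], 100 * counts[p] / total) for p in SCALE]

# Hypothetical answers to one question from a handful of users.
answers = ["agree", "strongly agree", "strongly agree", "agree", "neutral"]
for point, count, share in tally(answers):
    print(f"{point}: {count} ({share:.0f}%)")
```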
The image above shows that we decided to test an equal number of male and female users. Being on an ICT course, it would have been very easy for us to ask 30 males, but we felt we needed a range of results, as it wouldn't only be males using the device in a real-world scenario.
The image above shows the different age ranges and the total number of users within each group. As you can see, the most common age ranges were the younger generations; this was due to having easy access to users within those groups. Ideally we would have liked to include another age group, under-12, because the majority of the under-18 range were actually 17 or 18. I feel we would have had a more accurate range of results with an equal number of users from each group (e.g. 3 males and 3 females from each). The only reason we didn't do this was the lack of access to people within the different ranges, and the lack of a mixture of sexes within those ranges too.
The image above relates to a statement I made previously: that we were specifically targeting users with disabilities (primarily users with reduced eyesight). This was because we felt we would gain more useful results by testing the device with disabled users. I feel the results actually benefitted the group more, because we could compare how effectively disabled users could operate the device compared to non-disabled users.
The following results are what we considered the most important part of the questionnaires, because the answers provided us with feedback on how the users coped with testing the device, what they liked and disliked, and what they found to be efficient and, most importantly, inefficient.
The above image shows how the users rated the navigation system for the prototype. There were two main ways to navigate: using the touch-screen pen or using the directional pad. These results show that our methods must have met the requirements of those who tested the device. As a group we were very pleased that all of the answers were either "agree" or "strongly agree"; this suggests we wouldn't need to make any improvements to this area of the prototype.
Of all the questions we presented to the users, this particular area had the most concerning answers. Despite this, we didn't feel it was too much of a problem, because the content of the prototype was only intended to show the basic information that would be used within each area. The finalised version of the device would include much more content, so if we were to retest the device this would be an important area to improve upon.
These results showed that the design of our prototype was good enough to meet the requirements of the users. Although 21 of the 30 users rated the layout as "strongly agree", there is still room for improvement to raise the ratings of the other 9 users. When questioned, these users stated that producing the device in a range of colours would improve their rating, as they thought the neutral colour was too bland. In conclusion, I was very pleased with the results of this section, as this was one of the major areas for testing. It meant that the users were satisfied with the design, colour, positioning of the buttons/screen, size of the device, weight of the device and so on.
Almost two thirds of the users rated the display very highly. The area for concern, though, was that one user disagreed with the statement. Although it is only one user, we would still prefer to meet the needs of all of the users. The complication is that if we changed the display to meet this one user's requirements, the other users' ratings might decrease because of the changes. The user stated that the backlight was too bright and began to strain their eyes. A way to overcome this would be to implement a button to increase or decrease the brightness of the backlight.
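Here is a minimal sketch of that suggestion (my own illustration; no such control exists on the prototype): each press of the button steps the backlight up or down within a fixed range, so one user's preference needn't change the default for everyone else.

```python
# Hypothetical backlight control: each press of the brightness button
# moves the level one step, clamped to the range the screen supports.
MIN_LEVEL, MAX_LEVEL = 0, 10

def adjust_brightness(current, step):
    """Return the new backlight level, clamped to the supported range."""
    return max(MIN_LEVEL, min(MAX_LEVEL, current + step))

level = 8                             # hypothetical default level
level = adjust_brightness(level, -1)  # user presses "dimmer"
level = adjust_brightness(level, -1)  # and presses again
print(level)                          # 6
```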
This was by far our greatest set of results: 28 out of 30 users strongly agreed that the prototype was easy to use, and the other 2 also agreed with the statement. This meant there wasn't much room for improvement in this particular area.
The above two diagrams really proved to our group that our prototype was a success. Every user stated that they would continue to use the device if we were to make a final design, and that they would also reuse the software if it was available on other devices. This opened up new ideas to the group in terms of expanding the nutritional information. If we had had testers saying that they would not use the device or the software, it would have given us the chance to ask why and to let the user suggest further improvements. So in a way we were happy with the results, but we would also have liked some feedback on areas that might have needed improving, even if only slightly.
Improvements for the Prototype:
As stated in the results section above, there was space for improvements and possibly expansion; these were:
Conclusion:
I have found that no matter how hard I try to meet the requirements of a universal design, not everyone will be completely satisfied with it. I have found that the most effective way to design and implement a project is to meet the needs of the majority of the users, because if I kept changing the design around for the minority, it might start to move away from the requirements of the majority.
If I were to redo this assignment, I would possibly complete further testing to get a greater range of feedback, to see if there was anything else I could improve upon. I feel that the prototype was a great success, and we were very fortunate that 28 of the 30 users were more than satisfied with it.
If I were to make any considerable changes, it would be to prepare and plan the project more efficiently, so that we would have more time to implement the features in greater depth. For example, each user stated on the observation check lists that the sound was not functioning correctly; this was because we hadn't implemented it thoroughly in the design.