Comparison Technique
The approach to comparing any two products starts with identifying the objective of the comparison (i.e., what you want from it). Once you have identified the objective, choosing the kind of approach you will use to compare the two products also plays a major role. Usually, every comparison should include a user survey to support the field data collected in the process. One must also clearly identify the role one occupies while comparing the two products: for example, a student's role, a researcher's role, a teenager's role, etc. After that, we need to determine the ease of use and the efficiency of the two products. This can only be done through a side-by-side comparison of the two products using a questionnaire survey, so that the result can be stated as a verdict (i.e., "A is better than B").
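As a rough illustration, the Python sketch below tallies side-by-side criterion scores into such a verdict. The product names, criteria, and ratings are hypothetical placeholders; in practice the ratings would come from the questionnaire survey.

# A minimal sketch: tally side-by-side criterion scores into an
# "A is better than B" verdict. All names and numbers are hypothetical.

def compare(name_a, name_b, ratings_a, ratings_b):
    """Print a per-criterion comparison and an overall verdict."""
    for criterion in ratings_a:
        a, b = ratings_a[criterion], ratings_b[criterion]
        winner = name_a if a > b else name_b if b > a else "tie"
        print(f"{criterion}: {a} vs {b} -> {winner}")
    total_a, total_b = sum(ratings_a.values()), sum(ratings_b.values())
    if total_a == total_b:
        print("Overall: tie")
    else:
        print(f"Overall: {name_a if total_a > total_b else name_b} is better")

# Hypothetical mean survey ratings for ease of use and efficiency.
compare("A", "B",
        {"ease of use": 4, "efficiency": 3},
        {"ease of use": 3, "efficiency": 5})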
Experimental Design
Experimental designs are often touted as the most "rigorous" of all research designs, the standard against which all other designs are judged. If we can implement an experimental design well, then the experiment is probably the strongest design with respect to internal validity, and internal validity is at the center of all causal (cause-effect) inferences. When we want to determine whether some program or treatment causes some outcome or outcomes to occur, we are interested in having strong internal validity. Essentially, we want to assess the proposition:
If X, then Y
or, in more colloquial terms:
If the program is given, then the outcome occurs
Unfortunately, it's not enough just to show that when the program or treatment occurs the expected outcome also happens. That's because there may be lots of reasons, other than the program, for why we observed the outcome. To really show that there is a causal relationship, we have to simultaneously address the two propositions:
If X, then Y
and
If not X, then not Y
Or, once again more colloquially:
If the program is given, then the outcome occurs
and
If the program is not given, then the outcome does not occur
If we are able to provide evidence for both of these propositions, then we've in effect isolated the program from all of the other potential causes of the outcome. We've shown that when the program is present the outcome occurs and when it's not present, the outcome doesn't occur. That points to the causal effectiveness of the program.
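To make the two propositions concrete, here is a toy simulated two-group experiment in Python. The baseline, effect size, and noise level are invented purely for illustration; the contrast between the group means stands in for the "if X / if not X" comparison.

import random
import statistics

random.seed(42)

# A toy two-group experiment. Baseline, effect size, and noise level
# are made-up numbers, used only to illustrate the design.
def outcome(received_program):
    baseline = 50.0
    effect = 10.0 if received_program else 0.0  # the program's causal effect
    return baseline + effect + random.gauss(0, 5)

treatment = [outcome(True) for _ in range(100)]   # program given (X)
control = [outcome(False) for _ in range(100)]    # program withheld (not X)

# The contrast between the groups supplies evidence for both propositions:
# the outcome is elevated when the program is present, and not otherwise.
print("treatment mean:", round(statistics.mean(treatment), 1))
print("control mean:  ", round(statistics.mean(control), 1))
print("estimated effect:",
      round(statistics.mean(treatment) - statistics.mean(control), 1))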
Survey Design
Many surveys are designed such that the length, content, or wording is not matched to the intended audience. A rule of thumb for all communications (and surveys are two-way communications, after all) is: Audience + Purpose = Design. Determine, with as much confidence as possible, your audience and the purpose of your survey. The survey design, which comprises the questions, invitation format, and interactivity, should be optimized for that audience and should focus on the defined purpose of your research.
- Keep the survey short. Respondents prefer shorter surveys to longer ones.
- Keep questions clear and concise. Wordy or complex questions can confuse or turn off respondents.
- If the content is controversial or sensitive, be sure to check your questions and responses to ensure that respondents can answer them as comfortably as possible. This may require that the survey be confidential (identifying information is kept secret by the surveyor and never revealed to any other parties) or anonymous (identifying information is not collected; respondents can only be matched to a survey by a random number). A minimal sketch of this random-number matching appears after this list.
- Avoid use of technical wording, including jargon and acronyms. Acronyms should be expanded unless the target audience commonly knows them.
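As a minimal sketch of the anonymous option mentioned above (all names here are hypothetical), a respondent can be matched to a completed survey only through a random token, and no identifying information is ever stored:

import secrets

# Anonymous survey handling: no names, emails, or addresses are kept;
# each respondent is matched to answers only through a random token.
def issue_anonymous_id():
    """Return the random token handed to a respondent at survey start."""
    return secrets.token_hex(8)

responses = {}  # token -> answers

def record_response(token, answers):
    responses[token] = answers

token = issue_anonymous_id()
record_response(token, {"Q1": 4, "Q2": "yes"})
print(token, "->", responses[token])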
Questionnaire Design
Questionnaires are an inexpensive way to gather data from a potentially large number of respondents. Often they are the only feasible way to reach a number of reviewers large enough to allow statistical analysis of the results. A well-designed questionnaire that is used effectively can gather information both on the overall performance of the test system and on specific components of the system. If the questionnaire includes demographic questions about the participants, the responses can be used to correlate performance and satisfaction with the test system across different groups of users.
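As a rough sketch of that kind of cross-tabulation, the following Python fragment groups a few hypothetical responses by a demographic field and reports mean satisfaction per group:

from collections import defaultdict
from statistics import mean

# Hypothetical responses: each carries a demographic field ("role")
# and a satisfaction rating for the test system.
responses = [
    {"role": "student", "satisfaction": 4},
    {"role": "student", "satisfaction": 3},
    {"role": "researcher", "satisfaction": 5},
    {"role": "researcher", "satisfaction": 4},
]

by_group = defaultdict(list)
for r in responses:
    by_group[r["role"]].append(r["satisfaction"])

for role, scores in by_group.items():
    print(f"{role}: mean satisfaction {mean(scores):.1f} (n={len(scores)})")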
It is important to remember that a questionnaire should be viewed as a multi-stage process beginning with definition of the aspects to be examined and ending with interpretation of the results. Every step needs to be designed carefully because the final results are only as good as the weakest link in the questionnaire process. Although questionnaires may be cheap to administer compared to other data collection methods, they are every bit as expensive in terms of design time and interpretation.
The steps required to design and administer a questionnaire include:
- Defining the objectives of the survey
- Determining the sampling group
- Writing the questionnaire
- Administering the questionnaire
- Interpreting the results
Case Study
A search engine comparison: www.hotbot.com vs. www.webcrawler.com
To compare two search engines, we first need to identify the objective, i.e., what we expect from a search engine. For this, we need to identify the role we play while comparing the two search engines; for example, the role can be that of a researcher, a student, a woman, a child, etc. Once we have a clear objective and role in mind, we can compare the search engines' functionality and features against that objective and role. For this purpose, we need to create a user survey that compares the two web sites side by side and asks questions about the features we are looking for.
The variables for which we conduct the survey can include Ease of Use, Information Displayed, Result Fetch Time, Sponsored Links, Position of Sponsored Links, Result Relevancy, Search Categorization, Composite Search, Number of Irrelevant Results, Search Preferences, Pre-filled Search Criteria, Offensive Content Filter, Extent of Customization, etc. To compare these variables, we need to design a questionnaire that can be used to evaluate the comparison results. Our questions must support the objective of measuring these variables, and we also need to select an appropriate scale for the questionnaire.
The scale used in the comparison of the two web sites (i.e., www.hotbot.com and www.webcrawler.com) rated each functionality or feature from 0 to 5, where 0 indicates the absence of the feature, 1 is poor, and 5 is good. Using this scale, we designed a questionnaire and conducted the survey. The questionnaire, variables, and results are documented in the following document.
Click here to download the Search Engine Comparison
Drawing a conclusion from the data
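A minimal Python sketch of this step is shown below. The ratings are placeholders on the 0-5 scale described above, not the actual survey results (those are in the downloadable document); the point is only to show how per-variable ratings can be aggregated into an overall verdict.

from statistics import mean

# Placeholder ratings on the 0-5 scale (0 = feature absent, 1 = poor,
# 5 = good). These are NOT the actual survey results; the real numbers
# are in the downloadable comparison document.
ratings = {
    "Ease of Use":              {"hotbot": 4, "webcrawler": 3},
    "Result Fetch Time":        {"hotbot": 3, "webcrawler": 4},
    "Result Relevancy":         {"hotbot": 4, "webcrawler": 3},
    "Offensive Content Filter": {"hotbot": 5, "webcrawler": 0},  # 0 = absent
}

for engine in ("hotbot", "webcrawler"):
    score = mean(r[engine] for r in ratings.values())
    print(f"www.{engine}.com: mean rating {score:.2f}")

# The engine with the higher mean rating across the surveyed variables
# is the better fit for the stated objective and role.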
Related or Valuable Links:
Guideline for Experimental Design
http://helios.bto.ed.ac.uk/bto/statistics/tress2.html
Guideline for Survey Design
http://www.surveysystem.com/sdesign.htm
Guideline for Questionnaire Design
http://www.cc.gatech.edu/classes/cs6751_97_winter/Topics/quest-design/
A Comparison-based approach for software inspection
http://portal.acm.org/ft_gateway.cfm?id=781956&type=pdf