Geoff And Francis

Open-Ended Questions – How to devise open-ended questions in your survey questionnaire for PhD research

An open-ended question is one where the response is recorded verbatim rather than against a list of pre-coded answers. It is nearly always an open question (it would be wasteful to record yes–no answers verbatim). Open-ended questions are also known as ‘unstructured’ or ‘free response’ questions. They are used for a number of reasons:

  • The researcher cannot predict what the responses might be, or it is dangerous to do so. Questions about what is liked and disliked about a product or service should always be open-ended, as it would be presumptuous to assume what people might like or dislike by having a list of pre-codes.
  • We wish to know the precise phraseology that people used to respond to the question. We may be able to predict the general sense of the response but wish to know the terminology that people use.
  • We may wish to quote some verbatim responses in the report or the presentation to illustrate something such as the strength of feeling among respondents. In response to the question ‘Why will you not use that company again?’, a respondent may write in: ‘They were that awful. They mucked me about for months, didn’t respond to my letters and when they did they could never get anything right. I shall never use them again.’ Had pre-codes been given on the questionnaire, this might simply have been recorded as ‘poor service’. The verbatim response provides much richer information to the end-user of the research.
  • Through analysis of the verbatim responses, clients can determine whether the customer is talking about a business process, a policy issue, a people issue (especially in service delivery surveys), and so on. This enables them to gauge the extent of any challenges they will face when reporting the findings of the survey to their management.

Common uses for open-ended questions include:

  • Likes and dislikes of a product, concept, advertisement, etc;
  • Spontaneous descriptions of product images;
  • Spontaneous descriptions of the content of an advertisement;
  • Why certain actions were taken or not taken;
  • What improvements or changes respondents would like to see.

These are all directive questions, aimed at eliciting a specific type of response to a defined issue. In addition, non-directive questions can be asked, such as what, if anything, comes to mind when the respondent is shown a visual prompt, and whether there is anything else that the respondents want to say on the subject. Questions that ask ‘What?’ or ‘How?’, or for likes or dislikes, will commonly be open-ended.

Open-ended questions are easy to ask but suffer from several drawbacks:

  • In interviewer-administered surveys, they are subject to error in the way, and in how much detail, the interviewer records the answer.
  • Respondents frequently find it difficult both to recognize and to articulate how they feel. This is particularly true of negative feelings, so asking open-ended questions about what people dislike about something tends to generate a high level of ‘Nothing’ or ‘Don’t know’ responses.
  • Without the clue given by an answer list, respondents sometimes misunderstand the question, or answer the question that they want to answer rather than the one on the questionnaire.
  • Analysing the responses can be a difficult, time-consuming and relatively expensive process.

In addition, some commentators see the verbosity of respondents as a problem with open-ended questions. It is argued that if one respondent says only one thing that he or she likes about a product, but another says six things, then the latter respondent will be given six times the weight in the analysis. To even this up, sometimes only the first response of the more verbose respondent is counted. In practice, interviewers are trained to extract as much detail as possible from respondents at open-ended questions. The objective is to identify the full range of responses given by all respondents and to determine the proportion of the sample that agrees with each of them.
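The two counting rules described above can be sketched in Python. The respondent identifiers and theme names here are invented purely for illustration:

```python
# Hypothetical data: each respondent's list of likes, in the order mentioned.
responses = {
    "R1": ["taste"],
    "R2": ["taste", "price", "packaging", "availability", "brand", "size"],
}

def mention_counts(data, first_only=False):
    """Count how many respondents mentioned each theme.

    With first_only=True, only each respondent's first answer counts,
    which evens up verbose and terse respondents as described above.
    """
    counts = {}
    for answers in data.values():
        considered = answers[:1] if first_only else answers
        for theme in dict.fromkeys(considered):  # one count per respondent
            counts[theme] = counts.get(theme, 0) + 1
    return counts

print(mention_counts(responses, first_only=True))  # {'taste': 2}
print(mention_counts(responses))  # every theme R2 mentioned is also counted
```

Counting all mentions gives the proportion of the sample agreeing with each theme; counting first mentions only gives each respondent equal weight.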

To analyse the responses, a procedure known as ‘coding’ is used. Manual coding requires a sample of the answers to be examined and the answers to be grouped under commonly occurring themes, a list usually known as a ‘code frame’. If the coder is someone other than the researcher, that list of themes needs to be discussed with the researcher to see whether it meets the researcher’s needs. The coder may have grouped answers relating to price and to value for money together as a single theme, but the researcher may see these as distinct issues and want them separated. The researcher may also be expecting specific responses to occur that have not arisen in the sample of answers listed. It may be important for the researcher to know that few people mentioned a particular point, but in order to be sure that this is the case, the theme must be included in the code frame. When the list of themes is agreed, each theme is allocated a code, and all questionnaires are then inspected and coded according to the themes within each respondent’s answer.
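As a minimal Python sketch of the final step, an agreed code frame can be applied to answers a coder has already tagged with themes. The themes and numeric codes below are hypothetical:

```python
# A hypothetical agreed code frame: theme -> numeric code.
code_frame = {
    "poor service": 1,
    "slow response": 2,
    "billing errors": 3,
    "price": 4,            # kept separate from value for money
    "value for money": 5,  # at the researcher's request
    "other": 99,
}

# Simplified: the coder has already tagged each verbatim answer with themes.
coded_answers = [
    ["poor service", "slow response"],
    ["billing errors"],
    ["poor service"],
]

def apply_codes(answers, frame):
    """Translate each respondent's themes into the agreed numeric codes."""
    return [[frame.get(theme, frame["other"]) for theme in themes]
            for themes in answers]

print(apply_codes(coded_answers, code_frame))  # [[1, 2], [3], [1]]
```

A respondent's answer can carry several codes, and any theme missing from the frame falls into the catch-all ‘other’ code.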

Manual coding is a slow and labour-intensive activity, particularly when there is a large sample size and the questionnaires contain many open-ended questions. Most research agencies will include a limit on the number of open-ended questions in their quote for a project, because they are such a significant variable in the costing.

There are a number of computerized coding systems available, which are increasingly used by research companies. These reduce, but do not eliminate, the human input required, and so deliver some cost savings.
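At its simplest, a computerized coding system might match verbatim answers against keyword rules. This Python sketch uses invented rules and is only a toy approximation of commercial systems, which is why human review remains necessary:

```python
import re

# Hypothetical keyword rules: a verbatim answer is assigned every theme
# whose pattern matches it. Real systems are far more sophisticated.
RULES = {
    "poor service": re.compile(r"awful|never get anything right", re.I),
    "no response": re.compile(r"didn't respond|no reply|ignored", re.I),
}

def auto_code(verbatim):
    """Return the themes whose pattern matches the verbatim text.

    A human coder would still review unmatched or ambiguous answers,
    which is why such systems reduce but do not eliminate manual input.
    """
    return [theme for theme, pattern in RULES.items() if pattern.search(verbatim)]

answer = "They were awful. They didn't respond to my letters."
print(auto_code(answer))  # ['poor service', 'no response']
```

An answer matching no rule returns an empty list and would be routed to a human coder.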
