This guide is meant as an introduction to assessment at the UNT Libraries with relevant resources for anyone interested in starting their own assessment project.
The following is a summary of notes from a guided discussion about survey development from the UNT Library Assessment Committee meeting on February 24, 2025.
Advice About Survey Question Development
Start with a research question and write survey questions that you hope will answer it.
Keep an open mind – you will learn something unexpected from every survey.
You may need to throw out responses because they were incomplete or because the respondents did not take the survey seriously. Record which responses you removed and why, particularly if you plan to publish findings based on the data.
Check whether a survey instrument has already been developed for your topic. Established instruments often come with extensive testing and validation already done by their creators.
Have others proofread your survey – most screw-ups happen because the survey was rushed.
Consider and work with your audience:
Write for your audience. Make it easy to understand and try not to assume prior knowledge. Use the vernacular and try to avoid jargon.
Developing a survey instrument is a long, intensive process. It often involves focus groups and interviews with the intended audience to establish definitions, check understanding, and learn how people think about a topic, followed by statistical analysis to narrow the questions down to the most informative ones. This is another good reason to use pre-built instruments: in the academic library context, by the time you finish building your own survey this way, you may already have exhausted your population.
If you are going to survey vulnerable populations, you need to understand that community and potential triggers that could come up for them.
Have your intended audience test your survey and ask them why they responded a certain way to questions.
Know what their tolerance is for survey length.
Know what motivates them to complete your survey. For example, Food for Thought (a pop-up survey of students) uses small, immediate food rewards (cookies), whereas surveys of librarians about pressing issues in the field may be motivating simply because respondents are passionate about the topic.
Have a limited number of people on your survey writing team. It also helps if everyone on the team has shared expectations and shared definitions. If you are doing a survey on “interdisciplinary collaboration,” make sure that everyone has a shared definition of what that means.
Experiment with your survey software. Often there are different question types that can help ask your question in a more efficient, effective, or clear way.
Tell people what you are hoping to learn from the information you are asking for (such as demographic information).
Don’t ask for demographic information you don’t need.
Long vs. Short Surveys
Choose based on your audience
Depends on your research area and topic
Do you need validation items so you can check reliability with Cronbach’s alpha?
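For reference, Cronbach’s alpha is a standard measure of internal consistency computed from a set of items intended to measure the same construct, so it only applies if your survey includes several such items. In its usual form, for k items with item variances \sigma^2_{Y_i} and total-score variance \sigma^2_X:

    \[ \alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right) \]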
Short Surveys
Quick response, people just get them done
Able to do guerrilla targeting
Fewer people quit midway through
Long Surveys
Can capture nuance – you can ask multiple questions about the same topic
Can include validation questions
Sometimes a survey needs to be long to fully get at the topic
People can tune out towards the end
Go from most important to least important in your section order so that you can plan around that drop-off
Some long surveys go by quickly because they are designed to be completed at speed
Tips for Success
Novelty helps get responses. Too many surveys wear people out.
Some universities limit the number of surveys sent to students – often to a single university-wide survey in which each unit gets an allotment of questions. This improves response rates because the survey has university-wide resources behind it, and it helps keep students from being overwhelmed.
Random selection can also help for online surveys like MINES, where only a small percentage of entries through the EZproxy server trigger the survey.
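As a rough illustration of that kind of random selection (a sketch only, not the actual MINES or EZproxy implementation; the sampling rate and function name below are hypothetical), a proxy-side gate in Python might invite only a small, configurable fraction of sessions to take the survey:

    import random

    SAMPLE_RATE = 0.02  # hypothetical rate: invite roughly 2% of proxy sessions

    def should_invite_to_survey(rng=random.random):
        # Each session is selected independently with probability SAMPLE_RATE,
        # which spreads invitations evenly across users and times of day.
        return rng() < SAMPLE_RATE

    # Example: decide at login time whether this session sees the survey pop-up.
    if should_invite_to_survey():
        print("Show survey invitation")
    else:
        print("Continue to the resource without interruption")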