What kind of survey method is used to define questions and the possible answers using the internet?

Designing a Survey for Public Transport Users

Luigi dell’Olio, ... Rocio de Oña, in Public Transportation Quality of Service, 2018

4.3.3 Online Surveys

Online surveys done on the Internet and using smartphones are gaining in popularity because of advantages such as their lower cost and the widespread Internet access of the population. However, they do have certain disadvantages that need to be considered (see Table 4.3).

Table 4.3. Advantages and Disadvantages of Online Surveys

Advantages

Convenience: the respondent can fill in the questionnaire at their convenience. Given the widespread and increasing availability of smartphones, the public can also answer the questions at any location and at any time of day.

Visual supports: web-based surveys allow the use of visual and audible aids, which can help the respondent give reliable answers.

Speed: the information is directly collected digitally and the raw data is very quickly available for processing.

Cost/benefit: given that the entire process is more automatic and does not require the training of any interviewers, it is a much cheaper type of survey to conduct than intercept, telephone, and postal surveys.

Public transport users: if a sufficiently large number of public transport user email addresses are available, then the survey can quickly reach the correct sample size.

Disadvantages

Lack of interviewer: this is one of the main disadvantages of this kind of survey. As there is no interviewer, the respondent can avoid certain questions, misunderstand them or superficially read the instructions for filling out the form. In this sense the collected data will probably be of a lower quality than the data coming from a survey using interviewers.

Limited access: given that correct participation in this kind of survey requires a computer, telephone, or smartphone with an internet connection, the survey may only reach a particular socio-demographic user profile. However, this problem is losing importance as internet access is constantly on the rise.

Self-selection bias: if only certain groups of individuals, those who are more motivated or have greater internet skills, take part in the survey, the sample could be biased, limiting its representativeness.

The disadvantages of online surveys are, therefore, important and need to be thoroughly considered before choosing the final survey method to be used. Nevertheless, online surveys are relatively simple to organize. It is beneficial to send an initial e-mail to the target users to introduce the research, the basic instructions, and a link to the online survey form. The form should be presented with a clear and concise explanation of the goals of the study and instructions about how to fill it in. The organizations taking part in and financing the study should also be clearly visible to increase its credibility. The design of the form should not use too many colors or typefaces, which may reduce legibility and make the questions harder to answer. A second follow-up reminder email can be sent after a few days to increase the participation rate.
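As a minimal illustration of the invitation procedure described above (an introductory e-mail with a link to the form, followed by a reminder a few days later), the Python sketch below sends both messages. The SMTP host, addresses, and survey URL are placeholders invented for the example, not details from the chapter.

```python
# Illustrative sketch only: host, addresses, and survey URL are hypothetical.
import smtplib
from email.message import EmailMessage

SURVEY_URL = "https://example.org/transport-survey"   # placeholder link to the online form
SMTP_HOST = "smtp.example.org"                        # placeholder mail server

def send_invitation(recipient: str, reminder: bool = False) -> None:
    """Send the initial invitation or, if reminder=True, a follow-up reminder."""
    msg = EmailMessage()
    msg["From"] = "survey-team@example.org"
    msg["To"] = recipient
    msg["Subject"] = ("Reminder: " if reminder else "") + "Public transport quality survey"
    msg.set_content(
        "We are studying the quality of the local public transport service.\n"
        "The questionnaire takes about 10 minutes and can be answered on any "
        f"device with an internet connection:\n\n{SURVEY_URL}\n\n"
        "The study is funded by the regional transport authority."  # credibility note
    )
    with smtplib.SMTP(SMTP_HOST) as server:
        server.send_message(msg)

# Initial mailing, then a reminder a few days later to non-respondents only.
invited = ["user1@example.org", "user2@example.org"]
for address in invited:
    send_invitation(address)
# ... after a few days, assuming `responded` holds the e-mails of completed questionnaires:
responded = {"user1@example.org"}
for address in set(invited) - responded:
    send_invitation(address, reminder=True)
```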


URL: https://www.sciencedirect.com/science/article/pii/B9780081020807000045

Survey designs

Kerry Tanner, in Research Methods (Second Edition), 2018

Sampling issues

Online surveys cannot reach the entire population. Their use is limited to those with email and internet access; those lacking such access, compared with those who have it, are likely, for instance, to have lower levels of education and income, lower rates of literacy and computer literacy, to be older, and to belong disproportionately to certain ethnic groups. This inherent coverage bias is a major disadvantage. It is difficult to derive a scientific sample of the wider population for an online survey because no suitable sampling frame is available: sampling frames (e.g., email lists) are typically available only for closed populations or specialised target groups. The limited capacity for deriving a scientific sample of the broader population means that many web-based surveys draw on non-probability samples, with their associated problem of self-selection bias. Research reporting can then only make valid claims for the particular group of respondents and cannot generalise to the wider population. However, such non-probability samples are acceptable for exploratory research or as part of a multi-method or multi-mode survey approach (i.e., where supplementary means are used to reach sections of the population not represented in the sample).

Sue and Ritter (2007) provide a useful overview of sampling techniques for online surveys, identifying the following approaches. Saturation sampling is where all members on a particular e-list are invited to participate (i.e., a population census approach); here the key issues are minimising non-response bias and ensuring that each person can respond only once. There are some methods that attempt to derive an acceptable probability sample for an online survey, including: contact by telephone, possibly using random digit dialling, and inviting the person to log in to a website; using commercially available pre-recruited panels of individuals who have indicated their availability for repeated survey participation; and intercept sampling, where randomly or systematically programmed pop-up windows invite a site visitor to participate in a web survey. However, all of these methods have their limitations.

Non-probability sampling approaches for an online survey include convenience sampling, volunteer opt-in panels and snowball sampling. Convenience sampling occurs where a survey is posted on a website and all visitors to that site are invited to respond, or when an invitation to participate is circulated via, for example, online lists or Twitter. Here respondents self-select, and there is no way to differentiate the characteristics of respondents and non-respondents; it is also difficult to prevent multiple responses from one person. Volunteer opt-in panels involve assembling a group of people who are willing to participate in future online surveys, and collecting relevant demographics about them when they register. Selection of a subset of panel members for a future online survey is based on the desired demographics for the particular survey. Snowball sampling or referral sampling can be used in small specialist communities where people are known to each other.
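To make the contrast between these approaches concrete, the sketch below uses an invented e-mail frame to show saturation sampling (inviting everyone on the list), a simple random probability subsample of the same frame, and de-duplication of self-selected responses so that each person is counted only once. It illustrates the ideas above; it is not code from Sue and Ritter (2007).

```python
# Hypothetical e-list used only to illustrate the sampling approaches described above.
import random

email_frame = [f"member{i}@example.org" for i in range(1, 501)]  # closed-population frame

# Saturation sampling: invite every member of the list (a census of the frame).
saturation_sample = list(email_frame)

# Probability sampling from the frame: a simple random subsample without replacement.
random.seed(42)
probability_sample = random.sample(email_frame, k=100)

# Convenience/volunteer responses arrive self-selected; at minimum we can
# de-duplicate so that each address counts only once.
raw_responses = ["member3@example.org", "member3@example.org", "visitor@example.com"]
unique_responses = list(dict.fromkeys(raw_responses))  # preserves arrival order

print(len(saturation_sample), len(probability_sample), unique_responses)
```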


URL: https://www.sciencedirect.com/science/article/pii/B9780081022207000066

Gathering Information From Public Participation Processes

Ted Grossardt, Keiron Bailey, in Transportation Planning and Public Participation, 2018

Online Participation

Online survey tools and services are an expected part of many public participation processes. They offer the appeal of apparently unlimited access, time efficiency, and seeming clarity. The Evaluative strategy is thus similar to those available with an ARS, except that qualitative comments cannot dynamically be evaluated by subsequent participants, due to the asynchronous nature of online surveys. Representation methods are limited to those amenable to a browser: text, still images, 3D simulations, and videos. Feedback Modeling can take the same qualities as ARS feedback and thus can be matched up with it. Hybrid approaches can be designed, where F2F ARS protocols are subsequently mounted on online surveys, to help improve Q, I, and E in a project.

However, there remain significant technology and knowledge barriers faced by many of the general public, and so online surveys may be beyond their capabilities to access. Groups characterized by low income, advanced age, and low education levels will not be penetrated by such tools, thus Inclusion is qualified. Efficiency for those who can access online surveys is unparalleled: they can access and complete the survey at their leisure, and so the systems can be very time-efficient for the user, and potentially for the project team, if enough participants are accessed in this manner. Moreover, one survey can be designed and packaged and delivered at different times, and via different web channels.

We welcome what seems a more rational discussion of online methods. After a period of a number of years working in this field (perhaps from the late 1990s to about 2010 or so) during which it seemed—certainly to us—that online participation was touted by partners, and proponents, and some public officials and agencies (POAs) and ProPs, as THE answer, and BETTER than F2F methods in the total absence of evidence of its quality and performance, its problematic aspects are now being increasingly addressed in research (Goldberg, 2010), and even in articles titled, e.g., “The dark side of online participation: exploring non-, passive, and negative participation” (Lutz and Hoffman, 2017). We have not found a quick and realistic one-page summary of the benefits and problems of online methods in transportation literature, but we point readers to the summary page of the Association of Biomolecular Research Facilities (2017) where scientists use online forums to share ideas and data. The caution about the lack of richness and reflexivity of online media compared with in-person groups is well-made, as is the need for “effective” online participation by which ABRF means “Taking time to communicate your ideas clearly makes them easier to find. Others are more likely to pay attention to you, and more likely to respond thoughtfully.” Whether users of transportation systems will take such time and communicate “Clearly” is open to question. We remind readers of the earlier discussion in Chapter 4, of the participation patterns evident in the online deliberative portal for Puget Sound transportation improvement as discussed by Nyerges et al. (2010) and the numerous similar findings by Beyers (2004) in his analysis of thousands of postings on a Dutch newspaper discussion forum. Beyers (2004, p.11) found that “discussions degenerate into a tug of war between a few very dominant participants on a regular basis.” Avoiding this problem by focusing online participation into nondeliberative channels, e.g., simple scoring or evaluation of statements or images or visualizations, and establishing time limits for inputs, are key design objectives to maximize value of online input.

Process Quality can be compromised, in that there is little transparency (results are not immediately available to participants, and so there can be the perception/suspicion of data manipulation). Similarly, because there is no opportunity to introduce new ideas that can then be immediately evaluated by the survey tool, Clarity of feedback is somewhat compromised. This is as distinct from an ARS-guided meeting where new ideas can be solicited, recorded, and evaluated in real time. Think about the limitations of online platform Q like this. We point readers to the obvious end-case scenario of an all-online election for President. Are you, the reader, ready for this? Do you have sufficient faith in an online platform to accept the results of such an online-only election? Or do you fear Russian or other troll farms spamming false votes into the counting systems and destroying the integrity of this fundamental democratic choice? We fear all these things and, in any case, lack sufficient trust in online systems for decision-making of this type. The recent Facebook, Experian, IRS, and numerous other large-scale data hacks demonstrate how insecure the online environment is. It is comical to believe that somehow user information can be protected or safeguarded; let us be realistic, everything is already on the dark web! So, for high-stakes data like your Social Security Number, we bet you will consider very carefully whether you wish to enter it into an online platform.

Now scale the question down to voting on a bypass for your town: perhaps you still have reservations, and even if not, plenty of others do. So, we must be realistic and ensure that online participation is enacted within an information domain that is reasonable to the participants. We are realists; there are occasions when online participation is the only feasible option, and we have designed such systems for many projects. But as a general principle, this is one reason we recommend using online systems to augment face-to-face participation methods rather than, in general, substituting directly for them or replacing them entirely.


URL: https://www.sciencedirect.com/science/article/pii/B9780128129562000050

International Librarianship Survey

Karen Bordonaro, in International Librarianship at Home and Abroad, 2017

Timeline and Distribution Channels

The online survey was initially opened at the end of May 2016 and sent to multiple librarian listservs, including IFLA (International Federation of Library Associations), ACRL (Association of College and Research Libraries) Academic Library Services to International Students, ACRL International Perspectives on Academic and Research Libraries, the Canadian Association of Research Libraries, Ontario Council of University Libraries, and American Library Association (ALA) International Relations Round Table members. In addition, it was forwarded to the International Association of School Librarians, the IFLA School Libraries Network, and the European Network for School Libraries and International Literacy. Librarians who initially received the survey request passed it on to interested colleagues and co-workers in their own libraries and further personal professional networks as well. The survey remained open through June, July, and August 2016 to accommodate respondents who might be interested in a follow-up personal interview. Personal interview options were made available for any interested librarian attending the ALA annual conference in Orlando, Florida, at the end of June 2016. For librarians interested in participating in an individual interview who were not attending the ALA annual conference, provisions were made to conduct interviews by phone or email later in the summer of 2016.


URL: https://www.sciencedirect.com/science/article/pii/B978008101896500003X

Recognizing intra-urban disparities in smart cities: An example from Poland

Piotr Maleszyk, in Smart Cities and the un SDGs, 2021

4.1 Research design and data

An online survey was conducted from September 23 to October 10, 2018, along with online voting on participatory budget projects. All voters were asked to fill out a short questionnaire in which they were asked to assess, among other things, the quality of public urban amenities in their districts on an interval scale ranging from 1 (very bad) to 7 (very good), with an additional “I don’t know/not applicable” nonresponse item. The categories of public amenities included in the scope of the assessment were as follows: green areas, accessibility by car, street condition, pavement condition, availability of parking space, city cleanliness, accessibility by public transport, sport facilities and amenities, adaptation of public spaces for persons with disabilities, cycling lanes, availability of kindergartens, quality of education in primary schools, and children’s playgrounds. Additionally, participants were asked to assess the safety (separately during the day and at night) and air quality of the neighborhood as well as the kindness of local residents—three space-related categories that do not fall into the “public amenities” category, yet apply to the concept of “public goods” and significantly influence the well-being of urban residents.

The survey was completed by 5753 residents aged 15 and over, out of 19,809 voters in the same age group, which translates into a response rate of 32%—a very high percentage compared with most online surveys. The number of participants exceeded 50 in 21 out of 27 districts and surpassed 100 in 18 districts. Overall, the participants accounted for 2% of the total population aged 15 and over. A summary of respondent statistics is presented in Table 1.

Table 1. Summary statistics.

Category | Respondents (number) | Respondents (%) | City population aged 15+ (%)

Age
15–24 | 496 | 8.6 | 10.3
25–34 | 1943 | 33.8 | 17.7
35–44 | 2136 | 37.1 | 19.2
45–54 | 678 | 11.8 | 14.2
55–64 | 327 | 5.7 | 15.7
65 and above | 173 | 3.0 | 22.9

Sex
Male | 2512 | 43.7 | 45.3
Female | 3197 | 55.6 | 54.7
No answer | 44 | 0.8 | -

District
Abramowice (1) | 20 | 0.3 | 0.6 (a)
Bronowice (2) | 419 | 7.3 | 4.5 (a)
Czechów Południowy (3) | 273 | 4.7 | 7.5 (a)
Czechów Północny (4) | 310 | 5.4 | 5.2 (a)
Czuby Południowe (5) | 589 | 10.2 | 5.3 (a)
Czuby Północne (6) | 543 | 9.4 | 8.5 (a)
Dziesiąta (7) | 453 | 7.9 | 6.8 (a)
Felin (8) | 168 | 2.9 | 2.2 (a)
Głusk (9) | 39 | 0.7 | 0.6 (a)
Hajdów-Zadębie (10) | 30 | 0.5 | 0.7 (a)
Kalinowszczyzna (11) | 357 | 6.2 | 7.3 (a)
Konstantynów (12) | 79 | 1.4 | 2.5 (a)
Kośminek (13) | 126 | 2.2 | 4.0 (a)
Ponikwoda (14) | 326 | 5.7 | 3.9 (a)
Rury (15) | 382 | 6.6 | 9.4 (a)
Sławin (16) | 245 | 4.3 | 3.4 (a)
Sławinek (17) | 156 | 2.7 | 2.2 (a)
Stare Miasto (18) | 16 | 0.3 | 0.7 (a)
Szerokie (19) | 98 | 1.7 | 1.0 (a)
Śródmieście (20) | 164 | 2.9 | 6.1 (a)
Tatary (21) | 144 | 2.5 | 3.6 (a)
Węglin Południowy (22) | 316 | 5.5 | 2.6 (a)
Węglin Północny (23) | 99 | 1.7 | 1.2 (a)
Wieniawa (24) | 149 | 2.6 | 3.8 (a)
Wrotków (25) | 186 | 3.2 | 4.5 (a)
Za Cukrownią (26) | 33 | 0.6 | 1.1 (a)
Zemborzyce (27) | 33 | 0.6 | 0.9 (a)

Total | 5753 | 100 | 100

(a) Estimates based on data on the registered city population aged 18 and above.

One of the key issues with government-citizen communication enhanced by ICT is its nonrepresentativeness (Nam, 2012). In extreme cases, urban citizens are represented by a small group of urban activists willing to undertake time-consuming actions and whose ideas are often skewed and inadequately reflect the opinions and needs of the “silent majority” of urban residents. The common reality of public participation is that those who show up in open meetings either have negative views or are passionate about particular issues or values, while less assertive people are severely underrepresented (Haklay, Jankowski & Zwoliński, 2018). This also applies to cities in Poland (Pistelok & Martela, 2019), including Lublin. In this regard, many appealing high-tech solutions that allow for real-time collaboration, often labeled as “smart,” might actually fail to increase the number and representativeness of urban residents involved in citizen participation. In turn, recognizing intra-urban differences in neighborhood satisfaction requires maximizing the number of individuals expressing their opinion. In this context, providing a simple, intuitive, and inclusive tool was the optimal framework for our research.

Our questionnaire accompanied Lublin’s participatory budget voting, which is a highly popular citizen participation measure in Lublin, engaging on average 10%–15% of the city’s residents each year. Although not fully representative, it attracts a large pool of residents with above-average interest in various neighborhood-related matters. The involvement of individuals with high social capital is favorable for the reliability of the data obtained. The sample accurately reflects the gender distribution of the urban population. Furthermore, the survey ensured sufficient representation from all 27 districts, while differences in the absolute number of participants across districts derive largely from population differences rather than survey bias (see Table 1). In terms of age, groups aged 55 and above are underrepresented, as older people are less likely to vote via the Internet. Nevertheless, unlike many other internet tools (see, e.g., Czepkiewicz, Jankowski, & Młodkowski, 2017; Haklay et al., 2018), the middle-aged population was willing to fill out the questionnaire. One might consider applying correction techniques, yet we did not employ a weighting adjustment for two reasons. Firstly, detailed data on citizens’ age across districts come from the city registry and thus do not include a large number of unregistered residents who also voted on the participatory budget and filled out the online questionnaire, which could be a further source of bias. Secondly, assessments of local amenities are, according to this dataset, distinguished largely by the quality of the area rather than participants’ age.
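For illustration only: the authors chose not to weight their data, but the sketch below shows what the post-stratification correction they mention would involve, using the age shares reported in Table 1 (respondent share versus city population share). The resulting weights are illustrative and were not applied in the study.

```python
# Illustration only: the study did NOT apply these weights (see the discussion above).
# Shares taken from Table 1 (respondents % vs. city population aged 15+ %).
respondent_share = {"15-24": 8.6, "25-34": 33.8, "35-44": 37.1,
                    "45-54": 11.8, "55-64": 5.7, "65+": 3.0}
population_share = {"15-24": 10.3, "25-34": 17.7, "35-44": 19.2,
                    "45-54": 14.2, "55-64": 15.7, "65+": 22.9}

# A post-stratification weight re-scales each respondent so that the weighted
# age distribution matches the population distribution.
weights = {group: population_share[group] / respondent_share[group]
           for group in respondent_share}

for group, w in sorted(weights.items()):
    print(f"{group}: weight {w:.2f}")   # e.g. a 65+ respondent would count about 7.6 times
```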

Further analysis followed a two-step approach. First, we explore differences in neighborhood satisfaction with each type of local urban amenities and public goods across 27 districts by using classical position measures calculated as the ratio of the standard deviation to the mean. We use position measures as data for each district or assessed category show a skewed distribution. Afterward, we calculate aggregate indicators of neighborhood satisfaction across districts in Lublin. We present results obtained by applying equal weighting to the assessed categories, as using an uneven weighting strategy based on public opinion polling has yielded almost identical district hierarchies and distances between assessed units (Maleszyk, 2019).
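A minimal sketch of the two analysis steps described above, using invented district-level ratings (the real data are not reproduced in this excerpt): per-category variability across districts expressed as the ratio of the standard deviation to the mean, followed by an equally weighted aggregate satisfaction indicator for each district.

```python
# Minimal sketch of the two-step analysis described above, with invented scores.
# district_scores[d][c] = mean 1-7 rating of category c in district d (hypothetical values).
import statistics

district_scores = {
    "District A": {"green areas": 5.1, "street condition": 4.2, "cleanliness": 4.8},
    "District B": {"green areas": 3.9, "street condition": 3.1, "cleanliness": 4.0},
    "District C": {"green areas": 5.6, "street condition": 4.9, "cleanliness": 5.2},
}

# Step 1: variability of each category across districts, expressed as the
# ratio of the standard deviation to the mean.
categories = next(iter(district_scores.values())).keys()
for cat in categories:
    values = [scores[cat] for scores in district_scores.values()]
    ratio = statistics.stdev(values) / statistics.mean(values)
    print(f"{cat}: SD/mean = {ratio:.2f}")

# Step 2: aggregate neighbourhood-satisfaction indicator per district,
# applying equal weights to every assessed category.
for district, scores in district_scores.items():
    print(f"{district}: aggregate = {statistics.mean(scores.values()):.2f}")
```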


URL: https://www.sciencedirect.com/science/article/pii/B9780323851510000117

Storm resilience and sustainability at the Port of Providence, USA

Richard Burroughs, Austin Becker, in Maritime Transport and Regional Sustainability, 2020

4 Survey

We conducted online surveys of port businesses to determine what actions had been taken at the firm level to create a more resilient and hence sustainable port. We were able to involve a total of 17 private firms in our work. They included seven handling petroleum, five handling recycled metal, and four handling salt. In 2015 these categories of cargoes accounted for 90% of the cargo volume for the port (U.S. Army Corps of Engineers, 2016).

Eleven businesses reported owning their property and six reported their operations as independently operated. Seven businesses stated they have 1–19 employees, five businesses stated 20–99 employees, and two businesses reported over 100. Based on stakeholder responses, total employment across the businesses surveyed ranges from roughly 600 to as many as 2000 workers.

Nine businesses have more than 100 unique customers (individual purchasers), while 12 stated 100 or more businesses rely on their services. This suggests a sizeable supply chain effect if port businesses were impacted, with port products reaching many customers and businesses throughout the hinterland.

Businesses require access to land and sea corridors to be effective participants in the supply chain (Fig. 1). Nine of our respondents to the initial survey depend on access to the shipping channel, and of those, six require the channel to be maintained at deep draft, or 35 feet (10.7 m) to 40 feet (12.2 m). Seven require access to Route I-95 and five require the railroad line to move cargo into and out of the port area. Vessel calls per business range from 15 to 250 per year. At least one representative stated that if the 40-ft channel were reduced (to 30 or 20 ft), business could still be conducted with smaller vessels, but at a higher cost to the business.


Fig. 1. This figure shows that 9 out of 15 businesses state that they could not do business without access to the 40-foot-deep shipping channel.

Storm preparedness can be measured by individual firm investments in physical reconfiguration at the business. Fig. 2 shows that many firms have backed up computer systems, installed emergency generators, and taken wind/flood proofing actions on site. Less commonly, firms have invested in raising electrical systems or moving to less flood-prone areas.


Fig. 2. Most businesses have backed up computer data, attended a meeting on hurricane preparedness, and identified an off-site location to store equipment or products; however, businesses have in general not created prestorm service agreements to facilitate rapid cleanup, nor raised electrical systems above storm surge levels (~ 15 ft).

Planning by individual firms (Fig. 2) includes identifying offsite locations for equipment and cargoes, and developing hazardous material as well as business recovery plans. Only two firms have created structural stability analyses. Structural stability of piers must be maintained if cargo is to be handled after a storm. In addition, firms have completed meetings, inundation maps, and prestorm contracting. In prestorm contracting, waterfront businesses can identify debris removal and other needed activities in advance of the event.

Subsequent to the survey, we held a workshop with stakeholders, including businesses as well as government, to gain further insights (Becker et al., 2017; Becker, 2017; see also www.portofprovidenceresilience.org).


URL: https://www.sciencedirect.com/science/article/pii/B9780128191347000034

Planning

Tom Tullis, Bill Albert, in Measuring the User Experience (Second Edition), 2013

3.4.3 Online Surveys

Many UX researchers think of online surveys strictly as a way of collecting data about preferences and attitudes, and as belonging firmly in the camp of market researchers. This is no longer the case. For example, many online survey tools allow you to include images, such as a prototype design, within the body of the survey. Including images within a survey will allow you to collect feedback on visual appeal, page layout, perceived ease of use, and likelihood to use, to name just a few metrics. We have found online surveys to be a quick and easy way to compare different types of visual designs, measure satisfaction with different web pages, and even gauge preferences for various types of navigation schemes. As long as you don’t require your participants to interact with the product directly, an online survey may suit your needs.

The main drawback of online surveys is that the data received from each participant are somewhat limited, but that may be offset by the larger number of participants. So, depending on your goals, an online survey tool may be a viable option.

Interacting with Designs in an Online Survey

Some online survey tools let participants have some level of interaction with images. This is exciting because it means you can ask participants to click on different areas of a design that are most (or least) useful or where they would go to perform certain tasks. Figure 3.3 is an example of a click map generated from an online survey. It shows different places where participants clicked to begin a task. In addition to collecting data on images, you can also control the time images are displayed. This is very helpful in gathering first impressions of a design or testing whether participants see certain visual elements (sometimes referred to as a “blink test”).


Figure 3.3. An example of a click heat map created with the Qualtrics survey tool.
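Survey tools such as Qualtrics generate click maps like Figure 3.3 automatically; the sketch below only illustrates the underlying aggregation, binning hypothetical (x, y) click coordinates into a coarse grid whose counts could then be rendered as a heat map. The coordinates and cell size are invented for the example.

```python
# Illustrative only: bins hypothetical click coordinates into a coarse grid,
# the kind of aggregation behind a click heat map such as Figure 3.3.
from collections import Counter

clicks = [(120, 80), (125, 84), (300, 210), (302, 215), (310, 220), (40, 400)]
CELL = 50  # grid cell size in pixels

heat = Counter((x // CELL, y // CELL) for x, y in clicks)

for (col, row), count in sorted(heat.items()):
    print(f"cell ({col}, {row}): {count} click(s)")
```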


URL: https://www.sciencedirect.com/science/article/pii/B9780124157811000030

Risks, Opportunities, and Risky Opportunities: How Children Make Sense of the Online Environment

Leslie Haddon, Sonia Livingstone, in Cognitive Development in Digital Contexts, 2017

Aggressive Communication, Harassment, and Cyberbullying

While the EU Kids Online survey had concentrated on cyberbullying as a key risk, following the literature on this topic, the EU Kids Online European qualitative study broadened the focus to cover a range of negative forms of online interaction, including various forms of aggression (Černíková & Smahel, 2014). The rationale for this was that children would talk about these experiences of aggression, which were important to them, but they did not necessarily think about the experiences as constituting “cyberbullying.” For example, Mohammed was among a number of children who objected to others swearing at him and, like his European counterparts, he felt angry about this rather than upset.

Mohammed: It was from this kid on this game that I was playing… […] he was using bad language to me and towards my friends (…) so I reported him and we got him banned straightaway.

Interviewer: So he was on this chat thing you can do when you’re on games? Okay. Was it a surprise, or have you had things like that in the past?

Mohammed: Yes, it was a surprise to me because nobody had spoken to me like that. I wasn’t upset. I was just like angry because he was being rude and I hadn’t done anything to him.

(Boy, 10)

But also it is not just strangers who could be aggressive. Children can also be generally nasty and mean to each other (and in the examples below, threatening as well).

Jane: Or they’ll say ‘if you don’t BC someone I will haunt you’…and then they’ll say ‘They’ll be someone at the end of your bed and he will come and chop your head off’.

Linda: It's like on X-Factor [TV competition] …Jill wrote a BC…like if you don’t vote for XXXX then this little girl's gonna come to your bed and kill you…or something like that.

Jane: Or ‘You’ll have bad luck’.

Melanie: And the ‘You won’t get a boyfriend’ or something like that…how stupid!

(Girls, 12)

Sometimes the children felt this aggressive behavior occurred more online than offline precisely because it was not face-to-face, as when Josie (Girl, 12) noted: Because you don’t see the person's face, you don’t see the person's reaction, so you just…and you’re only typing…

The EU Kids Online survey showed that only 8% of UK children who go online had received nasty or hurtful messages online, slightly above the European average (Livingstone et al., 2010). But as with the other risk areas, once the children were asked in the qualitative research to comment on cases of which they knew, quite a few could give examples, including those they identified as a form of cyberbullying. Sometimes incidents that were perhaps more thoughtless than intentionally harmful got out of hand in the online world. For example, several interviewees noted how comments made online were sometimes taken the wrong way: “They don’t think that they’re saying anything mean but the other person finds it offensive” (Pamela, Girl, 15). Others observed how something that started out as teasing—either in terms of a comment made or a picture posted—could easily “escalate from being a joke to being quite abusive” (Nathaniel, Boy, 12).

The European qualitative report also considered one particular case of online aggressive behavior—breaking into people's accounts and pretending to be them, then making nasty comments about other people, or sending nasty messages to friends of the victim while pretending to be them. In the Net Children Go Mobile Survey, this was covered in the section on the misuse of personal data, where 9% of children had experienced someone breaking into their profile and pretending to be them. A few of those interviewed in the United Kingdom had experienced this themselves, or else knew people who had. This was indeed particularly awkward to deal with, as the victim had to try to repair the social damage by assuring friends that the message was not from them.

Overall, online aggression, wider and more common than cyberbullying, was experienced by a range of the children interviewed, and could come from peers as well as strangers. Several of those interviewed agreed with a theme from the cyberbullying literature that there might be more aggression online because of fewer inhibitions when social clues present in face-to-face action are removed. But it is equally clear that some incidents that do not start out as intentionally aggressive, and indeed may come from the teasing practices identified earlier, may escalate, reflecting previous research observations about the power of the Internet to amplify social dramas (boyd, 2010).


URL: https://www.sciencedirect.com/science/article/pii/B9780128094815000146

COVID-19 and the digital divide in higher education: A Commonwealth perspective

Lucy Shackleton, Rosanna Mann, in Libraries, Digital Information, and COVID, 2021

3 Survey design and methodology

In this context, the ACU conducted a digital engagement survey in May 2020 designed to better understand the short-term impacts of COVID-19 on staff, students, and university leaders across the Commonwealth. The survey elicited 258 responses from 33 countries, with 66% from Africa, 21% from Asia, 4% from Europe, 4% from the Pacific, and 4% from the Americas and the Caribbean. Of the respondents, 44% were academics, 25% were professional services staff, 17% were students, and 10% were senior leaders (deans or above).

As a self-selected online survey about digital engagement, there is inherent bias in the sample and caution should be taken in interpreting and extrapolating results. Given the survey design, tests for statistical significance were not undertaken and all findings are based on descriptive statistics. Findings presented in this chapter are a subset of the full results, which can be found online (https://www.acu.ac.uk/media/2345/acu-digital-engagement-survey-detailed-results.pdf).
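As a small illustration of the purely descriptive treatment described above, the sketch below tallies a handful of invented response records by region and by role; the actual response-level data are only available in the linked report.

```python
# Hypothetical response records; the real data are in the ACU report linked above.
from collections import Counter

responses = [
    {"region": "Africa", "role": "academic"},
    {"region": "Africa", "role": "student"},
    {"region": "Asia", "role": "professional services"},
    {"region": "Europe", "role": "senior leader"},
]

total = len(responses)
for field in ("region", "role"):
    counts = Counter(r[field] for r in responses)
    for value, n in counts.most_common():
        print(f"{field}: {value}: {n} ({100 * n / total:.0f}%)")
```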


URL: https://www.sciencedirect.com/science/article/pii/B978032388493800015X

Creating user-centered spaces in academic libraries

Gail M. Staines, in Universal Design, 2012

Gathering input

Academic library construction is seldom successful if done in isolation. What librarians think users need and what they actually need are usually two different things. As such, it is important to seek input from students and faculty – the primary users of library space.

Several strategies, such as online surveys, focus groups and opportunities for in-person responses, can be implemented to obtain feedback from students and faculty. For example:

Create a brief (no more than ten-question) online survey using an electronic survey product such as www.surveymonkey.com to ask questions in a variety of forms (e.g. multiple choice, ranking, Likert-type scale, etc.), including open-ended questions that allow written comments. Analyze survey results in terms of number of responses, percentages, written feedback themes and so on (a brief analysis sketch follows this list).

Conduct focus groups with students and with faculty. Schedule these during times when each group is available. Most focus groups run for no more than 90 minutes with a short break halfway through. You will need a person to facilitate group discussion and another individual to record the conversations. Conversations need not be recorded verbatim, but it is important to capture the main themes. A librarian or library staff member can learn how to facilitate a focus group as well as how to record the session. You can also check with faculty on campus as some professors have appropriate skills that they are more than willing to share. The focus group works best if offered in a comfortable environment with refreshments and snacks.

Create opportunities for student and faculty feedback by hosting information tables across campus, such as in the student union. Have flipcharts available for library users and non-users alike to share their comments and be on hand to explain what you are doing as well as to answer any questions.
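A minimal sketch of the analysis suggested in the first example above: counting responses, converting a multiple-choice item into percentages, summarising a Likert-type item as a mean, and collecting open-ended comments for later theme coding. The response records and field names are invented; a real export from a tool such as www.surveymonkey.com would be read from a CSV file instead.

```python
# Hypothetical exported responses from a short online survey.
from statistics import mean

responses = [
    {"preferred_space": "quiet study", "satisfaction": 4, "comment": "More outlets please."},
    {"preferred_space": "group rooms", "satisfaction": 5, "comment": ""},
    {"preferred_space": "quiet study", "satisfaction": 3, "comment": "Longer weekend hours."},
]

total = len(responses)
print(f"Number of responses: {total}")

# Multiple-choice item reported as percentages.
choices = [r["preferred_space"] for r in responses]
for option in sorted(set(choices)):
    print(f"{option}: {100 * choices.count(option) / total:.0f}%")

# Likert-type item (1-5) summarised as a mean score.
print(f"Mean satisfaction: {mean(r['satisfaction'] for r in responses):.1f}")

# Open-ended comments collected for later theme coding.
comments = [r["comment"] for r in responses if r["comment"]]
print("Comments to code for themes:", comments)
```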

One of the more creative ways of engaging students in the project design is to make concept design for the library a part of a course. Michelle Twait (2009) did just that when she created her “Library as Place” course. Throughout the course, students were required to design their favorite study space, tour libraries, participate in “visioning workshops” and then develop a library concept plan. Course outcomes were greater than expected. As Twait comments, “The students brought creativity and imagination to this project; questioned accepted practice and tradition; were able to see the library with fresh eyes; and saw only possibility and potential without being bogged down by budget constraints.”

Other feedback gathering strategies include asking a few students and faculty from different disciplines to serve on the library project planning team; and making presentations to both student and faculty groups including student government, faculty senate, department meetings, as well as in front of non-academic units such as business and finance, the president's executive committee and the institution's board of trustees. If your institution has very active alumni support, consider providing information sessions at alumni events such as homecoming.

It is important to obtain input on library renovation and new construction from those who will use it the most – students and faculty. Otherwise you run the risk of creating a space you think they need when, in fact, a different design could have been more effective.


URL: https://www.sciencedirect.com/science/article/pii/B978184334633350002X

What is the internet survey method?

An online survey is a structured questionnaire that your target audience completes over the internet, generally by filling out a form. Online surveys can vary in length and format.

What are the 4 types of survey methods?

Types of surveys:
Online surveys: One of the most popular types is the online survey. …
Paper surveys: As the name suggests, this survey uses the traditional paper-and-pencil approach. …
Telephone surveys: Researchers conduct these over the telephone. …
One-to-one interviews: …

What sampling method is used for an online survey?

Convenience sampling occurs where a survey is posted on a website and all visitors to that site are invited to respond, or when an invitation to participate is circulated via, for example, online lists or Twitter.

What method is used for surveys?

The 10 most common survey methods are online surveys, in-person interviews, focus groups, panel sampling, telephone surveys, post-call surveys, mail-in surveys, pop-up surveys, mobile surveys, and kiosk surveys.