Introduction
Online tutorials provide an accessible way for students to develop skills and learn independently at their own pace. During the COVID-19 pandemic, as higher education institutions transitioned to online learning, creating online information literacy tutorials became both a timely and worthwhile endeavor. At Brandon University, a regional, primarily undergraduate Canadian institution, online tutorials would provide students with remote and asynchronous access to library instruction. Because the university serves a predominantly commuter student body, the tutorials would remain a valuable asset post-pandemic, contributing significantly to the library’s instructional offerings.
In the summer of 2020, I created and revised four information literacy tutorials as part of a research study on engagement, learning, gamification, and user experience (UX). I selected gamification, “the use of game design elements in non-game contexts,” as a tool to increase student engagement with the tutorials and enhance student users’ experiences (Deterding et al., 2011, p. 10). I first tested non-gamified versions of the tutorials before incorporating gamification elements into subsequent versions.
I aimed to answer the following research questions about Brandon University students:
How does the student user’s experience of gamified tutorials compare to non-gamified tutorials?
What is the student user’s experience of online information literacy tutorials?
To assess participants’ experiences, I used two validated questionnaires: the User Experience Questionnaire Short Version (UEQ-S) (Schrepp, 2023, p. 2) and the Visual Aesthetics of Website Inventory: Short Version (VisAWI-S) (Moshagen & Thielsch, 2012), as well as a user survey I designed.
Literature Review
This literature review discusses aspects of UX, including usability and visual aesthetics. It focuses on the two validated questionnaires whose short form versions I used in this study: the UEQ and VisAWI. Additionally, it examines research on online library instructional resources and the use of the UEQ and VisAWI in this context.
Usability
The International Organization for Standardization defines usability as the “extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” (2018, 3.1.1). Originally, companies designed usability questionnaires to analyze the potential benefits of new products or systems before investing time and money into their development (Chung & Sahari, 2015). Most usability questionnaires aim to assess how effectively users learn to use a system or to evaluate a product’s usability. Usability testing has historically focused on functionality, but researchers are increasingly recognizing the importance of including aesthetics and hedonic value when evaluating perceived usability (Chung & Sahari, 2015). However, when Chung and Sahari analyzed and compared eleven usability questionnaires, they observed that none considered aesthetics.
Extensive experience with interactive devices has led to heightened user expectations of intuitive interfaces and bold visual designs (Hinderks et al., 2018; Schrepp et al., 2014). Research has increasingly recognized that “user’s needs go beyond usability” and that the entire UX, including visual aesthetics and emotion, should be considered (Moshagen & Thielsch, 2010, p. 691; 2012).
User Experience
The International Organization for Standardization defines UX as a “user’s perceptions and responses that result from the use and/or anticipated use of a system, product or service” (2018, 3.2.3). UX holistically considers users’ reactions during product interactions and combines usability criteria with “hedonic quality criteria, like stimulation, fun-of-use, novelty, emotions or aesthetics” (Schrepp et al., 2014).
Questionnaires are a common, efficient, and inexpensive method for collecting user data (Schrepp et al., 2014; see also Díaz-Oreiro et al., 2019; Hinderks et al., 2018; Laugwitz et al., 2008). Researchers can distribute questionnaires easily to large groups of users and allow participants to respond at their convenience (Díaz-Oreiro et al., 2019; Hinderks et al., 2018). Standardized questionnaires are reliable, validated, quantitative measurements that allow for efficient data analysis (Díaz-Oreiro et al., 2019; Hinderks et al., 2018; Laugwitz et al., 2008).
UX questionnaires are a cost-effective way to determine and prioritize design issues (Lund, 2001). As their use has become more widespread, users are less likely to participate when questionnaires are lengthy and time-intensive (Hinderks et al., 2018). Additionally, users’ experiences are subjective and can vary extensively (Chung & Sahari, 2015; Hinderks et al., 2018; Schrepp et al., 2014). Although companies often overlook subjective reactions to product usability, these reactions are strongly correlated with aspects of UX, such as user behaviour and purchase decisions.
Díaz-Oreiro et al. (2019) analyzed 553 academic studies to examine the use of three standardized UX questionnaires: AttrakDiff, meCUE, and UEQ. The AttrakDiff questionnaire was the most frequently used, likely because it was developed first in 2003, followed by the UEQ in 2008 and meCUE in 2013. Although only twenty studies were conducted in North America, seven were recent, having been published in 2017 or 2018. This suggests an increased use of standardized UX questionnaires in North American research. While all three questionnaires saw increased usage, the UEQ rose more rapidly, suggesting that it was becoming the preferred questionnaire for researchers.
User Experience Questionnaire (UEQ)
In 2008, Laugwitz et al. developed the User Experience Questionnaire (UEQ) as a quick and comprehensive assessment tool that allows users “in a very simple and immediate way to express feelings, impressions, and attitudes that arise when experiencing the product under investigation” (p. 64; see also Schrepp et al., 2014). Since its inception, the UEQ has been widely used in many studies, and it has become “an established and frequently used questionnaire for the evaluation of the UX of interactive products” (Hinderks et al., 2018, p. 374). In their review of 553 standardized UX questionnaire studies, Díaz-Oreiro et al. (2019) found that 200 of these studies employed the UEQ.
The UEQ collects immediate and spontaneous feedback, rather than responses requiring “deeper rational analysis,” and is based on a theoretical research framework that “distinguishes between perceived ergonomic quality, perceived hedonic quality and perceived attractiveness of a product” (Laugwitz et al., 2008, p. 65). Ergonomic or pragmatic quality refers to task-oriented facets, while hedonic qualities refer to non-task-oriented facets such as product aesthetics, or whether a product is perceived to be boring or interesting (Díaz-Oreiro et al., 2019; Laugwitz et al., 2008). The UEQ evaluates both hard, pragmatic usability criteria, and soft, hedonic UX criteria, which are of “similar relevance for the end user” and have not been adequately represented in other UX questionnaires (Laugwitz et al., 2008, p. 73; Schrepp et al., 2014).
The UEQ comprises six scales (Hinderks et al., 2018, p. 374; see also Schrepp et al., 2014):
attractiveness, or “the overall impression of a product”
perspicuity, or “how easy it is to learn or understand”
efficiency, or whether users are able to “solve their tasks without unnecessary effort”
dependability, or how predictable or secure the user feels in the interaction
stimulation, or how “exciting and motivating” it is to use the product
and novelty, or how innovative or creative the product is
Validated standardized questionnaires often use “Likert scales or semantic differentials” (Díaz-Oreiro et al., 2019, p. 1), or “two terms that describe the opposite ends of a semantic dimension” (Schrepp et al., 2021, p. 1), to gather users’ perceptions of a product’s characteristics and its impact on the user (Laugwitz et al., 2008). The UEQ asks participants to rate each item on a seven-point Likert scale anchored by two opposing semantic differentials (Hinderks et al., 2018; Schrepp et al., 2021). Users can complete this questionnaire in 3–5 minutes, and researchers can analyze the results using the free online handbooks and Excel sheets available from the UEQ: User Experience Questionnaire website (Schrepp et al., 2014). The questionnaire is available in over thirty languages, and native English speakers translated and tested the English version (Laugwitz et al., 2008).
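As a minimal illustration of this scoring, the sketch below converts a 1–7 response on a single semantic-differential item to the -3 to +3 item value used in the UEQ analysis tools. It assumes the negative pole is anchored at 1 and the positive pole at 7, as the items appear in the Appendix; the official UEQ Excel sheets perform the equivalent transformation.

```python
def to_item_value(response: int) -> int:
    """Map a 1-7 rating onto the -3 to +3 UEQ item scale (assumes the
    negative pole is at 1 and the positive pole at 7)."""
    if not 1 <= response <= 7:
        raise ValueError("UEQ-S responses must be between 1 and 7")
    return response - 4

# Example: a rating of 6 on obstructive/supportive becomes +2.
print(to_item_value(6))  # 2
```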
Schrepp et al. (2014) examined the reliability and validity of the UEQ “in 11 usability tests with a total number of 144 participants and an online survey with 722 participants” (p. 3). The authors stated that the studies showed consistent reliability of the UEQ scales, as measured by Cronbach’s Alpha, and good construct validity (Schrepp et al., 2014; Schrepp, 2023). Cronbach’s Alpha measures “the consistency of a scale” and “indicates that all items in a scale measure a similar construct” (Schrepp, 2023, p. 10). In 2022, Schankin et al. published a paper evaluating the reliability and validity of the UEQ through two studies, noting that “only a few studies reported about its reliability and validity so far” (p. 2). They found that “the UEQ can measure hedonic and pragmatic quality aspects as well as an overall attraction of a product in a reliable and valid way” (p. 10).
Hinderks et al. (2018) created a short version of the UEQ, known as the UEQ-S, to address scenarios in which completing the full questionnaire could be tedious and potentially impact the quality of responses. The UEQ-S omits perceived attractiveness, one of the two main item sets in the full version of the UEQ. The UEQ-S consists of eight items that equally measure pragmatic and hedonic qualities, with the total mean value used as a measure of the overall UX. However, researchers should interpret this value carefully, as the importance of pragmatic qualities in relation to hedonic qualities may differ in practice from the ratio used in the UEQ-S overall mean value (Schrepp, 2023). Schrepp et al. (2017) conducted an evaluation study comparing the UEQ-S to the full UEQ to validate the short version. They found that “the consistency of the pragmatic quality and hedonic quality scales was reasonably high” and “seems to approximate the long version expectedly well” (p. 106).
Researchers have combined methods “in innovative ways to further our understanding of user experience” (O’Brien, 2016, p. 27). Díaz-Oreiro et al. (2019) found that 61.5 percent of the studies they reviewed employed more than one evaluation method. Similarly, Laugwitz et al. (2008) stated that “in general, user questionnaires have to be combined with other quality assessment methods to achieve interpretable results” (p. 63). For example, Aufderhaar et al. (2019) used both the UEQ and VisAWI-S questionnaires in their study of potential gender differences in users’ perceptions of three websites.
Visual Aesthetics
The importance of aesthetics, or beauty, has been recognized since antiquity (Lavie & Tractinsky, 2004). Scholars have debated whether aesthetics is an objective or subjective property, and current theories tend to include both perspectives. Psychologists and researchers suggest that people perceive objects as more beautiful when they are easier to mentally process (Moshagen & Thielsch, 2010). Research on the relationship between visual aesthetics and perceived usability has increased, with findings suggesting that visual aesthetics may improve perceived usability (Lavie & Tractinsky, 2004; Moshagen & Thielsch, 2010). While primarily an aesthetic attribute, simplicity is fundamental to “creating usable systems” and enhances the connection between usability and aesthetics (Lavie & Tractinsky, 2004, p. 277).
Evidence shows that beauty is a significant determinant of website preference, and aesthetics are crucial to user satisfaction (Lavie & Tractinsky, 2004). Although fundamentally important, aesthetic considerations are often overlooked or undervalued in studies of human-computer interaction. However, it is essential that visual aesthetics in these interactions are “adequately assessed” (Moshagen & Thielsch, 2012, p. 1305).
Visual Aesthetics of Website Inventory (VisAWI)
Moshagen and Thielsch (2010) created the VisAWI as a validated instrument for measuring perceived visual aesthetics of websites. They based their questionnaire construction on interviews with experts and users, aiming to make it applicable to both research and industry contexts. They created a model with four facets: simplicity, diversity, colorfulness, and craftsmanship. While these facets collectively represent perceived visual aesthetics, they “are still distinguishable from each other” and independently respond to changes in website design properties (p. 705). Three studies were conducted to determine the “various types of validity” of the VisAWI (p. 693). Overall, the evidence from these studies suggested that the VisAWI is a reliable measure “for a precise assessment of perceived visual aesthetics of websites” (p. 706).
A limitation of the VisAWI is that although it takes less than three minutes to complete, it may still be too lengthy in specific research settings (Moshagen & Thielsch, 2010, 2012). Moshagen and Thielsch (2012) constructed a shortened version of the VisAWI (VisAWI-S) by selecting one item that best represented each of the four facets: simplicity, diversity, colorfulness, and craftsmanship. While the VisAWI-S has lower reliability than the full version, it is “still sufficiently reliable for most purposes” and provides “a very close approximation to perceived visual aesthetics as measured by the full VisAWI” (p. 1310). Although both versions were originally created in German, the English version is considered “grammatically and semantically equivalent” (p. 1307).
Recently, researchers have adapted versions of the VisAWI in Arabic, Farsi, and Indonesian (Abbas et al., 2022; Saremi et al., 2023; Sadita et al., 2022). The Indonesian version examined “graphical interface aesthetics” in video games and found the VisAWI to be reliable and valid in this context (Sadita et al., 2022, p. 382). The VisAWI has also been recognized as a standardized UX questionnaire (Hinderks et al., 2019; Schrepp et al., 2021).
Online Library Instructional Resources
Online information literacy resources have expanded the range of library instruction methods available to students and become increasingly popular (Fernández-Ramos, 2019; Walters et al., 2014). With the growth of online and distance learning in higher education, online information literacy tutorials help to address these student needs (Blummer & Kritskaya, 2009; Bury & Oud, 2005; Fernández-Ramos, 2019; Su & Kuo, 2010; Zhang, 2006; see also Goodsett, 2020; Olsen & Harlow, 2022). Online tutorials are accessible anytime and anywhere, allowing students the flexibility to learn independently at their own pace and revisit the content as needed (Fernández-Ramos, 2019; Su & Kuo, 2010; Walters et al., 2014; Zhang, 2006; see also Bury & Oud, 2005). They can also be updated and revised over time, providing long-lasting value to students (Goodsett, 2020). As librarian capacity to provide in-person instruction is increasingly limited, online tutorials offer an effective time-saving alternative (Fernández-Ramos, 2019; see also Walters et al., 2014).
Online library tutorial interfaces must be well designed in order to sustain user interest and engagement (Zhang, 2006). During tutorial development, creators should identify objectives based on pedagogical principles or standards to ensure a focus on teaching and learning (Blummer & Kritskaya, 2009; Walters et al., 2014; Zhang, 2006). Effective tutorial designs consistently apply website design guidelines for elements such as text, headings, color, and visuals (Walters et al., 2014; Zhang, 2006). Franklin et al. (2021) also emphasized the importance of incorporating equity, diversity, and inclusion considerations when designing online learning tools to reflect the varied learning experiences of students.
Text is a fundamental element of online instruction (Zhang, 2006). Designers can use text font, size, and color to emphasize specific words or sections. Research shows that limiting text to short paragraphs and minimizing content per page can improve readability. Applying color consistently reinforces the content’s framework, helping students understand the organization and relationships between content while effectively highlighting information.
Visual images can aid the learning process and transform “a boring, lifeless web tutorial into a set of compelling and interesting online instructions” (Zhang, 2006, p. 299). They can help to break up sections of static text and reduce the cognitive load for users. Images “should be carefully designed to capture students’ attention, orient them for learning, and encourage them to actively participate in the learning process” (p. 297). Using multimedia has become increasingly popular in online library tutorials as a method to engage and motivate students (Su & Kuo, 2010; Zhang, 2006; see also Olsen & Harlow, 2022). While videos can enhance learning, overuse can lead to internet bandwidth constraints for users (Su & Kuo, 2010; Zhang, 2006).
Immediate feedback is essential in online interactive tutorials, as it enables students to assess their progress (Zhang, 2006; see also Blummer & Kritskaya, 2009; Walters et al., 2014). Quizzes are an interactive and engaging method for providing feedback while evaluating students’ understanding of the tutorial content (Blummer & Kritskaya, 2009; Walters et al., 2014; see also Zhang, 2006). Multiple-choice questions are common, with research suggesting that “well-crafted multiple-choice questions can reliably and validly measure higher order thinking skills” (Goodsett, 2020, p. 4).
Practitioners can evaluate the usability of online library tutorials through pilot implementations, usability tests, surveys, and anecdotal observations, with user comments being “one of the easiest methods of evaluation” (Blummer & Kritskaya, 2009, p. 209). Student feedback “is commonly recognized to help instructors improve” and revise online information literacy tutorials (Su & Kuo, 2010, p. 327). A small sample size of users is sufficient for testing, as these users “are likely to encounter all of the most significant problems” (Bury & Oud, 2005, p. 56; Franklin et al., 2021). User feedback is invaluable for tutorial content design and can significantly improve student learning (Bury & Oud, 2005; Franklin et al., 2021).
UEQ and VisAWI Application in Libraries
Few studies have applied the UEQ or VisAWI in academic library research. Yi and Kim (2021) used the UEQ and the VisAWI to evaluate participants’ museum experiences, both in person and with wearable mixed reality technology. Saleh et al. (2022) employed the UEQ to evaluate the experiences of participants from several Jordanian academic institutions with the Moodle learning management system. Kuhar and Merčun (2022) used the UEQ-S, alongside an emotion-related questionnaire, to compare the UX of two digital libraries. They noted that hedonic qualities “have often been overlooked in the evaluation of digital libraries” and found that both questionnaires worked well in the digital libraries context (p. 3). They also observed that perceived aesthetics was “an aspect that was missing in the researcher’s interpretation of the results” (p. 10).
The studies above applied the UEQ, and in one instance the VisAWI, to evaluate the UX of a learning management system, mixed reality in museums, and digital library sites. However, the UEQ has seen little use in academic library UX research more broadly. While studies of online library instruction tutorials have occurred, they have primarily focused on usability rather than the user’s experience. To date, no studies have employed the UEQ or VisAWI to measure the UX of online library instruction tutorials. Additionally, the UX of library tutorials created with LibWizard, a library-specific software tool, has not been studied in the current literature.
Method
I conducted this study at Brandon University, a small regional university with approximately 3,000 students. It consisted of two phases: a User Testing phase and an Implementation phase. During the User Testing phase, I tested the tutorials iteratively over two rounds, with participants completing two tutorials and corresponding questionnaires in each round. The first round took place between August 26 and September 6, 2020, and the second round between September 15 and 23, 2020. In the Implementation phase, I made the final gamified tutorials available to all university students during the fall 2020 semester. Students could choose to complete one to four tutorials with the associated questionnaires or finish the tutorial(s) without participating in the study. Each tutorial, including time for questionnaire completion, was expected to take 30–45 minutes. The study received approval from the Brandon University Research Ethics Committee.
In the User Testing phase, I recruited students through emails sent to the university student population, as well as targeted emails to teaching faculty who had previously expressed interest in the tutorials. While I included their qualitative feedback, I excluded faculty from the quantitative data analysis as none participated in both rounds. I also excluded incoming first-year students, as they were the primary audience for the Implementation phase. To incentivize participation, I offered students a ten-dollar gift card as compensation for each completed round.
For the Implementation phase, I recruited participants by sending emails to the university student population. Although these tutorials were particularly relevant to first-year students, all students were welcome to participate. The tutorials were also incorporated into a health studies course, where students received credit for completing the tutorials. Participation in the research study remained voluntary, as students could complete the tutorials without needing to submit the research questionnaires. A web page featured on the university library homepage also linked to the tutorials.
I based the learning objectives of the information literacy tutorials on the SCONUL Seven Pillars of Information Literacy model (SCONUL Working Group on Information Literacy, 2011). The intended users of the tutorials were university undergraduate students, particularly first-year students. The LibWizard program displays questions alongside embedded content, so students can actively apply what they have learned and interact with the tutorials (Cano, 2017). I incorporated multiple-choice questions throughout the tutorials and designed the layout to be clear and easy to follow. Participants could progress forward in the tutorials only after correctly answering the required questions on the page. The text was consistently black and formatted in 20-point Arial font, with bolding used to emphasize key words and phrases. The overall backgrounds of the tutorials were differentiated by color for aesthetic purposes and were not fundamental to the tutorial design. I employed visuals when relevant to the learning content.
The first round of the User Testing phase assessed the UX of non-gamified tutorials (see example in fig. 1). I used participants’ responses to the user survey to clarify, simplify, and enhance the design of the second round tutorials. I also added gamification elements to the second versions of the tutorials, including a fictional librarian guide, a student character needing help with a particular information literacy problem, themed tutorial topics, points, and badges in the form of certificates of completion (see example in fig. 2).
I incorporated a fictional character, Libby the Librarian, into all of the tutorials to provide immediate feedback on student answers (see fig. 3). Each of the four tutorials had a unique research topic: privacy and security in the digital age (Identifying Resources), outer space and space exploration (Searching for Resources), tropical rainforest conservation (Evaluating Resources) (see fig. 4), and various monarchies throughout history (Citing Resources). They also featured their own fictional student character who needed help with a specific information literacy skill (see fig. 4). These student characters reflected the diverse international student population at Brandon University.
During the Implementation phase, students were assigned to faculty teams based on the degree programs they indicated in response to the question: Which faculty is your current degree program? Points were then attributed to the corresponding faculty or school teams. Students earned points for correctly answering questions on their first attempt, and bonus points were awarded for questions where participants applied their learning to assist the student character. A leaderboard tracked faculty team progress as an additional competitive gamification element. Although a trophy for the winning faculty team was originally planned, it was canceled due to the COVID-19 pandemic.
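The following sketch illustrates, under stated assumptions, how this kind of team-points scheme could be tallied: points for a correct first attempt, a bonus when the question involves helping the student character, and totals aggregated by faculty team for the leaderboard. The point values, team names, and function names are invented for illustration; the study does not report its exact scoring rules.

```python
from collections import defaultdict

POINTS_FIRST_TRY = 10      # assumed value, not reported in the study
BONUS_HELP_CHARACTER = 5   # assumed value, not reported in the study

def score_answer(correct_first_try: bool, helps_character: bool) -> int:
    """Points for one question: base points only on a correct first attempt,
    plus a bonus when the question asks the learner to help the student character."""
    points = POINTS_FIRST_TRY if correct_first_try else 0
    if correct_first_try and helps_character:
        points += BONUS_HELP_CHARACTER
    return points

# (faculty team, correct on first attempt, question helps the student character)
answers = [("Science", True, False), ("Arts", True, True), ("Science", False, False)]

leaderboard = defaultdict(int)
for team, first_try, helps in answers:
    leaderboard[team] += score_answer(first_try, helps)

print(dict(leaderboard))  # {'Science': 10, 'Arts': 15}
```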
I measured UX using two validated questionnaires: UEQ-S and VisAWI-S (see Appendix). I chose the short form versions for their brevity and validity to maximize participation while using multiple questionnaires. Because the UEQ-S does not include a measurement of perceived attractiveness, one of two sets of items in the full UEQ, I chose to use the VisAWI-S as an additional questionnaire to assess the perceived attractiveness of the tutorials. I also designed a user survey to collect participant comments about the tutorials. I integrated these questionnaires and user survey at the end of the tutorials.
Results
Participants
In the User Testing phase, some students who expressed interest in participating did not complete their assigned tutorials, resulting in an unequal distribution of participants across the tutorials (see Table 1). I excluded participants who did not complete the second round from the quantitative data analysis. In the Implementation phase, some participants repeated tutorials to improve their grades, so the total number of participants was unknown.
Table 1. Number of participants and tutorials completed in the User Testing and Implementation phases.

| | Identifying Resources | Searching for Resources | Evaluating Resources | Citing Resources | Total |
|---|---|---|---|---|---|
| User Testing # of Participants (Completed both rounds) | 9 | 11 | 11 | 4 | 32 |
| Implementation # of Tutorials Completed | 47 | 32 | 21 | 24 | 124 |
User Testing Phase: User Survey Feedback
Participants gave qualitative feedback on the tutorial design and their experience with the tutorials, including specific errors and issues, through a user survey.
The user survey questions that received relevant responses are:
What did you like the most about taking this tutorial?
What did you like the least about taking this tutorial?
Was there anything you preferred or missed from the previous version of this tutorial?
Do you have any additional comments you would like to make about this tutorial?
I anticipated that embedding YouTube videos in the tutorials would offer a welcome break from reading the textual content. However, some users found watching multiple videos time consuming and suggested adding closed captions to improve accessibility. In response, I reduced the number of videos and added instructions for enabling closed captions to the remaining videos. One participant in the second round provided positive feedback on the closed captions change.
The first round tutorials contained pop culture references as answer options for multiple choice questions. These inclusions received negative feedback, primarily from faculty members. After removing most of the references, one participant remarked that they missed the amusing pop culture answers.
Although the first version of the Evaluating Resources tutorial was considered informative and interactive, three participants mentioned that some sections were too easy or “not very complicated.” Participants also found the article at the end of the tutorial, to which they applied their evaluating skills, too lengthy. I removed this article and embedded articles throughout instead. Participants found the second version “more enjoyable to complete,” with improved pacing.
The first version of the Searching for Resources tutorial had multiple issues. Several databases could not be embedded into the tutorial, so students needed to open them individually. One participant commented that “the opening and closing of tabs got tiring.” These links sometimes opened in the same tab as the tutorial, forcing participants to restart the tutorial from the beginning. Participants found the instructions complicated and hard to follow, and they struggled with using specific subject filters and finding the correct answers in their searches. In the second version of the tutorial, I minimized the use of database links, clarified instructions, and added more instructional screenshots to demonstrate tasks such as locating subject filters. Participants described the second version as “a nice improvement” and noted that “the changes that were implemented helped a lot.”
Participants in the first round of the Citing Resources tutorial appreciated its step-by-step approach and “good, relatable examples throughout,” but they had difficulty focusing on the “very dense material” and found the amount of reading time-intensive. In the second version, I spread the text across more pages, with each page focusing on a specific citation style. Although feedback still mentioned the dryness of the tutorial content and that “it took quite a bit of time” to complete, participants felt the second version was “definitely an improvement over the old tutorial.” One participant commented, “The information was excellent. I wish I could print it out as a manual.”
User Testing Phase: UEQ-S
I analyzed the results of the UEQ-S using the UEQ handbooks and Excel sheets, which are freely available from the UEQ: User Experience Questionnaire website. The UEQ-S item values range from -3, the most negative, to +3, the most positive (Schrepp, 2023). For the UEQ-S data analysis, mean values greater than +0.8 indicate a positive evaluation, values between -0.8 and +0.8 indicate a neutral evaluation, and values less than -0.8 indicate a negative evaluation. In the following tables, the items represent both pragmatic and hedonic qualities of the user’s experience, with overall mean values calculated for each. The overall UEQ-S mean value of the eight items represents the overall UX (Hinderks et al., 2018).
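As a minimal sketch of this aggregation, assuming item values are already on the -3 to +3 scale, the code below computes the item means, the pragmatic and hedonic quality means (items 1–4 and 5–8), and the overall mean of all eight items, and classifies each with the ±0.8 thresholds described above. Classification is applied to the unrounded means, which is why a displayed 0.8 can be either neutral or positive (see the note under Table 2). The sample responses are placeholders, not the study’s data.

```python
from statistics import mean

def classify(value: float) -> str:
    """Apply the UEQ-S interpretation thresholds to an unrounded mean."""
    if value > 0.8:
        return "positive"
    if value < -0.8:
        return "negative"
    return "neutral"

# One row per participant, eight item values already on the -3..+3 scale.
responses = [
    [1, 2, 1, 2, 1, 1, 0, 1],
    [2, 1, 2, 1, 0, 2, 1, 0],
]

item_means = [mean(col) for col in zip(*responses)]  # mean per item across participants
pragmatic = mean(item_means[:4])   # items 1-4
hedonic = mean(item_means[4:])     # items 5-8
overall = mean(item_means)         # overall UEQ-S value

for label, value in [("pragmatic", pragmatic), ("hedonic", hedonic), ("overall", overall)]:
    print(f"{label}: {value:+.3f} ({classify(value)})")
```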
In the User Testing phase, I conducted paired t-tests to determine whether the differences between versions of the same tutorial were statistically significant. I found statistically significant differences for the Searching for Resources and Evaluating Resources tutorials, but no statistically significant differences between the Identifying Resources and Citing Resources tutorial versions.
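A paired t-test of this kind can be run with standard statistical software; the sketch below uses scipy.stats.ttest_rel on each participant’s overall UEQ-S mean for the two versions of one tutorial. The arrays are placeholder values for illustration, not the study’s data, and the conventional 0.05 significance threshold is assumed.

```python
from scipy import stats

# Each position holds one participant's overall UEQ-S mean for the first and
# second version of the same tutorial (placeholder values, not study data).
round1 = [0.1, -0.4, 0.3, 0.6, -0.1, 0.2, 0.5, 0.0, 0.4, 0.2, -0.2]
round2 = [1.4, 0.9, 1.2, 1.6, 1.0, 1.1, 1.5, 0.8, 1.3, 1.2, 0.7]

result = stats.ttest_rel(round1, round2)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A p-value below 0.05 would indicate a statistically significant difference
# between the two versions.
```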
In the first version of the Searching for Resources tutorial, all of the item mean values fell within the neutral range (see Table 2). The second version showed considerable improvement, with six of the eight items having mean values in the positive range. For the Evaluating Resources tutorial, the first version had only one item, clear, with a mean score in the positive range. In the second version, all items improved except for clear, which stayed in the positive range. Notable improvements occurred in the pragmatic qualities of supportive and easy, as well as in the hedonic qualities of exciting and interesting.
Table 2. User Testing phase—UEQ-S item mean values (R1 = Round 1, R2 = Round 2).

| Item | Quality | Negative | Positive | Identifying Resources (R1) | Identifying Resources (R2) | Searching for Resources (R1) | Searching for Resources (R2) | Evaluating Resources (R1) | Evaluating Resources (R2) | Citing Resources (R1) | Citing Resources (R2) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Pragmatic | Obstructive | Supportive | n +0.5 | +1.7 | n +0.1 | +1.3 | n +0.4 | +1.6 | +2.3 | +2.3 |
| 2 | Pragmatic | Complicated | Easy | +1.5 | +1.4 | n -0.2 | +1.6 | n +0.5 | +1.5 | +1.0 | +1.5 |
| 3 | Pragmatic | Inefficient | Efficient | +1.9 | +0.9 | n -0.4 | +1.4 | n +0.3 | n +0.7 | n +0.8 | +1.0 |
| 4 | Pragmatic | Confusing | Clear | +2.1 | +1.4 | n -0.5 | +1.4 | +1.4 | +1.0 | +1.3 | +1.0 |
| 5 | Hedonic | Boring | Exciting | +1.2 | +1.2 | n -0.1 | +1.2 | n -0.1 | +1.0 | n +0.5 | n +0.8 |
| 6 | Hedonic | Not interesting | Interesting | +1.3 | +1.3 | n +0.4 | +1.5 | n +0.2 | +1.2 | +2.0 | +1.0 |
| 7 | Hedonic | Conventional | Inventive | n +0.6 | +1.4 | n -0.2 | n +0.3 | n -0.7 | n +0.2 | -0.8 | n +0.3 |
| 8 | Hedonic | Usual | Leading edge | +0.8 | +0.8 | n +0.6 | n +0.7 | n +0.2 | n +0.8 | n +0.3 | n +0.5 |
n = neutral
Note: Since I rounded values to one decimal place, the same number may be neutral or positive. For example, some 0.8 values are neutral when the original value was below 0.80 and rounded up, while others are positive when the original value was above 0.80 and rounded down.
All of the second-round tutorial versions had positive overall UEQ-S mean values above 1 (see Table 3). The Identifying Resources tutorial had the highest overall UEQ-S mean values in each round, with participants rating both versions positively for pragmatic and hedonic qualities. The Citing Resources tutorial also garnered positive overall UEQ-S values for both tutorial versions. The Searching for Resources tutorial showed significant improvement, with its overall UEQ-S value increasing by 1.227. The Evaluating Resources tutorial also improved, with its overall UEQ-S value increasing by 0.727. None of the tutorial versions in the User Testing phase had any item or overall mean values in the negative range.
Table 3. User Testing phase—UEQ-S pragmatic, hedonic, and overall mean values (R1 = Round 1, R2 = Round 2).

| | Identifying Resources (R1) | Identifying Resources (R2) | Searching for Resources (R1) | Searching for Resources (R2) | Evaluating Resources (R1) | Evaluating Resources (R2) | Citing Resources (R1) | Citing Resources (R2) |
|---|---|---|---|---|---|---|---|---|
| Pragmatic Quality | +1.500 | +1.361 | n -0.250 | +1.432 | n +0.659 | +1.205 | +1.313 | +1.438 |
| Hedonic Quality | +0.972 | +1.167 | n +0.159 | +0.932 | n -0.091 | +0.818 | n +0.500 | n +0.625 |
| Overall UEQ-S Value | +1.236 | +1.264 | n -0.045 | +1.182 | n +0.284 | +1.011 | +0.906 | +1.031 |
n = neutral
User Testing Phase: VisAWI-S
The VisAWI-S has four questionnaire items, each representing one of the four facets: simplicity (everything goes together in this tutorial), diversity (the layout is pleasantly varied), craftsmanship (the layout appears professionally designed), and colorfulness (the color composition is attractive). The overall mean value can represent “a general factor of aesthetics,” but since there are only four items, they cannot be used to individually represent the facets (Thielsch & Moshagen, 2015, p. 15). Each item of the VisAWI-S is rated on a seven-point Likert scale, with 1 indicating strongly disagree, and 7 indicating strongly agree (Moshagen & Thielsch, 2012).
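As a minimal sketch of this scoring, assuming one 1–7 rating per item, the example below averages the four VisAWI-S items into the overall aesthetics value; the sample ratings are illustrative only.

```python
from statistics import mean

# One illustrative rating (1 = strongly disagree, 7 = strongly agree) per item.
ratings = {
    "simplicity": 6,      # "Everything goes together in this tutorial."
    "diversity": 5,       # "The layout is pleasantly varied."
    "colorfulness": 7,    # "The color composition is attractive."
    "craftsmanship": 6,   # "The layout appears professionally designed."
}

overall_aesthetics = mean(ratings.values())
print(f"VisAWI-S overall mean: {overall_aesthetics:.2f}")  # 6.00
```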
According to the VisAWI-S questionnaire, participants in the User Testing phase perceived an improvement in the visual aesthetics of each tutorial (see Table 4). The second versions of all the tutorials had VisAWI-S mean values exceeding 6, indicating that the aesthetics were generally pleasing, with notable improvements in the Searching for Resources and Evaluating Resources tutorials.
Table 4. User Testing phase—VisAWI-S overall mean values comparison.

| Tutorial | Round 1 | Round 2 |
|---|---|---|
| Identifying Resources | 5.98 | 6.15 |
| Searching for Resources | 5.05 | 6.09 |
| Evaluating Resources | 5.32 | 6.11 |
| Citing Resources | 6.00 | 6.38 |
Implementation Phase: UEQ-S
In this phase, participants completed the final versions of the tutorials, which I had slightly modified following the completion of the User Testing phase. A total of 124 tutorials were completed, with the numbers for each of the four tutorials shown in Table 1. The items supportive and leading edge received positive mean values in all of the Implementation phase tutorials (see Table 5). Three of the four tutorials had positive mean values for interesting, while the mean values for exciting were neutral across all tutorials. Inventive received the lowest mean values overall, with the Citing Resources tutorial having the only negative item mean value in the study (-1.3).
Table 5. Implementation phase—UEQ-S item mean values by tutorial.

| Item | Quality | Negative | Positive | Identifying Resources | Searching for Resources | Evaluating Resources | Citing Resources |
|---|---|---|---|---|---|---|---|
| 1 | Pragmatic | Obstructive | Supportive | +2.0 | +1.3 | +1.4 | +1.2 |
| 2 | Pragmatic | Complicated | Easy | +2.0 | +1.3 | +0.9 | n +0.8 |
| 3 | Pragmatic | Inefficient | Efficient | +1.0 | n +0.2 | n +0.2 | n +0.2 |
| 4 | Pragmatic | Confusing | Clear | +1.3 | n +0.1 | n -0.1 | n -0.2 |
| 5 | Hedonic | Boring | Exciting | n +0.7 | n +0.2 | n +0.6 | n +0.2 |
| 6 | Hedonic | Not interesting | Interesting | +1.1 | +1.0 | n +0.7 | +0.8 |
| 7 | Hedonic | Conventional | Inventive | n -0.7 | n -0.5 | n -0.6 | -1.3 |
| 8 | Hedonic | Usual | Leading edge | +1.1 | +0.8 | +0.9 | +0.9 |

n = neutral
In the Implementation phase, the UEQ-S overall mean value for the Identifying Resources tutorial was the most positive by a substantial margin (see Table 6). For the Identifying Resources tutorial, participants rated the pragmatic qualities positively, while they rated the hedonic qualities neutrally. The Searching for Resources, Evaluating Resources, and Citing Resources tutorials all received neutral scores for both pragmatic and hedonic qualities. These three tutorials also had neutral overall UEQ-S mean values, suggesting neutral user experiences.
Table 6. Implementation phase—UEQ-S pragmatic, hedonic, and overall mean values.

| | Identifying Resources | Searching for Resources | Evaluating Resources | Citing Resources |
|---|---|---|---|---|
| Pragmatic Quality | +1.574 | n +0.703 | n +0.583 | n +0.490 |
| Hedonic Quality | n +0.537 | n +0.359 | n +0.393 | n +0.146 |
| Overall UEQ-S Value | +1.056 | n +0.531 | n +0.488 | n +0.318 |

n = neutral
Implementation Phase: VisAWI-S Results
The VisAWI-S overall values for the tutorials in the Implementation phase ranged from 5.78 to 6.22 (see Table 7). The VisAWI-S mean values were above 5.7, indicating that the aesthetics were pleasing overall, with the Identifying Resources tutorial having the highest visual aesthetics value.
Table 7. Implementation phase—VisAWI-S overall mean values.

| Tutorial | VisAWI-S Overall Mean Value |
|---|---|
| Identifying Resources | 6.22 |
| Searching for Resources | 5.78 |
| Evaluating Resources | 5.84 |
| Citing Resources | 5.90 |
Discussion
Overall, the UEQ-S and VisAWI-S results show that the UX of all four information literacy tutorials improved over the two rounds of the User Testing phase. The second round versions of all tutorials received positive overall UEQ-S mean values, indicating a positive user experience. The Searching for Resources tutorial showed the most substantial improvement between rounds. Creating this tutorial was more challenging than anticipated, due to difficulties embedding specific subject databases and frequent search result fluctuations. I addressed these issues in the second version of this tutorial, which had a higher overall UEQ-S value, demonstrating statistically significant improvements.
The Citing Resources tutorial covered the most challenging content, which may have affected the retention of participants assigned to this tutorial in the User Testing phase. Schrepp et al. (2014) noted that the majority of their UEQ evaluation samples ranged from eleven to twenty participants, and that samples with fewer than ten participants provide limited information. Both the Identifying Resources and Citing Resources tutorials had fewer than ten participants and did not show statistically significant UEQ-S improvements in the User Testing phase.
When tested with a larger participant pool in the Implementation phase, three of the four tutorials had neutral overall UEQ-S mean values, indicating neutral user experiences. The pragmatic and hedonic qualities for these three tutorials were also in the neutral range. In contrast, the Identifying Resources tutorial had a positive value for pragmatic quality and a positive overall UEQ-S mean value, suggesting a positive user experience.
The item mean values for inventive were consistently below zero, with the value for the Citing Resources tutorial being the only negative mean value in this study (-1.3). This may be due to the challenge of being inventive with information literacy content for a university student population, who may have higher UX expectations because of their exposure to numerous digital applications and video games. Implementation phase participants may have been less invested in completing the user survey and questionnaire, in addition to the information literacy tutorial questions, than the compensated volunteers in the User Testing phase. In contrast, the item mean values for leading edge were positive for all Implementation tutorial versions, indicating that participants in this phase regarded the tutorials as progressive for an academic library audience.
In both phases, the tutorials had high overall VisAWI-S mean values, reflecting that participants’ visual aesthetics experiences were consistently positive. This suggests that the visual aesthetics were a sufficiently pleasing aspect of the tutorials. The VisAWI manual recommends a minimum of twenty participants, though for developmental user testing, “the sample size may be below twenty” (Thielsch & Moshagen, 2015, p. 15).
The UEQ-S and VisAWI-S were both easy-to-use tools for measuring the UX of the tutorials. The UEQ-S is a valuable instrument for comparing users’ experiences of different tutorial versions. The VisAWI-S questionnaire generated positive results for all tutorial versions in both phases, indicating that participants perceived the visual aesthetics as successful. It effectively complemented the UEQ-S and substituted for the perceived attractiveness criteria from the UEQ by evaluating the visual usability of the tutorials. Some user survey responses in the User Testing phase helped identify specific usability issues and contributed to tutorial improvements. Together, the three questionnaires provided complementary UX feedback at various levels of specificity.
LibWizard is a valuable and user-friendly tool for creating online tutorials. However, a design limitation of the program is that questions can only be placed on the left side of the display. As English readers read left to right, it would be more natural for the content to be on the left, with the corresponding questions positioned on the right side.
Limitations
In the User Testing phase, I incorporated both user feedback and gamification elements into the second versions of the tutorials. Since I did not test these two variables separately, it is not possible to determine whether one or both factors were responsible for improving the UX. However, using both user feedback and gamification elements successfully improved the UX in this study.
Both rounds in the User Testing phase involved the same participants completing new versions of the same tutorials, so having completed a previous version for comparison and familiarity with the content may have influenced their responses in the second round.
The majority of participants in the Implementation phase were health studies students completing the tutorials for course credit, which may have influenced their motivation to complete the tutorials and participate in the research study.
The Implementation phase recruited fewer participants than expected. Recruitment was affected by unexpected delays and challenges caused by the COVID-19 pandemic, including the overwhelming volume of pandemic-related emails sent to both students and faculty. The number of recruitment and promotional emails for this study was limited, which may have contributed to a lower number of participants.
Conclusion
I used the UEQ-S and VisAWI-S to measure the UX of four online information literacy tutorials developed using LibWizard. I compared non-gamified tutorials with revised versions that incorporated user feedback and gamification elements. The second versions provided a more positive experience for users. With a larger participant pool, overall UEQ-S mean values indicated neutral user experiences for three tutorials, while one showed a positive user experience. Given the academic content of these tutorials and the extensive digital technology experiences of current university students, a neutral UX is acceptable. The visual aesthetics of the tutorials were consistently high, as measured with the VisAWI-S.
This study illustrates the value of both the UEQ-S and the VisAWI-S in developing information literacy tutorials that promote positive UX. I collected both qualitative and quantitative data to gain deeper insights into the user’s experience and identify areas for improvement. User feedback during the design process provided insight into specific modifications to improve the UX. The UEQ-S values also helped determine tutorial weaknesses in the first round of the User Testing phase, such as poor participant ratings for clear and interesting. I recommend librarians conduct some form of user testing when developing online library instructional materials, as even feedback from a small number of participants can benefit the development of online resources.
This study contributes research on the use of the UEQ and VisAWI to measure the UX of online library tools and information literacy instruction. Further studies are needed to explore the application of the UEQ and VisAWI to measure user experience in academic library contexts.
References
Abbas, A., Hirschfeld, G., & Thielsch, M. T. (2022). An Arabic version of the Visual Aesthetics of Websites Inventory (AR-VisAWI): Translation and psychometric properties. International Journal of Human–Computer Interaction, 39(14), 2785–2795. https://doi.org/10.1080/10447318.2022.2085409
Aufderhaar, K., Schrepp, M., & Thomaschewski, J. (2019). Do women and men perceive user experience differently? International Journal of Interactive Multimedia and Artificial Intelligence, 5(6), 63–67. https://doi.org/10.9781/ijimai.2019.03.005
Blummer, B. A., & Kritskaya, O. (2009). Best practices for creating an online tutorial: A literature review. Journal of Web Librarianship, 3(3), 199–216. https://doi.org/10.1080/19322900903050799
Bury, S., & Oud, J. (2005). Usability testing of an online information literacy tutorial. Reference Services Review, 33(1), 54–65. https://doi.org/10.1108/00907320510581388
Cano, A. J. (2017). Using LibWizard to create active virtual learning. Brick & Click 2017, Maryville, Missouri, 9–14. https://digitalcommons.xula.edu/cgi/viewcontent.cgi?article=1003&context=fac_pub#page=17
Chung, T. K., & Sahari, N. (2015). Utilitarian or experiential? An analysis of usability questionnaires. International Journal of Computer Theory and Engineering, 7(2), 167–171. https://doi.org/10.7763/IJCTE.2015.V7.950
Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: Defining “gamification.” In A. Lugmayr, H. Franssila, C. Safran, & I. Hammouda (Eds.), Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments (pp. 9–15). https://doi.org/10.1145/2181037.2181040
Díaz-Oreiro, I., López, G., Quesada, L., & Guerrero, L. A. (2019). Standardized questionnaires for user experience evaluation: A systematic literature review. Multidisciplinary Digital Publishing Institute Proceedings, 31(1), 1–12. https://doi.org/10.3390/proceedings2019031014
Fernández-Ramos, A. (2019). Online information literacy instruction in Mexican university libraries: The librarians’ point of view. The Journal of Academic Librarianship, 45(3), 242–251. https://doi.org/10.1016/j.acalib.2019.03.008
Franklin, K. Y., Faulkner, K., Ford-Baxter, T., & Fu, S. (2021). Redesigning an online information literacy tutorial for first-year undergraduate instruction. The Journal of Academic Librarianship, 47(1), 102277. https://doi.org/10.1016/j.acalib.2020.102277
Goodsett, M. (2020). Best practices for teaching and assessing critical thinking in information literacy online learning objects. The Journal of Academic Librarianship, 46(5), 1–5. https://doi.org/10.1016/j.acalib.2020.102163
Hinderks, A., Schrepp, M., & Thomaschewski, J. (2018). A benchmark for the short version of the User Experience Questionnaire. In M. J. Escalona, F. D. Mayo, T. Majchrzak, & V. Monfort (Eds.), Proceedings of the 14th International Conference on Web Information Systems and Technologies (pp. 373–377). https://doi.org/10.5220/0007188303730377
Hinderks, A., Schrepp, M., Mayo, F. J. D., Escalona, M. J., & Thomaschewski, J. (2019). Developing a UX KPI based on the User Experience Questionnaire. Computer Standards & Interfaces, 65, 38–44. https://doi.org/10.1016/j.csi.2019.01.007
International Organization for Standardization. (2018). Ergonomics of human-system interaction—Part 11: Usability: Definitions and concepts (ISO Standard No. 9241-11:2018). https://www.iso.org/obp/ui/#iso:std:iso:9241:-11:ed-2:v1:en
Kuhar, M., & Merčun, T. (2022). Exploring user experience in digital libraries through questionnaire and eye-tracking data. Library & Information Science Research, 44(3), 1–11. https://doi.org/10.1016/j.lisr.2022.101175
Laugwitz, B., Held, T., & Schrepp, M. (2008). Construction and evaluation of a user experience questionnaire. In A. Holzinger (Ed.), HCI and usability for education and work (pp. 63–76). https://doi.org/10.1007/978-3-540-89350-9_6
Lavie, T., & Tractinsky, N. (2004). Assessing dimensions of perceived visual aesthetics of web sites. International Journal of Human-Computer Studies, 60(3), 269–298.
Lund, A. M. (2001). Measuring usability with the USE questionnaire. Usability Interface, 8(2), 3–6.
Moshagen, M., & Thielsch, M. T. (2010). Facets of visual aesthetics. International Journal of Human-Computer Studies, 68(10), 689–709. https://doi.org/10.1016/j.ijhcs.2010.05.006
Moshagen, M., & Thielsch, M. T. (2012). A short version of the Visual Aesthetics of Websites Inventory. Behaviour & Information Technology, 32(12), 1305–1311. https://doi.org/10.1080/0144929X.2012.694910
O’Brien, H. (2016). Translating theory into methodological practice. In H. O’Brien, & P. Cairns (Eds.), Why engagement matters: Cross-disciplinary perspectives of user engagement in digital media (pp. 27–52). Springer. https://doi.org/10.1007/978-3-319-27446-1_2
Olsen, R., & Harlow, S. (2022). Creating library tutorials to provide flexibility and customized learning in asynchronous settings. Public Services Quarterly, 18(1), 19–33. https://doi.org/10.1080/15228959.2021.1896413
Sadita, L., Santoso, H. B., Windrawan, L. I., & Khotimah, P. H. (2022). An Indonesian adaptation of Visual Aesthetics of Website Inventory (VisAWI) Questionnaire for evaluating video game user interface. In Proceedings of the 2022 International Conference on Computer, Control, Informatics and Its Applications (pp. 382–386). https://doi.org/10.1145/3575882.3575956
Saleh, A. M., Abuaddous, H. Y., Alansari, I. S., & Enaizan, O. (2022). The evaluation of user experience on learning management systems using UEQ. International Journal of Emerging Technologies in Learning, 17(7), 145–162. https://doi.org/10.3991/ijet.v17i07.29525
Saremi, M., Sadeghi, V., Khodakarim, S., & Maleki-Ghahfarokhi, A. (2023). Farsi version of Visual Aesthetics of Website Inventory (FV-VisAWI): Translation and psychometric evaluation. International Journal of Human-Computer Interaction, 39(4), 834–841. https://doi.org/10.1080/10447318.2022.2049138
Schankin, A., Budde, M., Riedel, T., & Beigl, M. (2022). Psychometric properties of the User Experience Questionnaire (UEQ). In S. Barbosa, C. Lampe, C. Appert, D. A. Shamma, S. Drucker, J. Williamson, & K. Yatani (Eds.), Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1–11). https://doi.org/10.1145/3491102.3502098
Schrepp, M., Hinderks, A., & Thomaschewski, J. (2014). Applying the User Experience Questionnaire (UEQ) in different evaluation scenarios. In A. Marcus (Ed.), Design, User Experience, and Usability: Theories, Methods, and Tools for Designing the User Experience (pp. 383–392). Springer. https://doi.org/10.1007/978-3-319-07668-3_37
Schrepp, M., Hinderks, A., & Thomaschewski, J. (2017). Design and evaluation of a short version of the User Experience Questionnaire (UEQ-S). International Journal of Interactive Multimedia and Artificial Intelligence, 4(6), 103–108. https://doi.org/10.9781/ijimai.2017.09.001
Schrepp, M., Sandkühler, H., & Thomaschewski, J. (2021). How to create short forms of UEQ+ based questionnaires? Mensch und Computer 2021—Workshopband. https://doi.org/10.18420/MUC2021-MCI-WS01-230
Schrepp, M. (2023). User Experience Questionnaire handbook. https://www.ueq-online.org/Material/Handbook.pdf
SCONUL Working Group on Information Literacy. (2011, April). The SCONUL seven pillars of information literacy: Core model for higher education. SCONUL.
Su, S.-F., & Kuo, J. (2010). Design and development of web-based information literacy tutorials. The Journal of Academic Librarianship, 36(4), 320–328. https://doi.org/10.1016/j.acalib.2010.05.006
Thielsch, M. T., & Moshagen, M. (2015). VisAWI manual (Visual Aesthetics of Websites Inventory) and the short form VisAWI-S (Short Visual Aesthetics of Websites Inventory). https://www.thielsch.org/download/VisAWI/VisAWI_Manual_EN.pdf
Walters, K., Bolich, C., Duffy, D., Quinn, C., Walsh, K., & Connolly, S. (2014). Developing online tutorials to improve information literacy skills for second-year nursing students of University College Dublin. New Review of Academic Librarianship, 21(1), 7–29. https://doi.org/10.1080/13614533.2014.891241
Yi, J. H., & Kim, H. S. (2021). User experience research, experience design, and evaluation methods for museum mixed reality experience. Journal on Computing and Cultural Heritage, 14(4), 1–28. https://doi.org/10.1145/3462645
Zhang, L. (2006). Effectively incorporating instructional media into web-based information literacy. The Electronic Library, 24(3), 294–306. https://doi.org/10.1108/02640470610671169
Appendix: Questionnaires
Post-Tutorial Questionnaire Introduction
Congratulations! You have completed the tutorial! The following is a questionnaire with questions about your tutorial engagement, the tutorial content, the design of the tutorial, and the gamified aspects of the tutorial.
We really appreciate you answering these questions, as they will help us understand how to improve these tutorials for other students!
User Experience Questionnaire Short Form (Reference: Schrepp, M., Hinderks, A., & Thomaschewski, J. (2017). Design and evaluation of a short version of the user experience questionnaire (UEQ-S). International Journal of Interactive Multimedia and Artificial Intelligence, 4(6), 103–108.)
The following are pairs of contrasting attributes that may apply to the tutorial. You can express your agreement with the attributes by moving the slider according to your impressions with the tutorial.
Please decide spontaneously. Don’t think too long about your decision to make sure that you convey your original impression. It is your personal opinion that counts. Please remember: there is no wrong or right answer!
(7-point Likert scale using a slider between the two word choices)
obstructive/supportive
complicated/easy
inefficient/efficient
confusing/clear
boring/exciting
not interesting/interesting
conventional/inventive
usual/leading edge
VisAWI-S Visual Aesthetics of Websites Inventory: Short Version (Reference: Moshagen, M., & Thielsch, M. T. (2012). A short version of the Visual Aesthetics of Websites Inventory. Behaviour & Information Technology, 32(12), 1305–1311.)
Please rate the tutorial according to the following statements on the scale ranging from 1 (strongly disagree) to 7 (strongly agree).
(7-point Likert scale using stars)
Everything goes together in this tutorial.
The layout is pleasantly varied.
The colour composition is attractive.
The layout appears professionally designed.