Support data for “Effect of height perception on state self-esteem and cognitive performance in virtual reality”
This upload contains the research data supporting “Effect of height perception on state self-esteem and cognitive performance in virtual reality”.
We used a letter recall task, based on Goldin-Meadow et al.’s cognitive test, to measure participants’ working memory capacity. In each trial, we presented four distinct letter pairs to facilitate memorization and recall. In line with prior work, sequences formed solely of consonants yield a more appropriate level of difficulty for testing than sequences containing both vowels and consonants. Hence, each trial of the letter recall task in this study was built from eighty randomized consonant letters. In the initial phase of each trial, participants memorized the visually presented letter sequence during a 15 s exposure period. Stimulus exposure was followed by a 25 s retention interval, during which the mental rotation task was conducted without any visual prompt relating to the letter pairs. This was immediately followed by a 10 s recollection period for serial recall and verbal report of the memorized letter pairs. Participants’ responses were recorded and scored by the experimenter and reported as percentage correct in later analyses. Each correct letter pair earned one mark, so a maximum of forty marks could be attained in each block. No marks were given for letter pairs with transposition errors or incorrect letter combinations.
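The pair-level scoring rule described above can be sketched as follows. This is an illustrative assumption about the data layout (ordered lists of two-letter strings), not the authors’ actual scoring procedure, which was done manually by the experimenter.

```python
def score_letter_recall(presented, recalled):
    """Score serial recall of letter pairs.

    presented, recalled: lists of two-letter strings in serial order.
    A pair earns one mark only if it matches exactly at its serial
    position; transposed letters (e.g. "RT" for "TR") or incorrect
    combinations earn nothing, matching the rule described above.
    Returns (marks, percentage correct).
    """
    marks = sum(1 for p, r in zip(presented, recalled) if p == r)
    return marks, 100.0 * marks / len(presented)
```

For example, if “TR” is recalled as “RT”, that pair scores zero even though both letters are present, because transpositions are not credited.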
We used a mental rotation task to examine participants’ spatial ability. Spatial tasks based on the Vandenberg and Kuse Mental Rotation Test were constructed using 3D figures from the Library of Shepard and Metzler-type Mental Rotation Stimuli. In this task, participants were first shown five figures during a 15 s exposure period. All stimuli were rotated around the horizontal axis and presented in an identical white frame against a white background. A reference stimulus was positioned above four potential matching blocks, which were presented in various orientations and labelled “A”, “B”, “C” and “D” respectively. Participants were asked to mentally rotate representations of the stimuli for dynamic comparison, then verbally report the two figures that shared the same configuration as the reference stimulus in the subsequent 10 s response period. Participants’ responses, given as two different letters, were recorded and scored by the experimenter and reported as percentage correct in later analyses. One mark was awarded for each correct response, accumulating to a maximum of twenty marks per condition. In addition to being a stand-alone assessment of visuospatial ability, the mental rotation task also served to induce cognitive load during the letter recall retention interval.
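A sketch of the per-option scoring implied above, assuming each trial has exactly two matching options and each correctly named option earns one mark. The trial structure and function name are assumptions for illustration only; scoring in the study was done by the experimenter.

```python
def score_mental_rotation(correct_options, responses):
    """Score mental rotation trials.

    correct_options, responses: one set of option letters per trial,
    e.g. {"A", "C"}. Each correctly identified option earns one mark
    (at most two per trial, since two of the four options match the
    reference stimulus). Returns (marks, percentage correct).
    """
    marks = sum(len(set(c) & set(r))
                for c, r in zip(correct_options, responses))
    total = 2 * len(correct_options)
    return marks, 100.0 * marks / total
```

Under this reading, twenty marks per condition would correspond to ten trials of two matching options each.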
We used the State Self-Esteem Scale (SSES), a well-validated and psychometrically sound measure, to capture momentary fluctuations in self-esteem following the height manipulation. It comprises twenty self-report items, each rated on a 5-point Likert scale (1 = Not at all, 2 = A little bit, 3 = Somewhat, 4 = Very much, 5 = Extremely). Three subscales provide a multidimensional assessment of the specific facets of appearance, performance, and social self-esteem. In the current study, participants rated the SSES items according to how they perceived themselves in relation to their experience in the virtual environment.
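Subscale scoring of the SSES can be sketched generically as below. The item-to-subscale assignments and reverse-scored items follow the scale’s published scoring key, which is not reproduced here, so they are passed in as parameters; the function name and data layout are assumptions for illustration.

```python
def score_sses_subscale(ratings, subscale_items, reverse_items=()):
    """Compute one SSES subscale score from twenty 1-5 ratings.

    ratings: dict mapping item number (1-20) to its rating (1-5).
    subscale_items: item numbers belonging to this subscale, per the
    published scoring key (not reproduced here).
    reverse_items: items to reverse-score as (6 - rating).
    Returns the subscale sum.
    """
    total = 0
    for item in subscale_items:
        rating = ratings[item]
        total += (6 - rating) if item in reverse_items else rating
    return total
```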
Following a within-group experimental design, paired-sample t-test analyses were carried out using jamovi (version 1.2.27). Outcome variables were compared between the normal-height and increased-height conditions. The order of conditions was counterbalanced to minimize potential confounding effects of sequence, such as cognitive fatigue and learning effects. Normality was checked using the Shapiro-Wilk test.
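The same analysis pipeline can be reproduced outside jamovi. A minimal sketch in Python with SciPy, using made-up illustrative scores rather than the study’s data: a Shapiro-Wilk check on the paired differences, followed by the paired-sample t-test.

```python
import numpy as np
from scipy import stats

# Illustrative scores only (not the study's data): one value per
# participant in each height condition, paired by row.
normal_height = np.array([72.5, 65.0, 80.0, 70.0, 68.5, 75.0, 62.5, 77.5])
increased_height = np.array([75.0, 70.0, 82.5, 72.5, 67.5, 80.0, 65.0, 77.5])

# Shapiro-Wilk test on the paired differences, as the normality
# precondition for the paired-sample t-test.
diff = increased_height - normal_height
w, p_norm = stats.shapiro(diff)

# Paired-sample t-test between the two height conditions.
t, p = stats.ttest_rel(increased_height, normal_height)
print(f"Shapiro-Wilk W = {w:.3f}, p = {p_norm:.3f}")
print(f"paired t = {t:.3f}, p = {p:.3f}")
```

jamovi’s paired-samples t-test produces the same t and p values as `scipy.stats.ttest_rel`, since both implement the standard two-sided paired test.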
Funding: University Grants Committee.