Comparison of Dexterous Task Performance in Virtual Reality and Real-World Environments

The following is a featured report authored by Janell S. Joyner, Monifa Vaughn-Cooke, and Heather L. Benz.

  • 1. Department of Mechanical Engineering, University of Maryland College Park, College Park, MD, United States
  • 2. Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, United States
  • 3. ORISE Research Fellow, Oak Ridge Institute for Science and Education, Oak Ridge, TN, United States

Virtual reality is being used to aid in prototyping of advanced limb prostheses with anthropomorphic behavior and in user training. A virtual version of a prosthesis and its testing environment can be programmed to mimic the appearance and interactions of its real-world counterpart, but little is understood about how task selection and object design impact user performance in virtual reality and how that performance translates to the real world. To bridge this knowledge gap, we performed a study in which able-bodied individuals manipulated a virtual prosthesis, and later a real-world version, to complete eight activities of daily living. We examined subjects’ ability to complete the activities, the time taken to complete them, and the number of attempts made for each task in the two environments. A notable result is that subjects were unable to complete tasks in virtual reality that involved manipulating small objects and objects flush with the table, but were able to complete those tasks in the real world. The results of this study suggest that standardization of virtual task environment design may lead to more accurate simulation of real-world performance.

Introduction

It was estimated in 2005 that there were two million amputees in the United States, and this number was expected to double by 2050 (Ziegler-Graham et al., 2008; McGimpsey and Bradford, 2017). The prosthesis rejection rate for upper limb (UL) amputees has been reported to be as high as 40% (Biddiss E. A. and Chau T. T., 2007). Among the reasons for prosthesis rejection is difficulty when attempting to use the prosthesis to complete activities of daily living (ADLs), such as grooming and dressing (Biddiss E. and Chau T., 2007). The prosthesis control scheme plays an important role in object manipulation, preventing objects from slipping out of or being crushed in a prosthetic hand. Improving the response time of the device, the control scheme (i.e., body-powered vs. myoelectric control), and how the device signal is recorded (external vs. implanted electrodes) will help ensure that amputees can complete ADLs with less difficulty (Harada et al., 2010; Belter et al., 2013). Programs such as the Defense Advanced Research Projects Agency (DARPA) Hand Proprioceptive and Touch Interfaces (HAPTIX) program have been investigating how to improve UL prosthesis designs (Miranda et al., 2015).

Building advanced prostheses is expensive and time consuming (Hoshigawa et al., 2015; Zuniga et al., 2015), requiring customization for each individual and integration of advanced sensors and robotics (Biddiss et al., 2007; van der Riet et al., 2013; Hofmann et al., 2016). To efficiently study advanced UL prostheses in a well-controlled environment prior to physical prototyping, a virtual version can be used (Armiger et al., 2011). The virtual version can be programmed and calibrated in a manner similar to a physical prosthesis and can be used to allow amputees to practice device control schemes with simulated objects (Pons et al., 2005; Lambrecht et al., 2011; Resnik et al., 2011; Kluger et al., 2019).

Virtual reality (VR) has also been used to aid in clinical prosthesis training and rehabilitation. A prosthetist can load a virtual version of an amputee’s prosthesis to allow him/her to practice using the control scheme of the prosthesis (e.g., muscle contractions for a myoelectric device or foot movements for inertial measurement units) (Lambrecht et al., 2011; Resnik et al., 2012; Blana et al., 2016). A variety of VR platforms exist for this purpose, but there is a gap in the literature about which tasks and object characteristics need to be replicated in VR to predict real-world (RW) performance. A better understanding of how to design VR tasks and translate results from VR to RW is needed to inform clinical practice. This paper presents a study comparing performance of virtual ADLs completed with a virtual prosthesis against RW ADLs completed with a physical prosthesis. We examined which factors affect performance in VR to determine whether these factors translate to RW performance. This work will inform the design of VR ADLs for training and transfer to RW performance.

Background

Clinical Outcome Assessments

Clinical outcome assessments (COAs) are used to evaluate an individual’s progress through training or rehabilitation with their prosthetic device. Research has shown that motor control learning is highly activity specific (Latash, 1996; Giboin et al., 2015; van Dijk et al., 2016); therefore, selecting training activities is important to help a new prosthesis user return to a normal routine. However, few COAs have been developed to assess upper limb prosthesis rehabilitation progress; therefore, activities for assessing function with other medical conditions, such as stroke or traumatic brain injury (TBI), are used (Wang et al., 2018). One such test is the Box and Blocks Test (BBT) (Mathiowetz et al., 1985; Lin et al., 2010), in which subjects complete a simple activity that is not truly reflective of an activity that a prosthesis user would perform in daily life. The goal of the BBT is to move as many blocks as possible from one side of a box over a partition to the other side in 60 s. Researchers have made modifications to the BBT to assess an individual’s ability to perform basic movements with their prosthesis (Hebert and Lewicke, 2012; Hebert et al., 2014; Kontson et al., 2017).

Another clinical outcome assessment that has been used to assess UL prosthetic devices is the Jebsen–Taylor Hand Function Test (JTHFT). The JTHFT is a series of standardized activities designed to assess an individual’s ability to complete ADLs following a stroke, TBI, or hand surgery (Sears and Chung, 2010). The seven activities in the JTHFT are simulated feeding, simulated page turning, stacking checkers, writing, picking up large objects, picking up large heavy objects, and picking up small objects. Individuals are timed as they complete each activity, and their results are compared with normative data (Sears and Chung, 2010). Studies have been performed with the UL amputee population to validate the use of the JTHFT as a tool to assess prosthetic device performance (Wang et al., 2018). This assessment’s use of simulated ADLs makes it a better candidate than the BBT for assessing how a person would use a prosthesis in daily life.

Research has also been performed to develop COAs specifically to assess upper limb prosthesis rehabilitation progress. The Activities Measure for Upper Limb Amputees (AM-ULA) (Resnik et al., 2013) and Capacity Assessment of Prosthetic Performance for the Upper Limb (CAPPFUL) (Kearns et al., 2018) were designed to test an amputee’s ability to complete ADLs with their device. These two COAs consist of 18 and 11 ADLs, respectively, and assess a person’s ability to complete the activity, time to completion, and movement quality.

While these activities can be completed with a physical prosthetic device, training in a virtual environment has been shown to be an effective way to train amputees to use their device (Phelan et al., 2015; Nakamura et al., 2017; Perry et al., 2018; Nissler et al., 2019). Training in a virtual environment can be a cost-effective way for clinics to perform rehabilitation (Phelan et al., 2015; Nakamura et al., 2017) and can help prosthesis users learn how to manipulate their device using its particular control scheme (Blana et al., 2016; Woodward and Hargrove, 2018), and gamifying rehabilitation has been shown to increase a prosthesis user’s desire to complete the program (Prahm et al., 2017, 2018).

Virtual Reality Prosthesis Testing and Training Environments

Several VR testbeds have been created or adapted to evaluate different aspects of prosthesis development. The Musculoskeletal Modeling Software (MSMS) was originally developed to aid with musculoskeletal modeling (Davoodi et al., 2004), but was later adapted for training, development, and modeling of neural prosthesis control (Davoodi and Loeb, 2011). The Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE) was developed to support the study of human assistive robotics and prosthesis operations (Katyal et al., 2013). Users that interact with the HARMONIE system control their device through surface electromyography (sEMG), neural interfaces (EEG), or other control signals (Katyal et al., 2013, 2014; McMullen et al., 2014; Ivorra et al., 2018). Another tool, Multi-Joint dynamics with Contact (MuJoCo), is a physics engine that was originally designed to facilitate research and development in robotics, biomechanics, graphics, and animation (Todorov et al., 2012). MuJoCo HAPTIX was created to model contacts and provide sensory feedback to the user through the VR environment (Kumar and Todorov, 2015). Studies are being performed to improve the contact forces applied to objects in MuJoCo HAPTIX (Kim and Park, 2016; Lim et al., 2019; Odette and Fu, 2019). These testbeds aid in training and studying of prosthesis control in VR, but little is known about how VR object characteristics impact performance.

User Performance Assessment

Simulations should require visual and cognitive resources similar to those needed to complete the activity in the real world (Stone, 2001; Gamberini, 2004; Stickel et al., 2010). While previous studies evaluated VR testbeds or activities implemented in them (Carruthers, 2008; Cornwell et al., 2012; Blana et al., 2016), none have identified the characteristics of the tasks that make an activity easy or difficult to complete in VR. Subjects in these studies did not complete ADLs from COAs that have been validated with a UL population, which could limit the ability to replicate and retest these tasks for RW study.

Study Objectives

The purpose of this study is to provide preliminary validation for a VR system to test advanced prostheses through comparison with similar RW activity outcomes. In addition, this study aims to gain a better understanding of how activity design affects an individual’s ability to complete virtual activities with a virtual prosthetic hand. The activities used in this study are derived from existing, validated UL prosthesis outcome measures that are used to evaluate prosthesis control. Motion capture hardware and software were used to collect normative data from able-bodied individuals to determine how activity selection and virtual design affect the completion rate, completion time, and number of attempts to complete each activity. Validated outcome measures were replicated in VR, and the VR results were then compared with RW task performance to assess how VR performance translates to RW performance.

Methods

Task Development

MuJoCo HAPTIX (Roboti, Seattle, Washington) is a VR simulator that has been adapted to the needs of the DARPA HAPTIX program by adding an interactive graphical user interface (GUI) and integrating real-time motion capture to control a virtual hand’s placement in space (Kumar and Todorov, 2015) (Figure 1). MuJoCo is open source and can be used to test other limb models as well. Four tasks were designed in the MuJoCo HAPTIX environment to study movement quality: (1) hand pose matching, (2) stimulation identification, (3) use of proprioceptive and sensory feedback to identify characteristics of an object, and (4) object manipulation. This research focuses on the MuJoCo object manipulation task, which is based on existing COAs, the JTHFT and the AM-ULA.

Figure 1. The virtual environment, Multi-Joint dynamics with Contact (MuJoCo) Hand Proprioceptive and Touch Interfaces (HAPTIX).
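For orientation, the sketch below shows how a MuJoCo tabletop scene of the kind used in these tasks can be defined in MJCF and stepped forward in time. It uses the modern open-source mujoco Python bindings rather than the HAPTIX build's GUI and socket interface, and the scene, body names, and dimensions are illustrative assumptions, not the study's actual task files.

```python
# Minimal sketch: define and simulate a simple tabletop scene with the
# open-source mujoco Python bindings. All names and dimensions here are
# illustrative; the HAPTIX study drove its scenes through a GUI and a
# socket API instead.
import mujoco

TABLETOP_SCENE = """
<mujoco model="tabletop">
  <option timestep="0.002"/>
  <worldbody>
    <geom name="table" type="box" size="0.4 0.6 0.02" pos="0 0 0.72"/>
    <body name="cylinder" pos="0 0 0.81">
      <freejoint/>
      <geom name="can" type="cylinder" size="0.033 0.06" mass="0.4"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(TABLETOP_SCENE)
data = mujoco.MjData(model)

# Step the physics for one simulated second (500 steps at 2 ms each);
# the free-floating cylinder falls ~1 cm and settles on the table.
for _ in range(500):
    mujoco.mj_step(model, data)

print("cylinder height after 1 s:", data.body("cylinder").xpos[2])
```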

Task Selection and Analysis

Eight ADLs from the AM-ULA (Resnik et al., 2013) and JTHFT (Sears and Chung, 2010) were completed in VR and in RW (Figure 2 and Table 1). The tasks selected for replication from the JTHFT and AM-ULA were chosen for their capacity to assess both prosthesis dexterity and representative ADLs such as food preparation and common object interaction. The moving cylinders (Move Cyl.) task is representative of activities that require subjects to move a relatively large object. The place sphere in cup (Sphere Cup), lock/key (Lock Key), and stack checkers (Checkers) tasks are representative of activities that require precise manual manipulation to move a small object. The spoon transfer (Spoon Tran.) and writing tasks required rotation and precise targeting. Research has shown that tasks requiring small objects to be manipulated demand more dexterous movement, while tasks in which large objects are manipulated require more power and less dexterity (Park and Cheong, 2010; Zheng et al., 2011).

Figure 2. The tasks that subjects completed. In order: (A) Task 1: move cans to targets, (B) Task 2: put ball in pitcher, (C) Task 3: pour ball in bowl, (D) Task 4: transfer ball with spoon, (E) Task 5: insert key and turn, (F) Task 6: turn knob, (G) Task 7: stack squares, and (H) Task 8: simulated writing.

Table 1. Description of the tasks and task name abbreviations.

A hierarchical task analysis (HTA) was performed on each of the ADLs to understand what steps or subtasks need to be completed in order to accomplish the ADL’s high-level goals. An HTA is a process used by human factors engineers to decompose a task into the subtasks necessary for completion, which can help to identify use difficulty or use failure for product users (Patrick et al., 2000; Salvendy, 2012; Hignett et al., 2019). The HTA used for this research focused on the observable physical actions that a person must complete. To ensure that the number of steps presented in the HTA provided sufficient depth for understanding necessary components of the tasks, the instructions for the AM-ULA and the JTHFT were referenced to inform the ADL subtask decomposition.

The descriptions of the subtasks utilized seven action verbs: reach, grasp, pick up, place, release, move, and rotate (Table 1). These action verbs were picked due to their use in describing the steps to complete tasks in the AM-ULA (Resnik et al., 2013). Reach consists of moving the hand toward an object by extension of the elbow and protraction of the shoulder. Grasp involves flexion of the fingers of the hand around an object. Pick up includes flexion of the shoulder and potentially the elbow to lift the object from the table. Move consists of medial or lateral rotation of the arm to align the primary object toward a secondary object or shifting the hand away from one object and aligning it with another. Place involves extension of the elbow to lower the object onto its target. Release involves extension of the fingers to let go of the object. Rotate consists of pronation or supination of the arm to rotate an object.
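To make this decomposition concrete, the sketch below encodes one task's HTA as an ordered sequence over the seven action verbs; the particular sequence shown for the moving cylinders task is an illustrative assumption, not the study's published HTA.

```python
# Sketch: a hierarchical task analysis (HTA) represented as an ordered
# subtask sequence built from the paper's seven action verbs. The
# decomposition shown is illustrative, not the study's exact HTA.
from enum import Enum

class Action(Enum):
    REACH = "reach"
    GRASP = "grasp"
    PICK_UP = "pick up"
    PLACE = "place"
    RELEASE = "release"
    MOVE = "move"
    ROTATE = "rotate"

# Plausible decomposition of the "move cylinders" (Move Cyl.) task.
MOVE_CYL_HTA = [
    Action.REACH,    # extend the elbow/protract the shoulder toward the can
    Action.GRASP,    # flex the fingers around the can
    Action.PICK_UP,  # flex the shoulder/elbow to lift it from the table
    Action.MOVE,     # align the can with its target
    Action.PLACE,    # extend the elbow to lower it onto the target
    Action.RELEASE,  # extend the fingers to let go
]
```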

Subjects

Able-bodied individuals were recruited for this study due to limited availability of upper limb amputees. Prior studies have used able-bodied individuals, with the use of a bypass or simulator prosthesis, to assess the ability to complete COAs and ADLs with different prosthesis control schemes (Haverkate et al., 2016; Bloomer et al., 2018). These studies showed that the use of able-bodied subjects allows the experimenter to control for levels of experience with a prosthetic device and that performance between the able-bodied group and amputee group is comparable.

Twenty-two individuals (10 females, average age of all subjects 35 ± 17 years) completed the VR experiments, and 22 individuals (eight females, average age of all subjects 38 ± 16 years) completed the RW experiments. The VR experiment was completed first, followed by the RW experiment to provide a comparative evaluation of virtual task performance and its utility for this application. Only two subjects overlapped between the two groups due to the amount of time between completing the VR experiment and being given access to the physical prosthesis. Because participants learned techniques for completing tasks that could generalize across RW/VR environments, and we intended to measure naïve performance, our study design did not include completion of the tasks in both environments. All subjects were right-handed. No subjects reported upper limb disabilities. Subject participation was approved by the FDA IRB (RIHSC #14-086R).

Materials

Virtual Reality Equipment

The VR software used was MuJoCo HAPTIX v1.4 (Roboti, Seattle, Washington), with MATLAB (Mathworks, Natick, MA) to control task presentation. Computer and motion capture (mocap) component specifications can be found on mujoco.org/book/haptix.html. Subjects manipulated the position of the virtual hand with Motive software (OptiTrack, Corvallis, OR), mocap markers, and an OptiTrack V120: Trio camera (OptiTrack, Corvallis, OR) while using a right-handed CyberGlove III (CyberGlove Systems LLC, San Jose, CA) to control the fingers.

Real-World Equipment

The RW experiments were performed with the DEKA LUKE arm (Mobius Bionics, Manchester, NH) attached to a bypass harness. The bypass harness allowed able-bodied subjects to wear the prosthetic device. Inertial measurement units (IMUs), worn on the subject’s feet, controlled the manipulation of the wrist and grasping (Resnik and Borgia, 2014; Resnik et al., 2014a,b, 2018a,b; George et al., 2020). The objects used in the RW experiment were modeled after the ones manipulated in VR.

Experimental Setup and Procedure

Virtual Reality Experiment

Mocap setup was performed before starting each experiment. Reflective markers were placed on the monitor, and subjects were assisted with donning the CyberGlove III and a mocap wrist component (Supplementary Figure 2). Subjects could only use their right hand to manipulate the virtual prosthesis. The height and spacing of the OptiTrack camera were adjusted to ensure that the subject could reach all of the virtual table (Figure 3A). A series of calibration movements was performed to align the subject’s hand movements with the virtual hand on the screen. The movements required the subject to flex and extend his or her wrist and fingers maximally. Once the series of movements was completed, the subject moved his or her hand and observed how the virtual hand responded. If the subject was satisfied with the hand movement, then the experiment could begin.

Figure 3. Virtual reality (VR) and real-world (RW) experiment setups. (A) VR setup: Subjects were seated in front of a computer monitor, and a motion capture camera was placed to their right. The height and placement of the camera was adjusted to allow subjects to interact with the virtual table. (B) RW experimental setup. The subject sat in front of the table with a camera to their left to capture their performance for later review. A template was placed on the table to match where the objects would appear in the virtual environment. A counter-weight system was used to offset the torque placed on the subject’s arm by the DEKA Arm bypass attachment.

The task environment was opened in MuJoCo, and operation scripts were loaded in MATLAB. MuJoCo recorded the subject’s virtual performance for analysis. MATLAB scripts controlled when the tasks started, progressed the experiment through the tasks, and created a log file for analysis. Log files contained the task number and time remaining when the subject completed or moved on to the next task.

Task objects were presented to the subjects one at a time. Instructions were printed in the upper-right corner for 3 s and then replaced with a 60-s countdown timer signifying the start of the task. If the subject completed the task before time ran out, then he or she could click the next button to move on. Each task was completed twice in immediate succession. If the subject was unable to complete the task before time ran out, then the program automatically moved on to the next task. Analysis was performed on task completion, number of attempts to complete the task, and time to complete tasks.
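The presentation logic just described was driven from MATLAB in the study; the sketch below restates the same timing protocol in Python only to make the control flow explicit, with illustrative names throughout.

```python
# Sketch of the task-presentation protocol described above: 3 s of
# on-screen instructions, a 60-s countdown per attempt, two repetitions
# per task, and auto-advance on timeout. Names are illustrative; the
# study implemented this in MATLAB driving MuJoCo.
import time

INSTRUCTION_S = 3.0
TASK_LIMIT_S = 60.0
REPETITIONS = 2

def run_task(task_name, wait_for_completion):
    """wait_for_completion(deadline) blocks until the subject clicks
    'next' or the deadline passes; it returns seconds remaining
    (0.0 on timeout)."""
    log = []
    for rep in range(1, REPETITIONS + 1):
        print(f"{task_name} (rep {rep}): showing instructions")
        time.sleep(INSTRUCTION_S)
        deadline = time.monotonic() + TASK_LIMIT_S
        remaining = wait_for_completion(deadline)
        # The study's log files held the task identifier and the time
        # remaining when the subject finished or moved on.
        log.append({"task": task_name, "rep": rep,
                    "time_remaining_s": remaining})
    return log

# Demo with a stub subject that always runs out the clock.
if __name__ == "__main__":
    print(run_task("Move Cyl.", lambda deadline: 0.0))
```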

Real-World Experiment

This experiment was performed following the VR experiment, in which subjects tended to struggle with various aspects of completing tasks. The VR tasks were replicated in RW based on the virtual models provided, and a physical version of the prosthesis was used for the experiments. This real-world follow-up experiment was performed to better understand which task characteristics need to be improved in the virtual design for a more realistic comparison with its real-world counterparts.

Subjects were given a brief training session on how to manipulate the prosthesis before starting the experiment. Training was done to familiarize subjects with the control scheme of the device and was brief enough that it would not affect task success rates (Bloomer et al., 2018). The training began with device orientation, which included safety warnings, arm componentry, and arm control (Figure 4). The IMUs were then secured to the subject’s shoes, and the prosthetist software for training amputees was displayed to the subjects to allow them to practice the manipulation motions. The left foot controlled the opening and closing of a hand grasp (plantarflexion and dorsiflexion movements, respectively) as well as grasp selection (inversion and eversion movements). The right foot controlled wrist movements: flexion and extension (plantarflexion and dorsiflexion movements, respectively), as well as pronation and supination (inversion and eversion movements, respectively). The speed of the hand and wrist movement was proportional to the steepness of the foot angle; the steeper the angle, the faster the motion. A reference sheet displaying foot controls and the different grasps was placed on the table for subjects to reference throughout training and the experiment.

Figure 4. The DEKA Arm was attached to a bypass to allow able-bodied individuals to wear the prosthesis.
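A rough sketch of the proportional foot-to-command mapping described above follows; the gain, deadband, and sign conventions are assumptions for illustration, not DEKA's published control parameters.

```python
# Sketch of the proportional foot control described above: foot tilt
# angle maps to command velocity, and steeper tilt yields faster motion.
# The gain and deadband values are illustrative assumptions.

DEADBAND_DEG = 5.0  # ignore small, unintentional foot tilts (assumed)
GAIN = 0.04         # normalized command velocity per degree of tilt (assumed)

def foot_angle_to_velocity(angle_deg: float) -> float:
    """Map a signed foot tilt angle (e.g., plantarflexion positive,
    dorsiflexion negative) to a signed command velocity."""
    if abs(angle_deg) < DEADBAND_DEG:
        return 0.0
    sign = 1.0 if angle_deg > 0 else -1.0
    return GAIN * (angle_deg - sign * DEADBAND_DEG)

# Example: 20 degrees of left-foot plantarflexion closes the grasp at a
# normalized velocity of 0.04 * (20 - 5) = 0.6.
print(foot_angle_to_velocity(20.0))
```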

Subjects were given a total of 10 min to practice the device control scheme. The first 5 min was used to practice controlling a virtual version of the device in the prosthetist software, and the next 5 min was used to practice wearing the device and performing RW object manipulation.

Training objects were removed from the table at the end of training, and the task objects were brought out. A camera captured subjects’ task completion attempts for later analysis. For each task, objects were placed on the table in the locations in which they would appear in VR (Figure 3B). Subjects could select the grasp they wanted to use and ask any questions after hearing the explanation of the task. Grasps could be changed during the attempt to complete the task, but the task timer would not be stopped. The experimenter started the camera after confirming with the subject that they were ready to begin. Task completion, attempts, time to complete, and additional observations were recorded by the experimenter as the subject attempted to complete the task (Figure 3).

The primary differences between the VR and RW setups were the control schemes and the training provided. This study focused on examining what characteristics can make a task difficult to complete in VR when subjects can manipulate the virtual device with their own hand, representing a best-case control scheme. In the VR setup, subjects used a CyberGlove to control the virtual prosthesis. This allowed subjects to use their hand in a manner that replicated normal motion to complete object manipulation tasks; therefore, no training was necessary. The RW experiment used a different control scheme because the only marketed configuration of the DEKA limb uses foot control. Since the subjects were able-bodied individuals with no UL impairment, training was provided on device operation.

Virtual Reality and Real World Data Analysis

Task completion rate, number of attempts, task completion time, and movement quality were examined to evaluate task design in VR and compare against RW results. These attributes were chosen because they provide a comparative measure of task difficulty. A task analysis was performed to decompose each task into the subtasks that must be completed to finish it. Task completion was binary; if a subject partially completed a task, then it was marked as incomplete. Completion rate was calculated by summing the total number of completions and dividing by the total number of attempts across all subjects. Subtasks were also rated on a binary scale for completion to better understand which parts of a task posed the most difficulty. This information, paired with object characteristics and interactions, provided insight into each activity and its motion requirements.

Task attempts were defined as the number of times a subject picked up or began interacting with an object and began movement toward task completion. Attempts at each of the subtasks were examined as well. Since there were numerous techniques a subject could use to complete the tasks, each subject’s recorded performance was reviewed.

Time remaining for the VR tasks was converted to completion time by subtracting the time remaining from the total time. Completion time, a continuous variable, was defined by how much time it took subjects to complete a task. Completion time for the subtasks and the tasks as a whole was compared to understand whether object characteristics and interactions affected task difficulty.
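As a worked example of these definitions, the sketch below pools completion rate across subjects and recovers completion time from the logged time remaining; the trial records and field names are illustrative.

```python
# Sketch of the outcome-metric definitions above: completion rate is
# total completions divided by total attempts pooled across subjects,
# and VR completion time is total task time minus logged time remaining.
# The trial data are made up for illustration.

TOTAL_TIME_S = 60.0

trials = [
    {"completed": True,  "attempts": 2, "time_remaining_s": 41.5},
    {"completed": False, "attempts": 5, "time_remaining_s": 0.0},
    {"completed": True,  "attempts": 1, "time_remaining_s": 52.0},
]

completions = sum(t["completed"] for t in trials)
attempts = sum(t["attempts"] for t in trials)
completion_rate = completions / attempts  # 2 / 8 = 0.25 here

# Completion time is only defined for completed trials.
completion_times = [TOTAL_TIME_S - t["time_remaining_s"]
                    for t in trials if t["completed"]]

print(f"completion rate: {completion_rate:.2f}")
print(f"mean completion time: "
      f"{sum(completion_times) / len(completion_times):.1f} s")
```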

Movement quality was defined by the amount of awkwardness and compensatory movement a subject used during their attempts to complete a task (Resnik et al., 2013; van der Laan et al., 2017). Compensatory movements are atypical movements that are used to complete tasks, e.g., exaggerated trunk flexion to move an object (Resnik et al., 2013). These compensatory movements, along with extra steps toward subtask completion such as repeatedly putting an object back on the table to reposition it in the hand, add awkwardness to how a subject moves (Levin et al., 2015). The amount of awkwardness and compensatory movement is expected to negatively impact movement quality. A scale based on the one developed in the AM-ULA was used to quantify movement quality for each subtask. In the AM-ULA, a five-point Likert scale is used in which 0 points are given if a subject is unable to complete a task and 4 points are given if the subject completes the task with no awkwardness. The lowest score received for a subtask in the AM-ULA is the score given for the entire task. Reducing a task score to a single value was not done in this experiment, to provide granularity and insight into which subtasks caused the most difficulty for subjects. A modified version of this scale was used to assess the subtasks of each task. This modified scale rated movement quality on a four-point numerical scale, from 1, meaning the subject moved very awkwardly with many compensatory movements, to 4, meaning excellent movement quality with no awkwardness or compensatory movement. A score of N/A was recorded if a subject did not progress to the subtask before running out of time.

To analyze the data, log files were run through a custom MATLAB script (publicly available at github.com/dbp-osel/DARPA-HAPTIX-VR-Analysis), and the VR recordings were played in an executable included with MuJoCo. The VR recordings were inspected to verify that the task was completed and to identify the number of attempts to complete a task. The task log file containing the task completion times was exported at the end of each experiment for off-line analysis. Statistical analysis was performed with a custom script written in R. A McNemar test was used to compare completion rate differences. A Mann–Whitney U test was used to compare attempt rate and completion time. All statistical tests were run with α = 0.05 and with Bonferroni correction. The tasks were compared to determine whether there was a significant difference in task difficulty based on task design. Subtask scores and values (e.g., time in seconds) were averaged across all subjects for each of the high-level tasks. This provided a quick view of which subtasks were the most difficult for subjects to complete.
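The statistical script itself was written in R; the sketch below reproduces the same three choices, a McNemar test on paired binary completion outcomes, a Mann–Whitney U test on a continuous outcome, and a Bonferroni-corrected α, in Python with SciPy and statsmodels, using made-up data.

```python
# Sketch of the statistical comparisons described above (the study used
# a custom R script; this SciPy/statsmodels version is illustrative and
# all data below are made up).
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.contingency_tables import mcnemar

# McNemar test: paired completion outcomes for two tasks, as a 2x2
# table (rows: task A completed yes/no; columns: task B yes/no).
table = np.array([[10, 6],
                  [1,  5]])
mcnemar_result = mcnemar(table, exact=True)

# Mann-Whitney U test: completion times (s) for two tasks.
times_a = [12.3, 18.0, 25.4, 30.1, 16.7]
times_b = [40.2, 51.0, 38.8, 47.5, 59.9]
u_stat, p_value = mannwhitneyu(times_a, times_b, alternative="two-sided")

# Bonferroni correction over all pairwise comparisons of 8 tasks.
n_comparisons = 8 * 7 // 2    # 28 pairs
alpha = 0.05 / n_comparisons  # corrected significance threshold

print(f"McNemar p = {mcnemar_result.pvalue:.4f}")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
print(f"Bonferroni-corrected alpha = {alpha:.5f}")
```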

Results

Virtual Reality Task Completion Rate

Tasks Sphere Cup, Spoon Tran., Lock Key, and Checkers could not be completed by the subjects (p = 1), as shown in Tables 2, 3 (statistical comparison of task completion rate in VR for all tasks; p-values produced from the McNemar test with α = 0.05). Values marked with an * and highlighted in gray were found to be statistically significant. The completion rate for Move Cyl. was not significantly different from the aforementioned tasks (p = 0.0625). Tasks Sphere Bowl, Doorknob, and Writing had the highest completion rates, which differed significantly (p < 0.05) from those of tasks Sphere Cup, Spoon Tran., Lock Key, and Checkers. Of the seven subtask actions (reach, grasp, pick up, place, release, move, and rotate), the reach action had the highest completion rate regardless of the high-level task (82.73%) (Tables 4, 5).

Table 2. Summary of analyzed task characteristics for virtual reality (VR) and real world (RW).

Table 3. Statistical comparison of task completion rate in VR for all tasks.

Table 4. Summary of analyzed subtask characteristics for VR and RW.

Table 5. Average and standard deviation for VR characteristic values across subtasks and their high-level tasks.

Virtual Reality Task Completion Time

Since tasks Sphere Cup, Spoon Tran., Lock Key, and Checkers could not be completed by the subjects, there were no completion time data to compare between them, and thus no p-values to report. The remaining tasks were all found to have statistically significant differences in completion time (p < 0.05) (Table 6). On average, subjects took the longest to complete the reach and move actions, taking 5.96 ± 8.55 s and 8.3 ± 10.83 s, respectively (Tables 4, 5).

Table 6. Statistical comparison of task completion time for all tasks in VR.

Virtual Reality Task Attempt Rate

The average number of attempts at a task can be seen in Figure 7. Tasks that had a higher average attempt rate were most often found to have a lower completion rate. Tasks Sphere Cup, Sphere Bowl, and Doorknob had no statistical difference in attempt rates (p > 0.05) due to their low attempt rates. Tasks Lock Key, Checkers, and Writing had no statistical difference due to their high attempt rates (p > 0.05). All remaining tasks varied in the number of attempts and were found to have statistically significant differences in attempt rate from one another (Table 7). Subjects used the most attempts to complete the grasp action, with an average of 4.48 ± 3.83 attempts. The pick up, release, and rotate actions all had less than one attempt on average because subjects often did not reach these subtasks (0.3 ± 0.55, 0.07 ± 0.25, and 0.58 ± 1.13 attempts, respectively) (Tables 4, 5).

Table 7. Statistical comparison of task attempt rate for all tasks in VR.

Real-World Task Completion Rate

Task completion rate varied between the two task environments (Table 5). As mentioned previously, Sphere Cup, Spoon Tran., Lock Key, and Checkers could not be completed in VR (Table 2). The Doorknob task was the only task that could be completed 100% of the time in both VR and RW. Subjects were able to complete all seven subtask actions with over 95% accuracy regardless of the high-level task (Tables 4, 8).

Figure 5. VR and RW task completion percentage for all subjects. Subjects were only able to complete a subset of the tasks in VR, while they were able to complete all the tasks in RW.

Table 8. Average and standard deviation RW characteristic values across sub-tasks and their high-level tasks.

Real-World Task Completion Time

On average, subjects were able to complete the majority of the tasks faster in RW than in VR (Tables 5, 6). The Doorknob task was the only task that subjects were able to complete faster in VR than in RW. If a task could not be completed, then the data were excluded from the summary statistics. Subjects were able to complete all seven subtask actions in <1 s on average, regardless of the high-level task (Tables 4, 8).

Figure 6. Average time it took subjects to complete tasks in VR vs. RW. Tasks 2, 4, 5, and 7 do not have an average completion time in VR because they could not be completed. Task 6 was the only task that subjects were able to complete faster in VR than in RW. Error bars display standard deviation of the data.

Real-World Task Attempt Rate

On average, subjects required more attempts to complete tasks in VR than in RW (Figure 7). The Lock Key and Checkers tasks took the most attempts to complete in VR. The Spoon Tran. and Lock Key tasks required the most attempts in RW. Most subtask actions took an average of approximately one attempt to complete (Tables 4, 8).

Figure 7. Average number of attempts subjects made while trying to complete a task in VR vs. RW. All tasks required fewer attempts in RW than in VR. The characteristics of the items in the tasks (e.g., small size) had a more marked effect on number of attempts in VR than in RW. Error bars display standard deviation of the data.

Motion Quality and Subtask Analysis

Tables 5, 8 present the averages and standard deviations for motion quality (MQ), completion rate (CR), time (T), and attempt rate (AR) for VR and RW, respectively. Not all subtask actions were required across all tasks, and in some cases, subjects did not attempt a subtask; these areas are marked with “NA” in the tables. Across all tasks in VR, the reach action had the highest average motion quality (>2 points), denoted in green in the table. Completion rate was above 80% for subtasks with a motion quality score greater than two points in VR. Subtask actions that had a motion quality score of less than two points (denoted in red in the table) had a completion rate below 50% on average.

In the RW environment, the only subtask action with an average motion quality score <1 was rotate during the Lock Key task, with an average score of 0.917 ± 1.58 (Table 8). Tasks with a motion quality score above two points had an average completion rate above 50%.

Discussion

Virtual Reality and Real-World Task Completion Rate

Tasks with a low completion rate were difficult due to task characteristics and object interactions (Table 2). Subjects’ task performance varied greatly between the two environments. In VR, subjects struggled to complete the Move Cyl., Sphere Bowl, and Writing tasks and were completely unable to complete the Sphere Cup, Spoon Tran., Lock Key, and Checkers tasks. In the RW, subjects were able to complete all the tasks but struggled the most with the Lock Key task. The differences in performance can be attributed to the contact modeling in VR and object occlusion. Subjects reported an experience of “inaccurate friction,” which caused objects to slip out of the virtual hand more often than they would have in RW. Unrealistic physics in object interactions in VR has been shown to have a negative impact on a user’s experience (Lin et al., 2016; McMahan et al., 2016; Höll et al., 2018). This lack of accurate physics causes a mismatch between the user’s perception of what should happen and what they are seeing. Improvements are being made to physics calculations to more accurately model how an object should respond to touch (Todorov et al., 2012; Höll et al., 2018).

In VR, it was more difficult for subjects to see around their virtual hand to interact with the objects on the table. Because head tracking was not used in this experiment, the only way for subjects to see the task items from a different perspective was to use a mouse to turn the VR world camera, but this approach could be disorienting if the resulting view did not reflect the orientation of the hand. Object contact and occlusion also affected RW performance. In the Lock Key task, subjects tended to have difficulty picking the key up from the table and would occasionally apply too much force, causing the key to fly off the table. The prosthetic hand would also block the subject’s view of the key, leading the subject to lean from side to side to get a better view. There were cases where subjects accidentally slid the key off the table when it was occluded.

The subtask action that inhibited completion rate the most in both environments was the grasp action (Tables 5, 8). If subjects were unable to grasp an object, then they could not progress through the rest of the task. Grasp failure occurred when the object fell out of the prosthetic hand, forcing the subject to start over, or when the object fell off the table. Grasping, the flexion of the fingers around an object, is a necessary action for performing many ADLs (Polygerinos et al., 2015; Raj Kumar et al., 2019). Grasping requires precise manipulation of the fingers to form a grasp and apply enough force to keep an object from slipping free, as well as deformation of the soft tissue of the hand around an object (Ciocarlie et al., 2005; Iturrate et al., 2018). Researchers are developing methods that allow prosthetic devices to detect object slippage, as well as prosthetic designs that allow more human-like motion and finger deformation (Odhner et al., 2013; Stachowsky et al., 2016; Wang and Ahn, 2017). The ability to grasp reliably with a prosthetic device is of high importance to amputees who use prostheses, and the lack of this ability can result in amputees choosing not to use a prosthetic device (Biddiss et al., 2007; Cordella et al., 2016).

Virtual Reality and Real-World Task Completion Time

Subjects on average were able to complete the tasks faster in RW than in VR. Object contact and occlusion affected these results as well. With each failure to maintain object contact in the RW and VR environments, subjects were required to restart the object manipulation attempt. When objects were occluded during interaction attempts, it took time for subjects to realize they had missed an object pickup, or time was spent manipulating objects into high-visibility locations to ease interaction. The Doorknob task was the only task subjects completed faster in VR than in RW because it was easier to turn the virtual doorknob: its resistance to turning was very low, so minimal contact was needed. The control scheme for the RW prosthesis could also have slowed the completion time for this task. The rotation speed of the RW prosthesis wrist was proportional to the tilt angle of the subject’s foot; for example, the Doorknob task could be completed faster if the subject used a steeper inversion angle to make the wrist rotate faster.

Virtual Reality and Real-World Task Attempt Rate

Attempt rate and completion rate were negatively correlated for most of the tasks. The Lock Key and Checkers tasks had the highest attempt rates of all the tasks and the lowest completion rates due to small-object manipulation and occlusion. This is also reflected in the increased number of attempts at the grasp subtask action in these tasks (Tables 5, 8). In comparison, the Sphere Bowl and Doorknob tasks had the lowest attempt rates and high completion rates due to the manipulation of large objects or objects locked onto the table. However, the Sphere Cup and Writing tasks did not show the same negative relationship. The Sphere Cup task had a low attempt rate because subjects failed early in the task, which also contributed to its low completion rate. The Writing task had a high attempt rate because the round pen lay flush with the table, causing it to roll away from subjects as they attempted to pick it up. However, subjects were able to prevent the pen from rolling off the table, allowing them to complete the task.

Repeated, ineffective attempts at completing a task can negatively impact a person’s willingness to use a prosthetic device. Gamification of prosthesis training is intended to make training more enjoyable and provide a steady stream of feedback (Tabor et al., 2017; Radhakrishnan et al., 2019), though these training games need to be designed appropriately to avoid unnecessary frustration. Frustration during training and device use has been shown to cause people to stop using their device (Dosen et al., 2015).

Effect of Motion Quality on Completion Rate

Motion quality scores were positively correlated with task completion rate in both environments. Object view obstruction contributed to decreased motion quality scores. Subjects would flex and abduct their shoulders or bend their torso laterally in an effort to see around the prosthetic device they were using. Subjects were also more likely to use compensatory movements when they knew they were running out of time to complete the task. Between the two environments, VR had lower motion quality scores, due to subjects’ slow movements while attempting to complete these tasks and their rushed reactions to objects moving away from them. Compensatory movements are known to put extra strain on the musculoskeletal system (Carey et al., 2009; Hussaini et al., 2017; Reilly and Kontson, 2020; Valevicius et al., 2020). This strain can eventually lead to injuries that could cause an individual to stop using their prosthesis. It is important for prosthetists to identify compensatory movements and help train amputees to avoid habitually relying on these types of motions.

Study Limitations

The lack of RW-like friction, object occlusion, and prosthesis control issues all negatively affected the results. These factors made it difficult for subjects to complete tasks, increased the amount of time needed to complete a task, and required subjects to make multiple attempts. While task completion strategies positively impacted the results, the tactics that could be applied in one environment were not always compatible with the other. In RW, subjects would slide objects to the edge of the table to gain access to another side of the object or to make it easier to get the prosthesis under the object. This tactic could not be applied in VR due to the placement of the motion capture cameras and the inability of the hand to go beneath the plane of the tabletop. Future VR environments should allow subjects to practice all object manipulation tactics possible in RW, while giving prosthetists control over restricting the available tactics for training purposes. Future work will need to explore a within-subject design to study the translatability of findings between the two environments.

Another limitation is the difference in training between the two environments. Subjects in the VR experiment were not given training or time to practice picking up objects. The use of the CyberGlove allowed subjects to use their own hand to manipulate the virtual prosthesis, reducing the need to train on device control, but subjects did not know how the virtual prosthesis and objects would interact. Practicing object manipulation on non-task-related items may have improved performance outcomes in VR. While subjects in the RW experiment were given training, it was not extensive enough to impact performance: Bloomer et al. (2018) showed that several days of training are needed to improve performance with a bypass prosthesis. The training given to subjects in this experiment was meant to provide baseline knowledge of how to use the device. Future work should provide light training for subjects in both VR and RW to ensure that subjects have comparable baseline knowledge.

Conclusions

The results showed that performance can vary greatly between the two environments, depending on task design in VR and the task setup in RW. VR could be used to help device users practice multiple methods of completing a task to later inform strategy testing in RW.

Given the results of this study, virtual task designers should avoid placing objects flush with a table, avoid requiring subjects to manipulate very small objects, and ensure that contact modeling is sufficient for object interactions to feel “natural.” Objects that are small or flush with the table are easily occluded. With improved contact modeling, task objects would be less likely to fall out of the virtual hand while subjects attempt different grasps. These factors make it difficult to manipulate objects in VR, producing artificially poor results that limit the translatability of training and progress tracking. The results of the Move Cyl., Sphere Bowl, Doorknob, and Writing tasks were most similar between the VR and RW environments, suggesting that these tasks may be the most useful for VR training and assessment.

Prosthetists using VR to assist with training should use VR environments in intervals and assess frustration with the training. Performing VR training in intervals would give both the prosthetist and the amputee time to assess how this style of training is working. Reducing frustration will improve training and help reduce the chance of the amputee abandoning his/her prosthesis.

Additional research is needed using the same prosthesis control schemes between the two environments. Two different control schemes were used in this study, one natural control (“best-case”) scenario and one with the actual prosthetic device control scheme. Even with the best-case scenario control scheme, subjects were unable to complete half of the tasks due to the aforementioned issues. A comparison of performance in VR and RW with the same control scheme would provide more insight into what types of tasks prosthetists could have amputees practice virtually. The ability to virtually practice could help amputees feel comfortable with their devices’ control mechanisms and open the door for completely virtual training sessions.

