Enhancing Hybrid Learning

Thacher Shields | Nick DeBakey | Danait Teklemariam | Vianca Barlis

 

Brief

This is a design project I led during my time at the UNC Charlotte Graduate School while earning my Master’s Degree in Information Technology. We were tasked with designing a new technology product to improve the “hybrid learning” experience that took hold during the COVID-19 pandemic. In a hybrid learning environment, half of the class is seated in the classroom while the other half joins from home over Zoom. The primary pain point we identified in our research was that in-person students felt their remote counterparts were not paying attention or contributing the same amount of effort. Conversely, the remote students felt that their contributions were underrecognized by the in-person students. To address this pain point, we designed a novel video conferencing and whiteboarding system built around a 3D-tracked virtual avatar, which provides visual feedback that communicates focus and attention without requiring remote students to transmit their actual camera image, protecting their privacy and increasing convenience. The idea was well received and earned us an ‘A’ on the project.

 

Design Challenge

The design challenge we selected is the Burson Hall physical/virtual hybrid classroom renovation. We developed two design goals to guide the renovation process and improve upon the current structure. Broadly, we aim to improve collaboration between virtual and in-person students by increasing interaction capabilities across both groups and mitigating common privacy concerns. Over the past seven weeks, we have refined our design goals to better address the problems discovered in our user research.

The first design goal is to increase interaction and engagement from remote students. Our research found that remote students often choose not to interact with the class when they can get away with it, engaging instead with distractions like social media. One common way educators mitigate this is by requiring students to turn on their webcams to be monitored. Unfortunately, not all students have access to a private office or study space. A common theme among students surveyed was that they did not feel comfortable being on camera because they did not like sharing their personal spaces with other students and their instructors. We built our Allison persona (below) to reflect the frustrations felt by remote students who do not like being on camera. To smooth this pain point, we are implementing a technology in line with Apple’s Memoji system that allows remote students’ facial expressions and movements to be tracked by their webcam and applied to a virtual 3D cartoon avatar that the other students can see.

The second design goal is to provide in-person students with more interaction modalities and collaboration tools so they can communicate complex ideas to remote students more easily. Our focus groups revealed that academically oriented, highly motivated students often feel frustrated when working with remote students because they feel they cannot communicate with them adequately. We used these frustrations to develop our Steven persona (below). Because the exchange of ideas is an integral part of collaborative work, we have prioritized this goal. Our solution is to offer in-person students new tools with higher levels of interaction, enabling them to leverage modern technologies to work better with remote students. The two technologies we propose are remote presence video monitors and large-format touch screen tables. In-person students will see and converse with their remote teammates using the remote presence video display. When they need to express complex ideas, they can use the large-format touch screen table to draw diagrams and models that the remote students can see.

 

Usability Issues

In our focus groups and user studies, we found common usability issues that were shared among many participants. Interestingly, once the focus group participants began to converse with one another, they unanimously agreed upon three common usability issues with the currently implemented technologies.

One of the most echoed sentiments in our user studies and focus groups was that remote users did not like being on camera for class. They remarked that it made them feel uncomfortable and required too much preparation before each class; accordingly, the remote students surveyed greatly preferred attending remote classes that did not require video participation. Another common usability issue was that in-person students had trouble communicating with remote students using existing technologies like Zoom. Audio/visual communication platforms like Zoom often do not adequately allow users to express complex ideas, and our users remarked that these limitations frequently become frustrating.

Lastly, users surveyed commented that they wanted to avoid added complexity in their workflows. They commented that they felt that many of their instructors were not adequately trained on the technologies and systems used in their classes, and they feared that our design would add to that complexity and increase pain points. For our design to enhance the experience, it must be easily integrated into existing workflows and curriculums and not require extensive training and instruction to use.

 

Design Task Goals

We have developed two specific task goals to help inform our design. These task goals define big-picture user flows that can be performed using the system. They have been developed using feedback from our focus groups and user studies.

Our first task goal is to select a virtual avatar and set up face tracking. Remote students should be able to easily select a virtual avatar to represent them in class. This virtual avatar should mimic their exact facial expressions, mouth movements, and head position in 3D space. This form of virtual presence sits safely to the left of the uncanny valley, yet offers enough fidelity to enhance interactivity well beyond the level of voice chat alone.
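As a sketch of how the face-tracking pipeline behind this task goal might work, the snippet below maps facial-landmark readings to avatar expression parameters and smooths them with an exponential moving average so the avatar’s motion stays stable between noisy frames. The class name, parameter names, and input values are our own illustrative assumptions, not part of any specific tracking SDK or of the final design.

```python
from dataclasses import dataclass, field

@dataclass
class AvatarExpression:
    """Smoothed expression parameters driving the 3D avatar.

    Inputs are hypothetical normalized readings in [0, 1] (e.g. from a
    face-tracking SDK); `alpha` controls how quickly the avatar reacts.
    """
    alpha: float = 0.5  # smoothing factor: higher = more responsive
    state: dict = field(default_factory=lambda: {
        "mouth_open": 0.0, "smile": 0.0, "head_yaw": 0.0
    })

    def update(self, raw: dict) -> dict:
        # Exponential moving average keeps the avatar from jittering
        # when individual frames produce noisy landmark estimates.
        for key, value in raw.items():
            if key in self.state:
                prev = self.state[key]
                self.state[key] = self.alpha * value + (1 - self.alpha) * prev
        return dict(self.state)

# Example: two frames of a student starting to smile.
expr = AvatarExpression(alpha=0.5)
frame1 = expr.update({"mouth_open": 0.2, "smile": 1.0, "head_yaw": 0.0})
frame2 = expr.update({"mouth_open": 0.2, "smile": 1.0, "head_yaw": 0.0})
```

The smoothing trades a small amount of responsiveness for stability, which keeps the avatar clearly to the left of the uncanny valley even when tracking quality dips.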

The second task goal is to draw a diagram of the networked devices in your home. This task is challenging to perform through voice and video alone, but a medium like Jamboard, FigJam, or Visio makes it trivial. Users will use a physical pen-like device to draw diagrams on their work surface that the remote students can see.

 


Personas and Stories

Following from our design goals and design challenge, our next step was needfinding, a process we used to further understand our users’ needs and gather participants’ reactions to our design through interviews. Our intended audiences are students and educators. We classified students as those taking classes remotely, in person, or in a hybrid format; on the educator side, there are professors and teaching assistants. We conducted two focus groups during this process, each with four participants, with two team members facilitating each group. Our first focus group interviewed one full-time and one part-time Computer Science student, a Linguistics student, and a Criminal Justice student at UNC Charlotte. The other focus group comprised a transfer student in Graphic Design, a Google certificate student, two Nursing students from UNCC, and one community college student. This diverse group of students helped surface pain points across multiple majors and levels of education.

 

Personas

Interviewing our participants was the next step. We conducted the interviews over Zoom and recorded each session to aid transcription. After conducting the interviews, we reviewed the videos and transcribed them into notes, focusing on the common trends, themes, similarities, and differences among responses depending on each participant’s learning method. To distill this feedback, we developed personas that present reliable, realistic representations of our audience. A persona is a composite our team drew from interview responses, so it does not represent everyone’s experience. We developed two personas representing the common feedback and responses gathered from our focus groups.

The first persona we developed is Steven Baker, a Computer Science major at UNCC who attends classes in person. Steven is very invested in his classes and holds himself to a high academic standard. His primary frustration is the difficulty of communicating complex ideas to his remote group mates. More detailed information is shown in Figure 1 (below).

 

persona 1

Figure 1: Steven Baker Persona

 

As Steven’s persona shows, he is a perfect example of the user our design concept aims to serve. He fits both of our intended user groups, student and educator, since he is also a teaching assistant. His primary concern is that, since classes switched to a hybrid model with half the class seated and half remote, he struggles to express and communicate high-concept ideas to his remote group mates using existing channels and media.

Steven is a perfect candidate for our design because he struggles with the limitations of existing solutions. New interaction modalities and channels will dramatically improve Steven’s ability to communicate with his remote teammates. In addition, the ability to draw diagrams, share images, and highlight documents will help address his issues.

Our second persona is Allison Davis. She is a single mom who works from home to take care of her family and, as a result, attends class exclusively online. Allison’s primary frustration is that she does not have a private space in which to attend class; she worries about her privacy when courses require webcam participation and does not want her instructors and classmates to see into her home.

 

persona 2

Figure 2: Allison Davis Persona

 

One of our design goals is to give remote users confidence when appearing on camera, and Allison is a perfect fit for our design. She can continue taking classes remotely with the confidence that her instructors and classmates will not see into her home and judge her, because a 3D virtual avatar will represent her. This is different from existing technologies like Zoom that can only remove the background; our design lets users remove their physical presence altogether. By using a virtual avatar, Allison does not have to take time out of her already busy schedule to alter her appearance before class. All the while, she can use facial expressions and movements to maintain a more substantial remote presence in the classroom.

 

Storyboards

After gathering information from participants and developing our personas, we created storyboards that fit them. There are two storyboards, each with six panels, telling the stories of our personas and the frustrations they face.

Our first storyboard, shown in Figure 3, follows Allison and tells a story of how our design will help her. The first panel shows a clock displaying 8:00 AM. Allison has class at 8:15 and does not have time to get ready, so instead of appearing on camera she chooses to be represented by a 3D virtual avatar, which alleviates some of her privacy concerns. She can choose from several avatars, and she picks one she finds fun and approachable, Petunia the Pig, then joins the class by pressing Join. The system tracks her face, and the Petunia avatar mirrors her expressions and movements; this high-fidelity face tracking gives her a more substantial and engaging presence in the classroom. After completing these steps, she joins the class represented by her avatar of choice, ensuring her privacy is protected while she remains visually present in class.

 

Allison storyboard

Figure 3: Allison Storyboard

 

Figure 4 displays the storyboard for our Steven persona. The first panel shows Steven in the classroom with his remote group mate; they are discussing how to build an algorithm for a class project. The next panel shows Steven struggling to explain a complex idea to his remote group mate. Steven then resolves to use the interactive work surface to draw a diagram to help express his idea, and this added medium of communication allows the remote student to understand it. In panel five, Steven’s remote group mate understands the idea and feels confident about the project. Finally, in panel six, both group mates agree on their algorithm plan because they were able to share their ideas effectively.

 

Steven storyboard

Figure 4: Steven Storyboard


Interaction Design

Prototype (Remote App)

We have created two prototypes, one each for the in-person and virtual students to interact with. With our prototype design, we want to encourage rich collaboration among all hybrid students to enhance their learning. The remote student views the classroom through their laptop and can still provide a virtual presence, either via their webcam or via an avatar that tracks their facial movements. The first user journey walks through the prototype as a remote student.

After logging into the class, the remote student can use their web camera or select a Memoji-style avatar (Figure 5). On the Avatar Selection page, we provide avatar options to give virtual students some privacy while still allowing them to be visible in class. The affordances here are the avatar selection options, the right-arrow button for browsing, and the green button. The signifiers are the icon and words for “Use web camera,” the green circle around the selected avatar, the right-arrow button, and the word “Next” on the green button. Once the user lands on this page, they can navigate through the avatar options, select the one they want to use during class, and click Next.

 

Avatar Selection screen

Figure 5: Avatar Selection screen: remote student selects an avatar instead of their web camera

 

On the Welcome page, after selecting an avatar, the remote student prepares to enter the hybrid class (Figure 6). The system tracks the user’s face to mirror their facial expressions for others to see. Here, the remote user can preview their webcam or avatar face tracking, change their avatar, and set up their audio settings. The affordances include the avatar, the blue and green buttons, and the drop-down options. The signifiers that aid these affordances are phrases and icons such as “Tracking your face…,” “Change avatar,” and “Join Class,” along with the video, audio, and speaker icons. Once the remote student is satisfied with their settings and ready to enter class, they click the Join Class button.

 

Welcome screen

Figure 6: Welcome Page: Avatar face recognition is in progress while remote student prepares to join class

 

When the remote student has successfully joined the class, they can see the full classroom view and their group members (Figure 7). The remote student can interact with the instructor and classmates more effectively because they have a remote presence in class, which is shown in the in-person prototype (Figure 9). The other remote students’ avatars appear on the left panel, along with the user’s own. The center panel contains the group-viewing camera and the surface table’s screen share. The camera view automatically shows the professor when the professor is lecturing and no group collaboration is happening, and the screen share shows the main board the professor is writing on. Finally, the right panel holds the group chat.

On this page, the affordances include the remote students’ avatars, the camera and screen-sharing views, and the chat box on the right panel. The signifiers include the mic and video icons on the user’s avatar, indicating that they can control their presence in class, the “Type in chat…” prompt, and the send icon in the chat.

 


Figure 7: Omni View: the remote student can view the lectures and collaborations, and connect with others

 

From the Omni view, the remote student can also interact with the class and screen-sharing views by clicking on them to make them larger, as shown in Figure 8, giving them the option to focus on their viewing choice. As in the Omni view, the affordances include the remote students’ avatars, the viewing focus (either the class camera or the work surface), and the chat box at the bottom. The signifiers include the remote students’ names and the minimize icon at the top right of the video or work-surface focus; clicking this icon returns the user to the Omni view.

 

video focus

Figure 8.1: Video focus

 

work surface focus

Figure 8.2: Work surface focus

 

Prototype (in-person app)

The second of the two prototypes is for the in-person students within the hybrid classroom. This prototype covers the interactive wall and table devices that aid in connecting with remote students. The interactive wall is reserved for the remote students’ video feeds or avatars, while the interactive table is for collaboration between in-person and remote students.

While students sit at their interactive tables, the interactive wall beside them displays the remote users’ avatars or video feeds. Showing remote students’ virtual presence on the interactive wall assures in-person students that their classmates are contributing to the group discussions and collaborations, and it is also beneficial when the group is presenting or discussing side by side. The wall has a few buttons the in-person students can interact with, located at the bottom of the wall screen, with basic functions such as ending the call, muting microphones, and turning on video feeds (Figure 9). Each button bears an icon that most users will recognize, and each sits on its own colored circular background against a dock of a contrasting color, signifying that it can be interacted with. This supports our second design goal and helps satisfy remote students who do not wish to be on camera but can still appear active and attentive to the in-person students.

 


Figure 9: Interactive wall

 

When the in-person students are seated at the table, the surface is in its resting state, with all applications closed and a UNCC logo displayed as wallpaper (Figure 10). Students can see which applications are available for use: sticky notes, Google and various Google services, Figma, YouTube, Adobe XD, and Jamboard. In addition to these applications, there is a button to enable screen sharing, an ellipsis menu for additional content, and a power button. The table could host any software or website; we chose these to reinforce the idea of collaboration between multiple users, in person and remote. This supports our design goal of keeping users working together rather than becoming distracted, as all the in-person students work on one surface with the remote users.

 

Interactive surface table

Figure 10: Interactive Surface Table - Home

 

In this example, the Google Jamboard website has been opened by tapping its icon (Figure 11.1). From here, the in-person students can use the stylus to jot down brainstorming ideas in the application (Figure 11.2). The remote student can view the surface table’s screen and contribute to the group work simultaneously. When an application is opened, it takes over the main view of the entire table, with the dock remaining at the top of the screen.

 

Interactive surface table

Figure 11.1: In-person student opens Jamboard on the surface table

 

Interactive surface table

Figure 11.2: Jamboard with collaborative work

 

The goal of making collaboration between in-person and remote students seamless can be achieved using interactive walls and work surface tables in class, and the affordances and signifiers help make the prototype more user-friendly.


Evaluation

User Studies

To evaluate the design and prototype, we conducted user studies with three goals: derive actionable items for improving interaction from user feedback, determine the effectiveness of the current design, and gain insight into user confidence in the interaction flows and information design.

To recruit participants, we reached out to individuals we know personally, focusing on those with experience in hybrid or remote schooling. The study was conducted in person, with participants using a laptop for the remote-student portion and a laptop and iPad for the in-person portion. Before each participant engaged with the system, the host read an introduction explaining the purpose of the study, instructions for the think-aloud method, and the format of the study, including the tasks and follow-up questions. Participants then began the tasks, thinking aloud while the host timed their completion. Once all four tasks were complete, the host followed up with post-task questions to gather additional information. The post-task questions were as follows:

  1. Which of the tasks did you find to be the most challenging?
  2. What changes, if any, would you make to the design? Why?
  3. Would this system improve your hybrid learning experience? Why?
  4. If a system like this was offered, would you be more inclined to take hybrid courses in the future?
  5. Would a system like this give you more or less confidence when working in a hybrid environment?

Our user studies included five participants, whom we aligned with our personas: two aligned more with Allison, while three aligned more with Steven. Our participants included a UNCC alum, a UNCC Graduate student, a UNCC Nursing student, an online Google IT Certificate student, and a Journalism major.

Our prototype has two components, one for in-person students and one for remote students, so we created two tasks for each learning style. The tasks for the remote and in-person student are as follows:

  1. As a remote student, once you are in the omni view of the class, change your avatar to a different one.
  2. As a remote student, focus on the camera viewing of the instructor.
  3. As an in-person student, draw a diagram using the work surface to share with your virtual group mate.
  4. As an in-person student, while working on the Jamboard collaboration on the surface table, open YouTube and search and play “Everything You Need To Know - Adobe XD Update (2022).”

We collected several kinds of data from our user studies: think-aloud observations, post-task question feedback, and quantitative task-completion times. Reflecting on the think-aloud feedback, most participants enjoyed the simple, clean look of the interfaces. Within the remote application specifically, there was significant appreciation for the option to be represented in class by an avatar rather than a webcam feed, though some participants did not favor the focused viewing of the surface table and class camera. For the in-person application, many liked the collaborative experience and thought it would help with hybrid learning, and some mentioned that the icons on the interactive wall could be improved and labeled. In the post-task feedback, all participants said they faced no challenges completing the tasks. Some minor improvements were suggested, such as the ability to resize the remote student’s view more freely and to keep the video/audio controls visible consistently on all screens. Lastly, most participants remarked that the system has great potential to increase learning opportunities for both in-person and remote students.

 


Table 1: Task completion times and system ratings

 


Figure 12: Aggregated task completion times

 

The quantitative data we collected is displayed in Table 1 above, and an aggregation of that data is displayed in Figure 12. We timed how long each participant took to complete each task and asked them to rate each system on a scale of 1-5. All but one participant had similar completion times; the Google Certificate student appears to be an outlier, with very large task completion times. Even so, the data reveals areas for improvement, such as opening a YouTube video: that task took all participants roughly the same amount of time, and it was much longer than all previous tasks. In terms of ratings, the participants were nearly in agreement, with only a one-point spread for each system: the in-person system received three 5’s and two 4’s, and the remote system received four 4’s and one 3.
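A lightweight way to reproduce this kind of aggregation is sketched below using only the Python standard library. The numbers are placeholders for illustration, not the study’s actual measurements; the outlier check simply flags any participant whose total time exceeds the group mean by more than one standard deviation, a loose heuristic for such a small sample.

```python
from statistics import mean, stdev

# Placeholder completion times in seconds (participant -> [task1..task4]);
# illustrative values only, NOT the study's actual data.
times = {
    "P1": [20, 15, 40, 70],
    "P2": [22, 14, 38, 75],
    "P3": [19, 16, 42, 72],
    "P4": [21, 15, 39, 68],
    "P5": [60, 45, 90, 150],  # hypothetical slow outlier
}

totals = {p: sum(t) for p, t in times.items()}
mu, sigma = mean(totals.values()), stdev(totals.values())

# Flag participants more than one standard deviation above the mean total.
outliers = [p for p, total in totals.items() if total > mu + sigma]

# Per-task means show which task slowed everyone down.
task_means = [mean(t[i] for t in times.values()) for i in range(4)]
slowest_task = task_means.index(max(task_means)) + 1
```

With these placeholder values the heuristic flags the one slow participant, and the per-task means single out the final (YouTube) task as the slowest, mirroring the pattern described above.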

Results

After collecting and analyzing the data, we conclude that our prototype is a strong first step toward a system that will enhance hybrid learning. As expected, some aspects of the system still require adjustment. Overall, the participants enjoyed the systems, with the in-person system rated higher than the remote system. The remote system needs more flexibility, greater ease of use, and buttons with stronger signifiers; the in-person system likewise needs stronger signifiers on the buttons of the interactive wall component. The task-completion times and aggregated data also indicate that using applications on the interactive table should be streamlined. Above all, participants thoroughly enjoyed the idea of being represented by an avatar rather than a live webcam feed.

Recommendations

After conducting the user studies and collecting, evaluating, and analyzing the data, we identified a few notable improvements to the prototype. Within the remote app, the controls for changing views should be more visible and behave consistently; one way to achieve this would be to give the minimize buttons a white background with a drop shadow. Users should also be able to freely resize the views of the surface table and class camera, and the ability to interact with the work surface from the Omni view should be implemented. In both the in-person and remote apps, the work surface should be maximizable to full screen, with optional light and dark modes.


Appendix

Think Aloud

Participant A: Online Google Certificate Student

Remote: “Okay, after logging in as a remote student, I land on the page where I get to choose to use my camera to show my face, or maybe I will use an avatar… I can see that there’s an arrow to look for other avatar options but for the purpose of this prototype, I will continue with the already picked avatar. Click Next. Welcome, Allison, I like that. So now I think I’m waiting for the system to detect my face? I can change my avatar again and I see options to change my devices for the webcam and microphone. I’m ready so I will join the class. Oh wow, a lot to see in class. I like that I can see the other remote students and the view of the class. I think the chat box is also nice to have so that I can type in questions if someone is still talking. Now for the first task, I have to go back and change my avatar again. Hmm, there’s no button here on the Omni view. Maybe it’s on my profile [avatar video and audio control box]. Yes, it is. It takes me back to the screen where it says, Welcome Allison. Then I will click on the Change Avatar button. I’m back on the very first page to choose an avatar option. For the second task, I go click Next after choosing an avatar, then Join Class. I see the Omni view… I need to focus the camera viewing on the instructor. Okay… I assume to make the viewing bigger, I have to click on the viewing option. Click on the class view. Nice, I was able to get it.”

In-Person: “Now as an in-person student. After logging in to the surface table, I see the UNCC logo. Hmm, what’s an app that I can draw in? I will choose Jamboard but this is my first time with it. Using the physical marker provided to me, I will draw using it. I drew a diagram. I think the screen is automatically shared with the remote student. So, I’m done with this task. For the last task… I will open YouTube to play a video. I see it pulls up the app on top of the Jamboard. That’s nice to make it better for multitasking.”

 

Participant B: Nursing Student

Remote: “OH, I love to join my classes with an avatar. Let’s see what I need to do first. I see I need to choose my avatar and hmmm ok I will choose this one. Ok pressing next, I guess it is tracing my face like the new bitmoji, nice I am loving this. Great, it is done pressing and I can join the class now. I see the class and it feels like I am physically in class. How cool is this! And of course, I see other cool avatars of my classmates. It is interesting how I can choose to see different views of the classroom. Choosing from a shared screen and classroom let me choose this one so I can see what is being shared. The first task said to go back and change the avatar ok let me click on my profile and I am clicking on the audio and video control box. It took me to the welcome screen, and I saw where I could change my avatar. Ok and I am taking the same steps I took earlier, and it works. Great, this finishes my first task on to the next one. The second task is to focus the view on the instructor, and I see everything the instructor is doing. This is nice and I think this completes the second task.”

In-Person: “For this task, I am using this table surface. It’s nice to share screens with teammates. I think it makes things easier than having individual laptops. First, I need to log in with credentials. I love the UNCC logo by the way. Ok, I see the whiteboard and all the applications on top I can use. I am assuming this is the icon for the online student. The third task is to draw. I wonder which application I need to choose. I guess it is not specified I choose Jamboard and I will use the marker that’s on the top right side of the surface table. Then I am drawing a random diagram. Moving to the next task, I am clicking on the YouTube icon alongside my diagram on Jamboard. It opens next to it, and I am searching for “Everything You Need To Know – Adobe XD Update (2022)” and here it is. I am now pressing play. Perfect I guess this ends my task and I love it.”

 

Participant C: UNCC Alum

Remote: “Ok. So I’m just supposed to tap through this and let you know when I have questions? Gotcha. So I see here that I can select an avatar. I am tapping on the- oh – it looks like I can’t pick that one. Ok got it. So I’m going to select the boy avatar and click join class. Now I see… it looks like it’s showing me my face? Oh ok I see now so this is me. And then… It looks like I can change my microphone. Ok I’m joining the class. Ok so I see what looks like my other classmates, the classroom, and a whiteboard. And this is the chat. So you want me to focus the camera on the class view? Got it! So the next task is… change my avatar… So I don’t see an option for that from here. How do I go back to the other screen? Is it this? Got it. Ok so I need to change my avatar. Maybe if I click on the avatar? Oh! Ok so now I see a changed avatar. Great!”

In-person: "I see what looks like an upside down iPad. My task is to… draw a diagram on the table. Alright so I'm going to open Word. Oh so I guess I can't open that… But it looks like this 'd' on the end is flashing. So I'm tapping the 'd'. Now I see a whiteboard and a pen. I'm tapping the draw icon- oh it looks like it wants me to tap the other pen on the right. Ok! So it looks like I've drawn my diagram! Ok next. I need to search for a YouTube video. I see a YouTube icon in the dock. Is that what that's called? Ok I'm tapping the YouTube icon. So I see it has taken me to the YouTube homepage. Now I'm going to tap the search bar. Ok I see a keyboard. I'm going to type the name- oh it looks like it filled automatically. Alright, task complete!"

 

Participant D: Undergrad Student

Remote: "Ok so first I'm going to choose my avatar. I picked this one. And I'm clicking next. And uh I see it says tracking face. And um. To join class, I'm going to click the green button that says join class. Ok so I'm in the class and I can see the avatars of my classmates and myself. Um. As well as the um classroom video and a shared screen. So if I want to focus on the classroom, I would click on the classroom screen. And that made the classroom screen larger. Ok so to go back I'm going to click the little button in the right hand corner of the classroom screen to minimize that. And to change my avatar… I'm gonna click on my avatar again. And it takes me back to the welcome page where I have an option to change my avatar. So I'm going to click that. And I'm going to pick this other avatar over here."

In-person: "Uh I'm going to click this option. Which seems to have been wrong, so I'm going to guess it's this one. The D at the top of the screen. And so… To draw, I'm going to click the pen. Alright so I'm going to go to the top bar and select the um YouTube icon. And then that takes me straight to YouTube. So now I'm going to click on the search bar and um ok I see the keyboard. I'm going to type in my- oh. So I guess I'm done? I saw it went ahead and searched for me."

 

Participant E: Journalism Major

Remote: “Um… okay, I’m logging in as a remote student and now um I am picking a camera or an avatar. Okay so I guess I’m just using this avatar that’s picked. Okay um.. Now I’m going into the class. There’s a lot of windows. Okay so the first task is to change my avatar which I think I can do it like this. Okay. Yeah, okay that’s how. Okay so now the second task. That’s to change the view. Um. I think I can do it like this? Hm… Oh, okay I got that. I’m thinking that I like the gallery view of the students and that I’m separate from them. And I like that I can see all of my other classmates rather than just my instructor. And I like this. I believe this is like a, like a drawing area or like an interactive something.”

In-person: "Okay, so this is in-person? Okay. This is pretty simple, there's not a lot going on so I don't really know what to say. The first task is to draw something okay. Let me try these up here. Um.. okay that one worked. Okay. Did that. Now to look up that YouTube video. So I'm looking that up, and oh, okay. It typed it for me, okay there's the video. Cool. This makes me feel like, it makes me feel like I'm involved more in my class."