International Workshop on Cross-Reality (XR) Interaction @ ACM ISS 2020

YouTube stream link

We are organising the first international workshop on Cross-Reality (XR) Interaction. XR describes the transition between, or concurrent usage of, multiple systems on the reality-virtuality continuum. While some expect the distinctions between Augmented Reality (AR) and Virtual Reality (VR) to fade away in time, it is still helpful to see them positioned on Milgram's reality-virtuality continuum of 'realities' with various degrees of virtual content. Being a continuum, it is possible to envision (i) a smooth transition between systems using different degrees of virtuality or (ii) collaboration between users of systems with different degrees of virtuality. This workshop aims to bring together researchers and practitioners who are interested in XR to identify current issues and future directions of research; the long-term goal is to create a strong interdisciplinary research community and to foster the future development of the discipline and collaborations.

Call for Participation

You are invited to participate in the first international workshop on Cross-Reality (XR) Interaction. XR describes the transition between different systems on the reality-virtuality continuum (e.g., from reality to augmented reality (AR) and virtual reality (VR)), as well as the concurrent usage of multiple systems on this continuum for collaborative purposes. The aim of the workshop is to bring together a community of researchers, designers, and practitioners who are interested in 'cross-reality' (XR) interaction from an academic or practical perspective. We invite you to make submissions relating to the workshop topic or any of the themes of the workshop:

  • XR conceptual models and design principles
  • Interaction, transition, and visualization or perception techniques
  • Real-world tool use and tangibles as input to XR systems
  • Collaboration across the reality-virtuality continuum
  • Input and output devices
  • Use cases in, e.g., education, industry, transportation, sports, healthcare
  • Evaluation of XR experiences
  • Privacy and security in XR
  • Any other XR related topic

Prospective participants should submit a 2 to 4 page position or work-in-progress paper in the ACM Master Article Submission Template (single-column; references do not count towards the page limit) describing their interest and/or previous, current, or future work related to the workshop topic. Submissions are managed in the EasyChair submission system.

The deadline for submissions is 7 October 2020. Submissions will be juried by an expert committee, which will select participants based on the relevance and quality of the work within the broad domain of XR interaction. Each participant will have five minutes to present their position/work at the workshop, and at least one author of each accepted paper must attend the workshop and be registered for at least one day of the conference.

The proceedings of the workshop will be published on the website and in an online repository that is indexed by Google Scholar (e.g. HAL (https://hal.archives-ouvertes.fr/), arXiv.org, CEUR-WS.org).

The workshop will be organised online due to the pandemic. We will do our best to accommodate participants in disparate time zones.

Submissions and questions should be emailed to augusto.esteves@tecnico.ulisboa.pt.

Programme

Our half-day workshop will take place exclusively online via Mozilla Hubs, an increasingly popular 3D collaboration platform that is accessible to both VR and standard browser-based users. Group activities will be supported via Miro, a collaborative online whiteboard platform designed for remote teams.

The current programme is available below:

  • 13:00 Introduction and presentation of the programme
  • 13:05 Presentation of papers (5-minute presentations + 4-minute Q&A for each paper)
    Take a look at the selected papers and abstracts.
  • 14:30 Break
  • 14:35 Keynote speech: Anasol Peña-Rios
  • 15:25 Break
  • 15:30 Introduction to group activities
  • 15:35 Group activities
  • 16:05 Presentation of activities
  • 16:25 Closing session
  • 16:30 Social gathering with DC participants

Keynote: Dr Anasol Peña-Rios

Anasol Peña-Rios is a Senior Researcher at BT Research Labs in Adastral Park, Ipswich, where she specialises in AI and immersive technologies (VR/AR/MR). In addition, she is a Visiting Fellow at the University of Essex (UK), where she earned a PhD in Computer Science and an MSc in Advanced Web Engineering. Previously, she completed a BEng in Information Technology at the Instituto Politécnico Nacional (Mexico).

Anasol has been personally driving the agenda on exploring the use of AI, digital twins, and immersive technologies in the context of BT's field force operations. Her work contributed to a BT portfolio that was presented with a Global Telecoms Business Innovation Award (2017) and an IEEE Outstanding Organisation Award (2017). In addition, her project was highly commended at the IET Innovation Awards 2018. She was awarded Best KTP Associate 2016 by the University of Essex Research and Enterprise Office and was shortlisted for the Computing Women in IT Excellence Awards 2018.

Anasol is an IEEE Senior Member, a Board Member and co-founder of the Immersive Learning Research Network, and a Board Member of the Creative Science Foundation. She serves as an Associate Editor of EAI Endorsed Transactions on e-Learning and as a Review Editor of Frontiers' Human-Media Interaction theme. Her other professional contributions include numerous peer-reviewed publications and editorial work, in addition to serving as chair and co-organiser of numerous academic conferences. She has more than 15 years' professional experience in industry, working in close collaboration with international multidisciplinary teams.

LinkedIn: https://www.linkedin.com/in/prlosana/
Twitter: https://twitter.com/prlosana
Website: www.prlosana.com

Post-workshop plans

You can join the Euro XR Slack (XR as eXtended Reality) to stay in touch.

We plan to keep this website up and running and have the papers available for interested stakeholders. The website will also serve as the entry point for the future activities presented below.

We are planning to submit a report about the workshop to a relevant journal (e.g. IEEE Access, Interact). Depending on the state of the research presented in the accepted papers, we also plan to hold an open call for a special issue of a relevant journal.

With one of the workshop's aims being to establish future directions of XR interaction, we will also hold a discussion on possible ways of strengthening the XR community and further promoting the field through various project proposals, such as European calls for Marie Skłodowska-Curie Innovative Training Networks and COST Actions. The workshop will thus also serve as a platform for future collaborations between participants.

Proceedings

The proceedings of the workshop are available at CEUR Workshop Proceedings:
Vol-2779
urn:nbn:de:0074-2779-6


Cross-Virtuality Visualization, Interaction and Collaboration
Andreas Riegler, Christoph Anthes, Hans-Christian Jetter, Christoph Heinzl, Clemens Holzmann, Herbert Jodlbauer, Manuel Brunner, Stefan Auer, Judith Friedl, Bernhard Fröhler, Christina Leitner, Fabian Pointecker, Daniel Schwajda and Shailesh Tripathi
PDF | Abstract | Video presentation | Discussion on the workshop
In X-Pro, we investigate novel user-centric methods and techniques for cross-virtuality analytics. Cross-virtuality analytics in our sense aims to enable a seamless integration and transition between conventional 2D visualization, augmented reality and virtual reality in order to provide users with optimal visual and algorithmic support with maximum cognitive and perceptual suitability, depending on their current tasks and needs in the analysis process.
We thus focus on methods and techniques mainly for production data, which promise a new quality of visual analytics along the reality-virtuality continuum, facilitating a completely different level of visual and spatial perception compared to the state of the art. Besides the conception and development of novel visualization techniques, we also concentrate on close collaboration and interaction of users within this continuum. For analyzing, exploring, and modeling data, the evaluation of trends and the detection of patterns, outliers, and correlations in the data will be of utmost importance. We aim to investigate concepts of novel visual metaphors and novel interaction concepts, together with their mathematical foundations, and to evaluate them in terms of their technical feasibility and their cognitive, perceptual, and ergonomic usability.
We believe that cross-virtuality analytics has the potential to fundamentally improve data-driven planning, control, optimization and quality assurance.

Participant_01:
Your system deals with transitions between different virtualities. But how do you envision collaborative immersive analytics where collaborators use different systems on the virtuality continuum? This is particularly challenging since everyone might have a different view of the data visualised. For example, one person watches the 2D screen, another one is in the virtual space with 3D data flowing around, and so on.

Participant_04:
Visual awareness cues will be one deciding factor when implementing such a system. We envision collaborative immersive analytics across the RV continuum using real-time, non-obtrusive visual awareness cues which visualize the position, viewing direction, available interaction modalities, etc. of multiple participants. Additionally, switching between different users, or moving from the current position to another user's position and viewpoint, should be provided in such scenarios to facilitate communication and collaboration. One scenario might be: one person watches the 2D screen with 2D charts, and another one is in the virtual space wearing an HMD rendering 3D data. The virtual environment would then visualize both the other person (their position, viewing angle/direction) as well as a representation of the 2D screen. The person wearing the VR HMD could then "switch" to 2D mode by transitioning to the position and viewing direction of the other person; the data would be transformed into 2D space. Both persons would now be looking at the exact same representation of the data set.
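
A minimal sketch of how such awareness cues and viewpoint switching could be modelled, assuming a shared world coordinate frame; all names and fields below are hypothetical, not taken from the X-Pro system:

    from dataclasses import dataclass
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class Participant:
        user_id: str
        device: str            # e.g. "2d_screen", "vr_hmd" (hypothetical labels)
        position: Vec3         # pose in the shared world coordinate frame
        view_direction: Vec3
        modalities: List[str]  # e.g. ["touch"], ["6dof_controllers"]

    def awareness_cues(me: Participant, others: List[Participant]) -> List[dict]:
        """Describe one non-obtrusive cue per remote participant: where they
        are, where they look, and which interaction modalities they have."""
        return [
            {
                "user": o.user_id,
                "marker_at": o.position,
                "view_ray": o.view_direction,
                "label": f"{o.device} ({', '.join(o.modalities)})",
            }
            for o in others
            if o.user_id != me.user_id
        ]

    def switch_to(me: Participant, target: Participant) -> Participant:
        """Move me to the target's pose; a client whose target uses a 2D
        screen would additionally re-project the 3D data into 2D charts so
        both users see the exact same representation."""
        me.position = target.position
        me.view_direction = target.view_direction
        return me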


Participant_05:
Curious about Cross Virtuality Analytics - Where is it more relevant in the Milgram Reality-Virtuality Continuum?

Participant_04:
We define "Cross-Virtuality Analytics" as new possibilities for interactive visual data preparation, modeling and analysis based on fluent transitions between novel visualization and interaction techniques across the entire spectrum of the reality-virtuality continuum. The focus on "virtuality" instead of "reality" in our definition of this term is based on the premise that early phases of the data analytics processes are best supported with technologies leaning to the virtuality-side of the reality-virtuality continuum. Nonetheless, smooth transition towards the reality-side of the continuum can play a critical role to integrate data analytics into real-world spatial settings and social environments such as physical movement in a familiar work environment or face-to-face collaboration with co-located team members.


Participant_03:
From your point of view, how difficult is it to integrate XR with existing display infrastructure such as public displays, mobile displays, and workshop screens?

Participant_04:
We believe that existing infrastructure (e.g., tablets, large touch-enabled displays) can be integrated into XR environments, assuming software-based visualizations. As long as the underlying operating system or software supports these visualizations and transitions, we would see no issues in combining them with AR and VR devices. From a technical viewpoint, a common communication protocol must be established across this infrastructure.
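
As a purely hypothetical sketch of what such a protocol could exchange, a device-agnostic state-update message might look like the following (all field names are invented for illustration):

    import json
    import time

    def make_sync_message(sender: str, dataset: str, view: dict) -> str:
        """Serialize a visualization-state update that any client (tablet,
        wall display, AR or VR device) could apply identically."""
        return json.dumps({
            "type": "viz_state_update",
            "sender": sender,          # device/user that changed the view
            "dataset": dataset,        # shared dataset identifier
            "view": view,              # e.g. filters, camera pose, chart type
            "timestamp": time.time(),  # e.g. for last-write-wins conflicts
        })

    # A wall display and a VR HMD would both receive and apply the same update:
    msg = make_sync_message(
        sender="wall_display_01",
        dataset="production_batch_42",
        view={"chart": "scatter", "filter": {"defect": True}},
    )
    print(json.loads(msg)["view"])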


Participant_12:
I had come across work on data "physicalization" in which visualizations of data were 3D printed. People appreciated the tactile feedback when manipulating the physicalization with their hands. Do you think that haptic feedback in AR/VR visualizations can also have a positive impact?

Participant_04:
Yes, physicalization is definitely worth exploring in the cross-virtuality context. We focus on/start with virtual environments, but augmented reality or reality will also play a big part in data transformation, modeling, and analysis. One scenario might be to initially model the data in a virtual environment and later physically interact with the 3D-printed data without VR hardware (e.g., facilitating the physical transformation with digital overlays). Especially in early prototyping stages, these physicalizations can be very helpful to better grasp/understand the data.


Participant_07:
Could you maybe comment on the type of data that you think will particularly benefit from this cross-reality analytics approach, and which type of data you think will still be "better" explored in isolation (or do you even think all types of data will benefit from cross-reality analytics)?

Participant_04:
We focus on production data (materials), where expert users can zoom into fabrics and other materials, visualized in 3D in a virtual environment, and can visually inspect defects, structures, etc. However, cross-virtuality analytics should not stop there. We see tremendous potential in collaborative work environments to connect team members from different locations and points across the RV continuum. Another scenario might be to connect designers or architects, where XR technology will facilitate their ability to work together and find better solutions. Generally, we believe that data should be prepared and processed in a way that allows it to be analyzed without the need for cross-reality analytics, i.e., in isolated mode. However, we believe that the advancements in AR and VR technologies will eventually render isolated approaches inefficient.


Participant_13:
To follow up on Participant_05's question: the paper seems to imply that there is a seamless transition across the Milgram spectrum. But is this actually possible? Aren't users always at one point?

Participant_04:
Indeed, one user is at one point of the RV continuum. But that does not imply that all users have to be at that point of the RV continuum. Rather, we encourage multiple points to be used across the continuum for utilizing different technologies and their individual benefits when dealing with large amounts of data in their different stages (e.g., data preparation, data cleansing, data modeling, data analysis). Further, different users have different technologies at their disposal. For example, the lab equipment includes large wall-mounted touch-enabled screens and AR/VR hardware, while the home office is "only" equipped with a desktop screen and a mobile tablet. Team members should be able to fully collaborate across the RV continuum with the devices at their disposal, including existing infrastructure, as well as AR and VR equipment.

Participant_07:
Interesting question. Do you imply that the "continuum" is maybe not a continuum but a set of a few discrete points :slight_smile:? This would mean that you are always just in one. If it is actually a continuum, it could be that you are in between certain "realities".

Participant_15:
I think this paper "A Dose of Reality" https://doi.org/10.1145/2702123.2702382 makes good points on why it is a continuum and not discrete points.

Participant_13:
@Participant_07: That was exactly my question :wink: With current technology, I think you are always at one point of the spectrum. The question for today seems to be more about collaborative environments where different users are at different points

Participant_05:
@Participant_13: that was also my point.
There are opportunities to explore these transitions as Research Questions

Participant_13:
@Participant_05: Exactly right

Participant_07:
@Participant_13: Hmm interesting. I am not sure if we could say that with current tech you "are" always at one point of the spectrum. I think it highly depends on how we define "are" or "being".
@Participant_15: What argument does "A Dose of Reality" bring up that it is a continuum and not discrete points?

Participant_05:
This is interesting. I am reminded of an old Hypertext paper that might apply to this work @Participant_13

Participant_06:
That's true, even with an HMD you sometimes can actually see glimpses of the outside world from your peripheral vision

Participant_05:
regarding transitions @Participant_06: with an HMD you can inject video from your surroundings, and you can also get augmented virtual depictions of people next to the HMD wearer

Participant_16:
> @Participant_13: Hmm interesting. I am not sure if we could say that with current tech you "are" always at one point of the spectrum.
> @Participant_15: What argument does "A Dose of Reality" bring up that it is a continuum and not discrete points?
This is particularly true of AR. For many applications the user is simply interacting with "real" content for the majority of the time. It really depends on what she/he is attending to, or thinking about. We have a journal article pending publication on this topic, i.e., using gaze to explore these micro-transitions between real and AR content

Participant_15:
@Participant_07: I would say that by showcasing a set of possibilities for how to blend the real world in/over the VR experience, they make the point that it is possible. But yes, the paper is generally rather explorative in nature and thus makes few general claims.

Participant_07:
@Participant_15: Indeed, they show some really nice approaches for how we can create this "mix" of realities. I am starting to wonder if we have an objective or even subjective measure of where on the continuum a certain interaction is happening. Is it actually presence or immersion? I am not really sure, since I can have a high level of presence while being really far towards the reality part of the spectrum. Is it actually counting voxels or pixels and where they originated, real or virtual? This also feels kind of arbitrary. I feel that at the current state the designer/developer is just deciding herself/himself where on the continuum the current interaction is happening.

Participant_15:
@Participant_07: Absolutely, right now it's just arbitrary, as you said. Very interesting question you are posing on how to measure it - I thought about the same, to understand how well cross-reality interaction/integration works ... so yes, we need a measurement to say how well it works - we should talk more :innocent:

Participant_07:
@Participant_15: I agree, that's really interesting to explore. Maybe the lack of actual measures is an indicator that in the end we have only three discrete states: Completely Real, Completely Virtual, and all the mess in between that we cannot really separate further :smile:

Participant_15:
@Participant_07: it would be quite sad if that were the case. All of us want to build design spaces and make great measurements on all the cool ideas we have - as we want to publish more papers :smile:

Participant_07:
@Participant_15: True, that would be sad :D. Maybe it's just my pessimistic perspective. Let's explore this!

Participant_05:
Interesting question about the discrete continuum. I would argue that Milgram's continuum is discrete from a user experience standpoint, but there can be 50 shades of XR.

Participant_13:
I think this is the essence of what @Participant_07 said

Participant_05:
> @Participant_15: I agree that's really interesting to explore. Maybe the lack of actual measures is an indicator that at the end we do have only three discrete states: Completely Real, Completely Virtual and all the mess in between that we can not really separate more :smile:
@Participant_07 I think you can still distinguish the degrees of UX according to quality

Participant_07:
@Participant_05: Yes, I think we can start to distinguish multiple dimensions such as UX, immersion, and presence. But my main concern here is that there probably won't be a 1-to-1 mapping onto the RV continuum. I think one nice "challenge" we can pose to people who argue they have a solution to this is: can we actually quantify the difference between Augmented Reality and Augmented Virtuality interactions/applications :slight_smile:? The closest I came to an answer here was actually "counting" how many pixels/voxels come from the real world and how many from the virtual. But this seems like the silliest approach :smile:

Participant_05:
and that there are clear differences between AR / MR / VR
Maybe we need a multidimensional model of UX?
That incorporates Presence, Agency and Real/Virtual interactions?

Participant_14:
@Participant_05: This might also raise the question of how well traditional UX evaluation methods carry over to AR/MR/VR settings

Participant_05:
Ah Ah, there is a different question there @Participant_14.

Participant_06:
@Participant_14 there is a PhD student who is working on that, Xuesong Zhang. She's working on Usability Evaluations in VR. Hopefully she will be able to publish some of her work soon

Participant_14:
@Participant_06 I would love to see it.
@Participant_06 My own interest is in applying experience sampling methods within these settings

Participant_13:
UX for XR is indeed tricky. Likely some traditional approaches translate easily, but additions are needed. One challenge is that quite a bit of insight on this is in the grey literature, not that much in traditional academic venues.
There is a survey paper on some aspects:
Steven Vi, Tiago Silva da Silva, and Frank Maurer: User Experience Guidelines for Designing HMD Extended Reality Applications. In Proceedings of Human-Computer Interaction – INTERACT 2019, 17th IFIP TC 13 International Conference (INTERACT 2019), Paphos, Cyprus.

Participant_06:
@Participant_14 it might or might not appear at a VR-oriented conference soon(ish)

Participant_05:
IEEE VR2021 is just around the corner :wink: :slight_smile: :wink:



Sound 2121: Cross-Reality Transitions Between Real and Augmented Sound Landscapes
Jordan Aiko Deja, Nuwan Attygale, Klen Čopič Pucihar and Matjaž Kljun
PDF | Abstract | Video presentation | Discussion on the workshop
Music and sounds have been an integral part of our society since prehistoric times. However, music listening has moved from a social experience to a more personal one, with audio systems such as speakers moving ever closer to our ear canal. Based on this pattern and various futuristic visions of how we are going to listen to music, one can start to imagine a situation where a chip in our auditory cortex streams sounds directly to our brain. However, augmenting our hearing with objects around us capable of streaming sounds directly to our brains might exacerbate the already present problem of blocking out environmental sounds and not paying attention to what is happening around us. In this position paper we discuss not only "What is the future of music listening and how will we consume music a hundred years from now?" but also "How are we going to enable users to pay attention to their environment and enjoy music listening at the same time?" Put differently, how do we enable users to constantly switch between real and augmented sound landscapes, enabling seamless cross-reality transitions while preserving a positive user experience?

Participant_01:
You are exploring sound only landscapes and transitioning between environment sounds and internally streamed sounds. Would this change if we add visual elements from AR to VR and transitions between these systems? For example in VR we might have sounds from the real environment, virtual environment and streamed sounds as well.

Participant_14:
As presented in the paper, sound itself can be a problem when environmental sound is overridden by sound transmitted over headphones. But in the paper we explore just the streamed sound and the real environment. By adding virtual objects, which can also produce sounds, to the environment, we just add to the problem, as virtual objects have the potential to fight for our attention (e.g., advertisements) in ways physical objects cannot easily do (e.g., repeat sounds, increase volume, jump in front of us). And then there is virtual reality, which can expose the user to combined sounds produced by the VR environment, the streamed sound (although this might be coupled with sound produced by the VR environment), and sounds from the real environment. The sounds from both might "contradict" each other, since the user in VR has no visual information to connect the sounds in the real environment to the objects and persons in it, since they cannot see them. Studying sound in isolation might not be a good idea, but it's a good start and an important aspect. So adding virtuality might even exacerbate the problems presented in the paper.



It Takes Two To Tango: Conflicts Between Users on the Reality-Virtuality Continuum and Their Bystanders
Jonas Auda, Uwe Gruenefeld and Sven Mayer
PDF | Abstract | Video presentation | Discussion on the workshop
Over the last few years, Augmented and Virtual Reality technology has become more immersive. However, when users immerse themselves in these digital realities, they detach from their real-world environments. This detachment creates conflicts that are problematic in public spaces such as planes, but also in private settings. Consequently, on the one hand, detaching from the world creates an immersive experience for the user; on the other hand, it creates a social conflict with bystanders. With this work, we highlight and categorize social conflicts caused by using immersive digital realities. We first present different social settings in which social conflicts arise and then provide an overview of investigated scenarios. Finally, we present research opportunities that help to address social conflicts between immersed users and bystanders.

Participant_03:
This creates an immersive experience for the user, and on the other hand, it creates a social conflict with bystanders.
If you want to make experiences less intrusive, you need to show immersed users the outside world as well as show the local user the inside world. Do you have any idea of how much of this sharing needs to be done in order to avoid conflicts?

Participant_17:
I think this paper gives insights into that: https://dl.acm.org/doi/10.1145/3322276.3322334

H2: "Using higher details of the physical world in passers by visualization leads to a higher distraction from the played game" in the discussion seems relevant to this question.
The authors confirm this hypothesis through their research, meaning that if you add more details from the outside world, the VR user gets more distracted.

For the other way around, for example, we could take a look at FrontFace (https://dl.acm.org/doi/epdf/10.1145/3098279.3098548).
Here the virtual environment is shown to the outside world via a front display on the HMD. Additionally, the user's eyes are overlaid there. That helps both sides to engage for example in a conversation.


Participant_03 (comment):
The bomb-defusing game is a great example of a bystander game.
https://keeptalkinggame.com/
Keep Talking and Nobody Explodes.

Participant_17:
Thank you for the link, I will look into that ;)


Participant_07:
Really cool work! I felt that there is currently a strong focus on exploring how exactly one non-HMD user can be included inside an AR/VR experience. Have you found some work during your literature review that explored the inclusion of multiple non-HMD people? I feel that the "power" dynamic changes here if the majority are non-HMD users. Maybe inclusion then means that the HMD user has to move on the continuum towards the majority :).

Participant_06:

Maybe this one? A Projection-Based Interface to Involve Semi-Immersed Users in Substitutional Realities by Zenner et al


Participant_17:

Jan Gugenheimer has done some work on that: e.g.: https://dl.acm.org/doi/10.1145/3025453.3025683

Participant_07:

@Participant_06: Thanks, I think there do exist some systems that allow for this. However, I wonder if someone actually explored the particular differences of inclusion between one HMD user and multiple non-HMD users. Are all our techniques going to scale :)?


Participant_07:

@Participant_17: Yeah, so with the ShareVR system it was technically possible, but we did not particularly go into depth on how multiple-people inclusion differs from single-user inclusion :).


Participant_17:

Thank you Participant_07.

Participant_17:
https://dl.acm.org/doi/10.1145/3322276.3322334 This work is also quite interesting


Participant_08:

On the note of multiple people inclusion, there's this project called 'ReverseCAVE: CAVE-based visualisation methods of public VR towards shareable VR experience' https://www.youtube.com/watch?v=-_5pzwb3YvM

Participant_07:

@Participant_08: Yeah, ReverseCAVE is such cool work! It works for multiple observers but not sure if non-HMD users can actually interact with the HMD user.

@Participant_17: Thanks for the link. I didn't know that one. I have to check it out :)


Participant_08:

sadly I don't think so, but from the bystander's perspective it's very visual and quite attractive, so it gives a 'welcoming' vibe :)


Participant_07:

The closest to what I mean is maybe HaptikTurk (https://dl.acm.org/doi/10.1145/2556288.2557101). There is not really meaningful interaction between non-HMD and HMD users there, but there are multiple non-HMD users "interacting" with HMD users.


Participant_06:

@Participant_08: something to explore for your project!
 [Note: Check the paper titled "The Body in Cross-Reality: A Framework for Selective Augmented Reality Visualisation of Virtual Objects"]

Participant_08:

For sure! :)



Participant_11 (comment): 
It could also be the other way round: what kind of bystander information will you provide to the immersed user?




Outdoors Mobile Augmented Reality for Coastal Erosion Visualization Based on Geographical Data
Minas Katsiokalis, Lemonia Ragia and Katerina Mania
PDF | Abstract | Video presentation | Discussion on the workshop
This paper presents a Mobile Augmented Reality (MAR) system for coastal erosion visualization based on geographical data. The system is demonstrated at the beach of Georgioupoli in Chania, Crete, Greece, in challenging, sunny, outdoor conditions. The main focus of this work is the 3D on-site visualization of the future state of the beach when the shoreline inevitably progresses inland due to the impact of severe coastal erosion, taking place across the Mediterranean Sea but also worldwide. We feature two future scenarios in three locations on the beach. A 3D sea segment is matched to the user's actual position. The visualization, as seen through a smartphone's screen, presents an unprecedented seamless view of the 3D sea segment joined with the real-world edge of the sea, achieving accurate registration of the 3D segment with the real world. Position tracking is performed by utilizing the phone's GPS and the computer vision capabilities of the presented AR framework. A location-aware experience ensures that 3D rendering is space-aware and timely according to the user's position on the coast. By combining AR technologies with geo-spatial data, we aim to motivate public awareness and action in relation to critical environmental phenomena such as coastal erosion.
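
As a hedged illustration of the registration step described above (not the authors' implementation), the sketch below anchors a sea segment relative to the phone's GPS fix using a simple local-projection approximation; all coordinates, names, and scenario values are invented:

    import math

    EARTH_RADIUS_M = 6_371_000.0

    def gps_to_local_metres(lat, lon, ref_lat, ref_lon):
        """Project a GPS fix into a flat east/north frame centred on a
        reference point; adequate over the few hundred metres of a beach."""
        d_lat = math.radians(lat - ref_lat)
        d_lon = math.radians(lon - ref_lon)
        east = d_lon * math.cos(math.radians(ref_lat)) * EARTH_RADIUS_M
        north = d_lat * EARTH_RADIUS_M
        return east, north

    # Hypothetical surveyed anchor point on today's shoreline.
    REF_LAT, REF_LON = 35.3620, 24.2580

    def sea_segment_offset(user_lat, user_lon, inland_advance_m):
        """Placement of the future-shoreline 3D segment relative to the
        user, for a scenario in which the sea advances inland_advance_m
        (here assuming 'inland' points along +north)."""
        east, north = gps_to_local_metres(user_lat, user_lon, REF_LAT, REF_LON)
        return (-east, inland_advance_m - north)

    print(sea_segment_offset(35.3621, 24.2582, inland_advance_m=15.0))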

Participant_03:
You specifically chose to use a handheld AR system to visualize coastal erosion, with the goal of motivating public awareness of this important environmental problem. Do you think your approach could also be useful in the context of other visualisation techniques, e.g., VR, 360° video?

Participant_02:
Of course, the visualization of landscape changes would be useful with other techniques such as spatial AR, although that seems pretty difficult in an open area like a beach. Beyond this, a VR implementation seems promising in combination with 360° video: the 360° video would visualize the future, and users would be able to witness the visualization by using a VR system. This achieves both 1) remote witnessing of the visualization and 2) retention of the 'reality element', which is important for us.


Participant_03:
Building awareness is most effective if experiences can be shared. How would you envision doing this when various systems are used?

Participant_02:
Sharing the exact same experience would be challenging. We have to make compromises based on the systems that are used. The handheld AR can be upgraded to a multi-device experience and can be shared even with AR HMDs. A different version would support "remote visitors" of the beach in a multi-user VR experience such as the one mentioned before.


Participant_11 (comment):

But maybe it is hard to access the beach for everybody ...

Participant_02:
In order to create an on-site outdoor AR experience, we had to make compromises... Limited access for users was one of them. One of the future upgrades is the addition of beaches and locations that will be available to users.



Individual and Collaborative Cross-reality Immersive Analytics – Initial Ideas
Nanjia Wang, Frank Maurer and Amir Aminbeidokhti
PDF | Abstract | Video presentation | Discussion on the workshop
This paper presents research questions on supporting analytics work using cross reality (CR) and the initial idea of the investigation we want to conduct.

Participant_01:
You present three research questions in your paper. Do you have any hypotheses and studies in mind in order to answer them? RQ1: How can we support analytics work by moving visualizations across the Milgram continuum? RQ2: What interactions do users naturally use for cross-reality applications? RQ3: How does control over a shared reality impact team collaboration?

No written answer provided.



The Body in Cross-Reality: A Framework for Selective Augmented Reality Visualisation of Virtual Objects
Jihae Han, Robbe Cools and Adalberto Simeone
PDF | Abstract | Video presentation | Discussion on the workshop
The body plays a communicative function in interaction. It expresses how we respond to, experience, and interact with the world through action, movement, and gestures. In this paper, we investigate the impact of the body in Cross-Reality Interaction between users of different realities on the Reality-Virtuality continuum. We propose a Framework for Selective Augmented Reality Visualisation of Virtual Objects that enables an external Augmented Reality user to perceive an immersed Virtual Reality user against different levels of information. The augmented reality user may observe the real body of the user in the context of visualised objects from the virtual environment, selected according to three criteria: Proximity Threshold, Field of View, and Importance Ranking. We aim to investigate how many and what type of virtual objects need to be visualised in order to convey clear information on the activity and physical engagement of the immersed Virtual Reality user. Two use cases are presented to which this framework can be applied: vocational training on food hygiene and a virtual exhibition for architecture.
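
A minimal sketch of the selection logic the three criteria suggest, assuming each criterion acts as an independent filter (thresholds, ranks, and names below are illustrative, not taken from the paper):

    import math
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class VirtualObject:
        name: str
        position: Tuple[float, float, float]  # shared world coordinates
        importance: int                       # pre-assigned rank; 1 = highest

    def in_field_of_view(vr_pos, vr_dir, obj_pos, half_angle_deg=45.0):
        """True if the object lies within the VR user's viewing cone;
        vr_dir is assumed to be a unit vector."""
        dist = math.dist(obj_pos, vr_pos) or 1e-9
        to_obj = [(o - p) / dist for o, p in zip(obj_pos, vr_pos)]
        cos_angle = sum(d * t for d, t in zip(vr_dir, to_obj))
        return cos_angle >= math.cos(math.radians(half_angle_deg))

    def select_objects(objects: List[VirtualObject], vr_pos, vr_dir,
                       proximity_m=1.5, top_ranks=3):
        """Visualise an object to the AR user if it is near the VR user
        (Proximity Threshold), in their view (Field of View), or highly
        ranked (Importance Ranking)."""
        return [
            obj for obj in objects
            if math.dist(obj.position, vr_pos) <= proximity_m  # proximity
            or in_field_of_view(vr_pos, vr_dir, obj.position)  # field of view
            or obj.importance <= top_ranks                     # importance
        ]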

Participant_01:
With the current pandemic we have seen how important face-to-face communication is, and that video chat does not provide all the expression details that real communication provides. Researchers have already started to investigate the fatigue that we experience after long video meetings and lectures, so the problem is even more prominent in Cross-Reality collaboration. How do you think this fatigue might affect future development and research in this area?

Participant_08:
I think that cross-reality interaction or scenarios will likely be limited to 'sessions' where time within cross-reality collaboration is limited. There's one inevitable part of this problem, where fatigue will occur in any activity that someone spends too much time in/on. On the other hand, making the sessions more 'dynamic' may increase the motivation or interest of the user more and perhaps offset fatigue. And of course, continuous ongoing research on how to improve simulation sickness would also improve the situation - in particular, any research identifying factors of simulation sickness specific to CR.


Participant_03:
You mainly focus on visuals, what about other modalities like sound? You mentioned the importance of attention. You did not mention pointing or gaze in your framework.

Participant_08:
We propose a framework for the selective augmented reality visualisation of virtual objects, and thus focus on the visual aspect of modalities - sound would be a new branch of study that would definitely be interesting to investigate, so perhaps it could be the next step for this proposal. Regarding other aspects such as gaze or pointing: gaze would fall under the second criterion, 'Field of View', which looks at visualising objects in the direction the VR user is looking. Pointing is a trickier question. If the object is fairly close, then the VR user can physically point to one of the virtual objects being visualised within close proximity (criterion 'Proximity Threshold'). If the object is very far, a small but extensive slice of the direction the VR user is pointing to can be visualised (criterion 'Field of View'). Otherwise, perhaps we should look into 'Importance Ranking' (currently a predetermined selection) to enable the VR user to selectively visualise specific objects in real time to the AR user. Essentially, the three criteria proposed in our framework refer to three different methods of measuring/defining augmented reality visualisation, and the values for each can be adjusted or adapted according to the VR scenario.


Participant_10:
Hello! Regarding the paper on "The Body in Cross-Reality", what is the advantage of having one user in AR and another user in VR? Wouldn't interaction be easier if both users shared the same reality?

Participant_08:
Hi, the work mainly looks at the importance of body language in conveying certain kinds of information (i.e., action, activity). This means that there would either need to be a realistic representation of the immersed user's body in the environment, or alternatively AR could be used to augment elements of the VE that the immersed user is interacting with, so that both the user's actions and the virtual objects are overlaid in one reality. The question we are interested in looks at the latter. Examples could include more delicate hands-on work, or service work where minute gestures and facial expressions might be useful to know from a bystander's perspective, although this would mainly look at scenarios where there is a VR trainee and an AR supervisor in terms of use cases.

Participant_10:
Thank you for your answer :slight_smile: Yes I was wondering about scenarios where this VR / AR interaction was necessary.


Participant_09:
Hi, thanks for all the great talks. My question is regarding "The Body in Cross-Reality" presentation. During your presentation, you mentioned that you are investigating understanding VR users' actions. What are the actions that you are mainly targeting, and how do you plan to recognise such actions? Thank you

Participant_08:
Hi, so using the example of a bar scenario (please note that this would be quite case-specific), we would simulate an interaction between the VR user as a bartender trainee and the AR user as a supervising spectator. The VR user would be given certain tasks as part of the training, for example making a cocktail, and the AR spectator/supervisor would evaluate the VR user's actions in terms of how well the VR user performed the task, whether the VR user made any mistakes when making the cocktail, how much time the VR user took to complete the task, etc. It's a proposal that focuses on the AR user's understanding of the VR user's actions to help inform how effective certain types of augmented reality visualisations are in communicating information.

Participant_09:
Thanks for the great answer on Zoom. This is an interesting study. :slight_smile:



Exploring Visuo-Tactile Embodiment in a Social Virtual Reality Setting with a Physical Wheelchair for Training Empathy Towards Social Disability Barriers
Jeremy Meijer and Nikolaos Batalas
PDF | Abstract | Video presentation | Discussion on the workshop
Interactions of personnel with patients in healthcare settings tend, as a norm, to be depersonalized and detached, failing to acknowledge that patients seek empathy from their caregivers. Experiential learning that allows trainees to understand the subjective experience of disability can be useful in the education of empathy, but disability is usually portrayed as a private impairment, and most scenarios fail to acknowledge its dimension of social construction. We plan to investigate the potential of an embodied VR experience, using a physical wheelchair as a controller in the VR space, to see whether visuo-tactile VR experiences with social barriers of disability enhance empathy in Dutch medical students.

Participant_03:
Subjective experience of disability can be useful in the education of empathy. How do you plan to evaluate your educational goals of empathy?

Participant_18:
The research intervention involved primarily serves as a stimulus within the study. Prior research has discovered a decrease in empathy, but a fundamental course in empathy, an index of the state of empathy, and an index of the evolution of empathy over the years of medical schooling are lacking.

Goals of healthcare curricula involve providing problem-based learning environments and enhancing humanistic skills and attention to patients. State-of-the-art courses include empathy training and perspective-taking measures to enhance the student's humanistic skills and decrease depersonalization. Evaluating empathy in healthcare curricula by means of a questionnaire within the between-subjects research setup does inventory empathy, but the evaluation of educational goals often involves practical in-person measures (e.g., a teacher assesses the student's skills in treating an actor patient). However, such evaluations have been criticised over time. Care approaches such as Integrative Medicine or problem-based learning aim to foster increases in empathy and an understanding of the patient's situation. Aspects of empathy involve seeing things from another's perspective, allowing informed empathy and an alignment of emotional reactions.

While research results have yet to substantiate the effectiveness of the involved intervention, the subjective experience of disability should allow curricula to provide students with perspective-taking practices. Educational goals could include the evolution of empathy over a period of time. Extending the intervention in future versions could include a variety of empathy training situations, increasing in difficulty, indexing empathy over the course through the already included experience sampling mechanism. However, this would require extensive development of the intervention. Moreover, while a 'mind-body medicine' movement is rising within healthcare curricula, the educational goals for empathy concern an aim for detached concern within medicine.

Evaluation of the educational goals of empathy would involve a longitudinal study, to unveil the long-term effectiveness of the intervention compared to more traditional approaches (e.g., role models and paper cases). A between-subjects study could evaluate the differences in reported empathy over the course of the entire programme among the same group of students, given that healthcare practice occasionally only occurs after the second year of study.

However, the current research does not aim to evaluate such long-term effects.


Participant_03:
Body correlation, body ownership. Has this been explored in the area of creating disability applications in the past?

Participant_18:
Sense of Embodiment, body ownership, and body correlation are underexplored topics in the field of disability studies. As prior research demonstrates, some attempts have been made at exploring disability simulations through a Sense of Embodiment. The embodiment of schizophrenic experiences, compared to traditional manipulations of perspective-taking, has been a topic of research. However, the repertoire of disability simulations mainly focuses on physical and sensory impairments (e.g., hearing loss, mobility limitations, vision issues), emphasizing stereotypes of incompetency and dependency. Some attempts have been made at simulating cognitive deficits, but only a small number of mobility-limitation simulations have involved body ownership illusions (e.g., embodied experiences of a red-green colorblindness simulation, the 'Proteus Effect') exploring effects on attitudes and behaviour.


Participant_18 (comment):
Since the research is a work in progress, we currently intend to explore the effects of the aforementioned disability simulation by means of a randomized between-subjects research setup. Within this, we are aiming to expand the inventory of empathy levels through experience sampling methods, to prevent results being compromised by memory bias. For development purposes, a prototype is currently being documented and developed. For an idea of the actual wheelchair setup, I'd like to refer to: https://qz.com/1064968/designers-at-fjord-built-a-vr-system-that-teaches-first-time-wheelchair-users-how-to-navigate-city-environments/

I have currently designed a use case in which the inventory of empathy, and of experiences related to embodiment, appears as a popup to the user within the VR experience (after exposure to the social barrier that induces perspective-taking). Answers to the popup questionnaire are provided through the visuo-tactile controllers (attached to the users' limbs).



Using a VR Field Study to Assess the Effects of Visual and Haptic Cues in "In-the-Wild" Locomotion
Ana Oliveira, Mohamed Khamis and Augusto Esteves
PDF | Abstract | Video presentation | Discussion on the workshop
This work aims to assess the effect of visual and haptic cues on users with gait impairments, not only in performance but also in terms of usability, perceived cognitive load, and safety. These haptic cues were delivered via wrist-worn devices, with the goal of supporting these users while out in the wild; three types of haptic cues were tested. To further assess the impact of haptic and visual cues outside of a laboratory environment, we used a Virtual Reality Field Study to safely assess the impact of these cues on users' awareness of their surroundings (measured via gaze hits and dwell). Despite conducting a preliminary study with participants not suffering from gait impairments (N=6), our results seem to indicate a positive effect of the haptic cues with regard to participant cadence, step length, and general awareness of their surroundings when compared to the visual cue. One of the simpler haptic cues was also the stimulus preferred by all participants.
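
The paper does not detail how these measures are computed, but as a hedged sketch, gaze hits and dwell episodes could be derived from per-frame gaze raycasts roughly as follows (the dwell threshold is an invented value):

    def gaze_metrics(samples, dwell_threshold_s=0.3):
        """samples: list of (timestamp_s, hit_object_or_None) pairs from
        per-frame gaze raycasts. Returns per-object hit counts and the
        durations of dwell episodes that exceed the threshold."""
        hits, dwells = {}, {}
        current, start = None, None

        def close_episode(end_t):
            if current is not None and end_t - start >= dwell_threshold_s:
                dwells.setdefault(current, []).append(end_t - start)

        for t, obj in samples:
            if obj is not None:
                hits[obj] = hits.get(obj, 0) + 1
            if obj != current:             # gaze target changed
                close_episode(t)
                current, start = obj, t
        if samples:
            close_episode(samples[-1][0])  # flush the final episode
        return hits, dwells

    stream = [(0.00, "sign"), (0.05, "sign"), (0.40, "car"), (0.45, None)]
    print(gaze_metrics(stream))
    # -> ({'sign': 2, 'car': 1}, {'sign': [0.4]})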

Participant_03:
You have done your experiments with individuals who do not have gait problems. What are your expectations? Do you think your results will be more or less conclusive?
You are using VR as a method for studying real world settings. How generalizable are your results to the real world setting?

No written answer provided.



Programme committee

Kathrin Gerling (KU Leuven)
Wolfgang Hürst (Utrecht University)
Daniel Lopes (INESC-ID, University of Lisbon)
Kenny Mitchell (Edinburgh Napier University, Disney Research)
Joan Mora (Inflight VR)
Tanja Nijboer (Utrecht University)
Francisco Nunes (Fraunhofer Portugal)
Anasol Peña-Rios (BT Research Labs)
Marko Tkalčič (University of Primorska)

Organisers

Florian Daiber

German Research Center for Artificial Intelligence (DFKI)