While writing concepts, technical documentations and tweets, we’ve realized that the XR4DRAMA consortium uses a lot of abbreviations and acronyms, many of which aren’t necessarily known to a non-expert audience. And even within the team, nobody has heard of all of them, as it’s almost impossible to be an expert in disaster management and media planning and data processing and computer linguistics and smart clothes and immersive technologies.
So in order to give everybody a good overview, we’ve compiled this handy glossary – which also defines/explains some of the more exotic or difficult terms:
AR: Augmented reality (a type of extended reality where users see their “real” environment, but with digital overlays)
CEM: Certified emergency manager
COWM: Citizen Observatories for natural hazards and Water Management
DP: Disaster Preparedness (coordinated actions taken to prepare for disasters, prevent them, or mitigate their impact)
DRR: Disaster Risk Reduction (coordinated actions that aim to reduce the damage caused by natural hazards, via an ethic of prevention)
ECCA: European Climate Change Adaptation Conference
ECG: Electrocardiogram (the process of recording the electrical activity of the heart, usually to check for cardiac abnormalities)
EGU: European Geosciences Union
EO data: Earth observation data
GIS: Geographic information system (a framework for capturing and analyzing spatial and geographic data)
GNSS: Global navigation satellite system
H2020: Horizon 2020 (an EU program for research and technological development – and the funding source of XR4DRAMA)
HCI: Human-computer interface (basically any device that lets humans interact with a machine, e.g. a keyboard, a touchscreen, a dataglove)
IA: Innovation action (a certain type of R&D project; XR4DRAMA is an IA)
ISCRAM community: an international community of people working in the field of Information Systems for Crisis Response and Management
IFAFRI: The International Forum to Advance First Responder Innovation
LIDAR: Light detection and ranging (also: laser imaging, detection, and ranging): a remote sensing method that uses light (a pulsed laser) to measure variable distances
MR: Mixed reality (like AR, but with interactive virtual objects anchored in the real world)
NLP/NLProc: Natural language processing (a subfield of computer science/artificial intelligence and linguistics that is focused on creating programs to handle and analyze large amounts of natural – i.e. human – language)
NLU: Natural language understanding (a subfield of NLP/NLProc that is focused on creating programs to comprehend what has been collected and processed)
PPE: Personal protective equipment
R&D: Research & development
SAR: Synthetic Aperture Radar (a type of radar used to create 2D or 3D reconstructions of landscapes and objects)
SVI mapping: Social vulnerability index mapping (efforts to visualize/map U.S. Census data that determines the social vulnerability of specific geographic regions)
VR: Virtual reality (a type of extended reality where users don’t see their “real environment”, but are rather fully immersed in a digital sphere)
WWS: Wearable wellness system (a body-worn system designed to monitor all kinds of physiological parameters)
XR: Extended reality (AR + MR + VR + all other forms of immersive media)
In this series of blog interviews, we further introduce the people and organizations behind XR4DRAMA by asking them about their work and their particular interest in the project. Our first interview partner is Martina Monego of AAWA, an Italian public body dedicated to the management and regulation of the Alpi Orientali (Eastern Alps) hydrographic district.
1) Martina, when did you first get in touch with the concept of situation awareness (SA) and/or XR technology?
As a disaster manager, I’ve been familiar with SA for quite a while, but XR is a rather new thing to me. However, next to XR4DRAMA, I’ve also been involved in a (still unnamed) project that uses immersive technology for educational and training purposes. This one is about better engagement in learning processes and helping students improve their visualization skills. The basic idea is to simulate a flood in a very realistic way, so students can better understand the risks, the relevant aspects of preparing for a disaster like this, and the right behavior in case of emergency.
2) Who is in your team – and what are your colleagues working on at the moment?
The team consists of Michele Ferri, Daniele Nobiato, Francesco Zaffanella and myself. Michele Ferri is our development and innovation manager, in charge of coordinating hydrological research in the context of flood risk management, and the scientific lead. Daniele, Francesco and I are experts in hydrologic and hydraulic modelling, which includes data assimilation, flood forecasting, and flood risk assessment. As a team, we’re responsible for all kinds of projects. For example, we’re currently implementing a so-called Citizen Observatory (CO) on water in our district. In the scope of this CO, citizens provide information (e.g. on water and snow levels or flooded areas) via a mobile app. We then combine this information with other data and use it for early warning systems. The goal is to get a better picture of developments before and during a flood event and to facilitate communication between citizens, authorities, and agents in the field. In this way, we can increase the effectiveness of civil protection efforts. We also do presentations and training sessions for teachers, students, and civil protection volunteers.
3) From your point of view: What are the most interesting aspects about XR4DRAMA?
Well, during disasters like floods, decision makers and first responders face a lot of stress – and need to understand the situation as clearly as possible, so they can act promptly, make the right call, and not waste valuable resources. XR4DRAMA will hopefully help them do a better job – and stay safe. At AAWA, we’re very interested in achieving a level of situation awareness that is as detailed and reliable as possible. In our pilot – which we’ll explain in detail later on – we can hopefully do two very interesting things: In phase number one, we’ll collect web information, sensor data, and other sources to predict and simulate a specific scenario, so that control room staff can check and verify all necessary emergency procedures.
4) What could be a challenge for the consortium?
In my opinion, the main challenge is to have enough data and repositories available, so we can get to a good level of SA – or the representation thereof – and fully exploit the potential of XR. Furthermore, our simulation should not be about high-end scenography, but about meaningful and tailor-made content that serves disaster managers. We need to effectively use the technology to support the work and ensure the safety of our first responders.
5) When the project is over in late 2022, what kind of outcome do you expect?
Our vision is to have an innovative and effective tool that improves emergency management in the control room, increases the safety of rescue units, and optimizes our resources.
Dear Martina, thank you for your time!
Our next interview partner will be Sotiris Diplaris of CERTH.
At the core of our project, there is always situation awareness (SA). Just in case the term is not familiar yet: SA describes how humans perceive the elements of a given environment within spatial and temporal confinements – and how that perception affects their performance and decision-making in the situation at hand. SA has become particularly important where decision-making happens under time pressure, remotely or among multiple operators, e.g. at a public authority managing natural disasters and sending out first responders – or at a media organization preparing for an outdoor TV production.
Perception, Comprehension, and Projection
Classic models – like the one that goes back to Mica Endsley – define three levels of SA:
The first and most basic level of SA is about monitoring, spotting, and recognizing things. L1SA is achieved as agents become aware of the different elements of a situation (e.g. other people and objects) and are also able to detect their status (condition, location etc.).
The second level of SA is about recognizing, interpreting, and evaluating the lie of the land. L2SA is achieved when agents understand what is going on around them (at present) – and what that means for their objectives.
The third and highest level is about projections and predictions. L3SA is achieved when agents extrapolate the information they have collected in L1SA and L2SA and are thus able to gain insights on what the situation (and elements therein) will probably look like in the future – and in what ways the mission may have to be adjusted.
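For readers who like to think in code, the three levels can be read as a small processing pipeline in which each stage consumes the output of the one below it. The following sketch is purely illustrative – all names, statuses, and the toy decision rule are invented for this example and are not part of any XR4DRAMA component:

```python
from dataclasses import dataclass

@dataclass
class Element:
    """A perceived element of the situation (L1SA): what it is, its status, where it is."""
    name: str
    status: str
    location: tuple

def perceive(raw_observations) -> list:
    """L1SA: monitor, spot, and recognize elements and detect their status."""
    return [Element(o["name"], o["status"], o["location"]) for o in raw_observations]

def comprehend(elements, objective: str) -> dict:
    """L2SA: interpret what the perceived elements mean for the current objective."""
    hazards = [e for e in elements if e.status == "hazardous"]
    return {"objective": objective, "hazards": hazards, "at_risk": len(hazards) > 0}

def project(comprehension: dict) -> str:
    """L3SA: extrapolate from L1SA/L2SA toward the likely future of the mission."""
    return "adjust mission" if comprehension["at_risk"] else "proceed as planned"

# Toy scenario: one hazardous element is enough to trigger a mission adjustment.
observations = [
    {"name": "river", "status": "hazardous", "location": (45.4, 12.3)},
    {"name": "bridge", "status": "intact", "location": (45.5, 12.4)},
]
decision = project(comprehend(perceive(observations), "evacuate district"))
```

Of course, real SA involves far messier perception and judgment than a status flag – the point of the sketch is only the layering: projection builds on comprehension, which builds on perception.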
Endsley’s model has been criticized, revised and extended since it was first published in 1995, but for the sake of simplicity – and because this is not an academic forum – we will not go into further details. Instead, we would rather like to briefly explain how the three levels of SA in XR4DRAMA are both similar to and different from the influential classic model outlined above.
Increasingly sophisticated renderings of locations
XR4DRAMA wants to build a digital platform for people who are – remotely or directly – planning for and dealing with events and incidents in a specific location. Just like in the classic model, the XR4DRAMA SA levels will be subsequent ones that build on each other and go from low to high complexity. However, while Endsley et al. start out with simple perception and aim for sophisticated projections, the XR4DRAMA platform will rather focus on providing as much information as possible to achieve good enough or very good comprehension. That being said, the level of detail will always depend on the specific use case and the time that is available to users of the platform.
In any case, the consortium foresees a platform that makes use of three different levels:
XR4DRAMA L1SA will be a simple, yet appealing visualisation/representation of a location that includes initial information on geography, sociographics etc. as well as a couple of images and/or videos. L1SA is created automatically and relies on data from a number of (publicly available) web services.
XR4DRAMA L2SA will be an enhanced visualisation/representation that draws on recent, exclusive content and updated information stemming from people with gadgets and sensors who are operating in the field (first responders or location scouts). We will have the XR4DRAMA system process their data and combine it with what has already been collected and visualized for XR4DRAMA L1SA.
XR4DRAMA L3SA will be a complex and comprehensive representation of a specific location, close to a simulation of an event within that environment. Here, the platform aggregates all the information from L1SA and L2SA and also allows users to immerse themselves in the situation via VR or AR features and tools (with a possible extra focus on more sophisticated audio). Ideally, this representation also enables test runs of specific strategies and methods (e.g. the simulation of camera movements in the media use case), thus connecting with the concept of projection/prediction in classic L3SA.
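Since the platform levels accumulate data sources and features rather than replace them, one way to picture the layering is a cumulative lookup. This is a hypothetical sketch only – the source and feature names are made up for illustration and do not describe the actual XR4DRAMA architecture:

```python
# Hypothetical illustration: each XR4DRAMA SA level merges new inputs
# into the representation built at the level below it.

L1_SOURCES = ["public web services", "geographic data", "sociographic data", "images/videos"]
L2_SOURCES = ["field sensor data", "first-responder reports", "location-scout media"]
L3_FEATURES = ["VR/AR immersion", "sophisticated audio", "strategy test runs"]

def build_representation(level: int) -> dict:
    """Return the cumulative data sources and features available at a given SA level."""
    if level not in (1, 2, 3):
        raise ValueError("XR4DRAMA foresees SA levels 1, 2, and 3")
    rep = {"sources": list(L1_SOURCES), "features": []}
    if level >= 2:
        rep["sources"] += L2_SOURCES      # L2SA: enhanced with fresh data from the field
    if level >= 3:
        rep["features"] += L3_FEATURES    # L3SA: close to a simulation of the event
    return rep
```

For example, `build_representation(3)` would list everything from L1SA and L2SA plus the immersive features, while `build_representation(1)` stays with the automatically collected web data.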
In general, the consortium envisions a solution where all levels and functions are accessible to all stakeholders in a scenario, albeit in gradations and on different devices (because nobody wants to wear VR goggles in the field). As already mentioned on the project vision page, the ultimate dream is to create a platform that enables shared, distributed SA.
We just wanted to let you know that our EU-funded Innovation Action (IA) has been officially kicked off! November 25th and 26th saw us meet online to discuss and organize all the research, development and communication tasks that lie ahead in the next two years. Using XR technology (and lots of data) for a new kind of situational awareness means entering uncharted territory – but it looks like we’re well prepared.
We had a long, productive, enjoyable video conference – with only one drop of bitterness: Everybody would’ve loved to meet afk/irl in Thessaloniki (where XR4DRAMA coordinator CERTH is headquartered) – and go for a social dinner at the end of the business day. Maybe that will happen sometime in 2021 or 2022, fingers crossed.
Getting back to the IA itself, you can already read quite a bit about the project vision on this page. And to learn more about the consortium behind XR4DRAMA, just go here.
This blog section will keep you up-to-date with regard to deliverables, milestones, insights, prototypes, events, and so on. The next couple of posts will give you a more elaborate intro to the concept of situational awareness and describe the two pilot use cases we have in store.
So stay tuned. And don’t forget to join the conversation on Twitter and LinkedIn!
Best regards from Athens, Barcelona, Berlin, Bonn, Cologne, Prato, Thessaloniki, and Venice,