Meet the XR4DRAMA Consortium (II): Sotiris Diplaris and the Centre for Research and Technology – Hellas (CERTH)

4 out of 5 team members: Sotiris Diplaris (2nd from the left) with his colleagues Stefanos Vrochidis, Ioannis Kompatsiaris, and Anastasios Karakostas (from left to right)

In this series of blog interviews, we further introduce the people and organizations behind XR4DRAMA by asking them about their work and their particular interest in the project. Our second interview partner is Dr Sotiris Diplaris of ITI/CERTH, the Information Technology Institute at one of Greece’s (and Europe’s) leading research centres.

1) When did you first get in touch with the concept of situation awareness and XR technology?

It’s hard to say when exactly, but we’ve worked on SA- and XR-related projects for quite a while now. For instance, there was beAWARE, which focused on decision making in emergency situations and also featured AAWA as a partner in disaster management. beAWARE led to the idea of employing XR tech for real-time situation monitoring and collaboration between field teams and control room staff. But we’ve also had several talks with DW, who told us that media professionals would really benefit from having detailed, immersive models of a filming location – before going there with a production team. As for XR tech, we’ve explored that in projects like MindSpaces, SO-CLOSE, and V4Design (the latter also together with DW).

2) Who is in your team – and what are your colleagues working on at the moment?

Dr Stefanos Vrochidis does overall coordination. He’s the head of M4D; that’s the Multimodal Data Fusion and Analytics Group of the Multimedia Knowledge and Social Media Analytics Lab (MKLab) of ITI-CERTH. Stefanos leads a project management team that consists of three more people: Dr Anastasios Karakostas supports the coordination of XR4DRAMA by supervising research activities and use cases. Spyridon Symeonidis handles the daily technical management and the implementation of the web data collection mechanisms. As for me, I’m responsible for the scientific and technical management of XR4DRAMA. Finally, there’s Dr Ioannis Kompatsiaris. As head of MKLab, he’s involved in the high-level supervision of the project.

…and the team member missing in the group photo (hi there, Pandemic): Spyridon Symeonidis

3) From your point of view: What are the most interesting aspects about XR4DRAMA?

The most exciting part of this project is that we use several technologies to improve areas of work and life that are really important and depend on situation awareness and decision making. Hopefully, we’ll be able to show how realistic VR or semantically enriched AR representations of environments can improve the way professionals work together. The concepts and applications we’re developing will add a whole new dimension to these workflows – one that hasn’t been explored before.

4) What could be a challenge for the consortium?

The main challenge lies in combining all kinds of data sources and technologies that usually belong to different research fields. We have multimedia web content, drones and satellites, and sensors that enrich our repositories with environmental and physiological information. There’s analysis on a text, audio, video and sensor level, and interlinked results created via semantic technologies. 3D reconstruction techniques generate the elements of immersive environments while geographic information systems (GIS) are used to properly place the content in 3D space. It’s easy to understand that it won’t be trivial to make optimal use of all these components. At the same time, our designated users will only benefit if we put it all together really well.

5) When the project is over in late 2022, what kind of outcome do you expect?

We hope that the end result of XR4DRAMA will be a complete set of XR software that has been evaluated in two real world use cases, namely disaster management and media production planning. The infrastructure we have will be versatile and suitable for other scenarios and domains. Front-end tools will include a desktop and a mobile application as well as an AR and VR environment. We’ll equip professionals with a powerful solution that makes their work easier and safer. And of course, depending on the use case, the impact of XR4DRAMA may go well beyond professionals in the field: For example in the event of a flood, better decision making and disaster management will have positive effects for all people in an affected community.

Dear Sotiris, thank you for your time!

Our next interview partner will be Nicolaus Heise of DW.

CEM, HCI, SAR? – A Brief Guide to the XR4DRAMA Alphabet Soup

While writing concepts, technical documentations and tweets, we’ve realized that the XR4DRAMA consortium uses a lot of abbreviations and acronyms, many of which aren’t necessarily known to a non-expert audience. And even within the team, nobody has heard of all of them, as it’s almost impossible to be an expert in disaster management and media planning and data processing and computer linguistics and smart clothes and immersive technologies.

So in order to give everybody a good overview, we’ve compiled this handy glossary – which also defines/explains some of the more exotic or difficult terms:

  • AR: Augmented reality (a type of extended reality where users see their “real” environment, but with digital overlays)
  • CEM: Certified emergency manager
  • COWM: Citizen Observatories for natural hazards and Water Management
  • DP: Disaster Preparedness (coordinated actions taken to prepare for disaster, prevent them, or mitigate their impact)
  • DRR: Disaster Risk Reduction (coordinated actions that aim to reduce the damage caused by natural hazards, via an ethic of prevention)
  • ECCA: European Climate Change Adaptation Conference
  • ECG: Electrocardiogram (the process of recording the electrical activity of the heart, usually to check for cardiac abnormalities)
  • EGU: European Geosciences Union
  • EO data: Earth observation data
  • GIS: geographic information system (a conceptualized framework that provides the ability to capture and analyze spatial and geographic data)
  • GNSS: Global navigation satellite system
  • H2020: Horizon 2020 (an EU program for research and technological development – and the funding source of XR4DRAMA)
  • HCI: Human-computer interface (basically any device that lets humans interact with a machine, e.g. a keyboard, a touchscreen, a dataglove)
  • IA: Innovation action (a certain type of R&D project; XR4DRAMA is an IA)
  • ISCRAM community: an international community of people working in the field of Information Systems for Crisis Response and Management
  • IFAFRI: The International Forum to Advance First Responder Innovation
  • LIDAR: Light detection and ranging (also: laser imaging, detection, and ranging): a remote sensing method that uses light (a pulsed laser) to measure variable distances
  • MR: Mixed reality (like AR, but with interactive virtual objects anchored in the real world)
  • NLP/NLProc: Natural language processing (a subfield of computer science/artificial intelligence and linguistics that is focused on creating programs to handle and analyze large amounts of natural – i.e. human – language)
  • NLU: Natural language understanding (a subfield of NLP/NLProc that is focused on creating programs to comprehend what has been collected and processed)
  • PPE: Personal protective equipment
  • R&D: Research & development
  • SAR: Synthetic Aperture Radar (a type of radar used to create 2D or 3D reconstructions of landscapes and objects)
  • SVI mapping: Social vulnerability index mapping (efforts to visualize/map U.S. Census data that determines the social vulnerability of specific geographic regions)
  • VR: Virtual reality (a type of extended reality where users don’t see their “real environment”, but are rather fully immersed in a digital sphere)
  • WWS: Wearable wellness system (a body-worn system designed to monitor all kinds of physiological parameters)
  • XR: Extended reality (AR + MR + VR + all other forms of immersive media)

Have we missed a term that’s important in the context of XR4DRAMA?
Send us an email: consortium@xr4drama.eu

Meet The XR4DRAMA Consortium (I): Martina Monego and the Alto Adriatico Water Authority (AAWA)

Martina Monego during a surveying mission in her district (Eastern Alps)

In this series of blog interviews, we further introduce the people and organizations behind XR4DRAMA by asking them about their work and their particular interest in the project. Our first interview partner is Martina Monego of AAWA, an Italian public body dedicated to the management and regulation of the Alpi Orientali (Eastern Alps) hydrographic district.

1) Martina, when did you first get in touch with the concept of situation awareness (SA) and/or XR technology?

As a disaster manager, I’ve been familiar with SA for quite a while, but XR is a rather new thing to me. However, next to XR4DRAMA, I’ve also been involved in a (still unnamed) project that uses immersive technology for educational and training purposes. This one is about better engagement in learning processes and helping students improve their visualization skills. The basic idea is to simulate a flood in a very realistic way, so students can better understand the risks, the relevant aspects of preparing for a disaster like this, and the right behavior in case of emergency.

2) Who is in your team – and what are your colleagues working on at the moment?

The team consists of Michele Ferri, Daniele Nobiato, Francesco Zaffanella and myself. Michele Ferri is our development and innovation manager, in charge of coordinating hydrological research in the context of flood risk management, and the scientific lead. Daniele, Francesco and I are experts in hydrologic and hydraulic modelling, which includes data assimilation, flood forecasting, and flood risk assessment. As a team, we’re responsible for all kinds of projects. For example, we’re currently implementing a so-called Citizen Observatory (CO) on water in our district. In the scope of this CO, citizens provide information (e.g. on water and snow levels or flooded areas) via a mobile app. We then combine this information with other data and use it for early warning systems. The goal is to get a better picture of developments before and during a flood event and to facilitate communications between citizens, authorities, and agents in the field. In this way, we can increase the effectiveness of civil protection efforts. We also do presentations and training sessions for teachers, students, and civil protection volunteers.

Daniele Nobiato and Francesco Zaffanella in the Vicenza municipality control room

3) From your point of view: What are the most interesting aspects about XR4DRAMA?

Well, during disasters like floods, decision makers and first responders face a lot of stress – and need to understand the situation as clearly as possible, so they can act promptly, make the right call, and not waste valuable resources. XR4DRAMA will hopefully help them do a better job – and stay safe. At AAWA, we’re very interested in achieving a level of situation awareness that is as detailed and reliable as possible. In our pilot – which we’ll explain in detail later on – we can hopefully do two very interesting things: In phase number one, we’ll collect web information, sensor data, and other sources to predict and simulate a specific scenario, so that control room staff can check and verify all necessary emergency procedures.

4) What could be a challenge for the consortium?

In my opinion, the main challenge is to have enough data and repositories available, so we can get to a good level of SA – or the representation thereof – and fully exploit the potential of XR. Furthermore, our simulation should not be about high-end scenography, but about meaningful and tailor-made content that serves disaster managers. We need to effectively use the technology to support the work and ensure the safety of our first responders.

5) When the project is over in late 2022, what kind of outcome do you expect?

Our vision is to have an innovative and effective tool that improves emergency management in the control room, increases the safety of rescue units, and optimizes our resources.

Dear Martina, thank you for your time!

Our next interview partner will be Sotiris Diplaris of CERTH.

Situation Awareness: Classic Levels and New Concepts in XR4DRAMA

At the core of our project, there is always situation awareness (SA). Just in case the term is not familiar yet: SA describes how humans perceive the elements of a given environment within spatial and temporal confinements – and how that perception affects their performance and decision-making in the situation at hand. SA has become particularly important where decision-making happens under time pressure, remotely or among multiple operators, e.g. at a public authority managing natural disasters and sending out first responders – or at a media organization preparing for an outdoor TV production.

Perception, Comprehension, and Projection


Classic models – like the one that goes back to Mica Endsley – define three levels of SA:

  1. Perception

    The first and most basic level of SA is about monitoring, spotting, and recognizing things. L1SA is achieved as agents become aware of the different elements of a situation (e.g. other people and objects) and are also able to detect their status (condition, location etc.).

  2. Comprehension

    The second level of SA is about recognizing, interpreting, and evaluating the lie of the land. L2SA is achieved when agents understand what is going on around them (at present) – and what that means for their objectives.

  3. Projection

    The third and highest level is about projections and predictions. L3SA is achieved when agents extrapolate the information they have collected in L1SA and L2SA and are thus able to gain insights on what the situation (and elements therein) will probably look like in the future – and in what ways the mission may have to be adjusted.

Endsley’s model has been criticized, revised and extended since it was first published in 1995, but for the sake of simplicity – and because this is not an academic forum – we will not go into further details. Instead, we would rather like to briefly explain how the three levels of SA in XR4DRAMA are both similar to and different from the influential classic model outlined above.


Increasingly sophisticated renderings of locations

First mock-up of a possible XR4DRAMA dashboard providing already enhanced information
on a location (L2SA) – but no immersive mode (L3SA) yet.


XR4DRAMA wants to build a digital platform for people who are – remotely or directly – planning for and dealing with events and incidents in a specific location. Just like in the classic model, the XR4DRAMA SA levels will be subsequent ones that build on each other and go from low to high complexity. However, while Endsley et al. start out with simple perception and aim for sophisticated projections, the XR4DRAMA platform will rather focus on providing as much information as possible to achieve good enough or very good comprehension. That being said, the level of detail will always depend on the specific use case and the time that is available to users of the platform.

In any case, the consortium foresees a platform that makes use of three different levels:

  1. Simple Mode

    XR4DRAMA L1SA will be a simple, yet appealing visualisation/representation of a location that includes first information on geography, sociographics etc. as well as a couple of images and/or videos. L1SA is created automatically and relies on data from a number of (publicly available) web services.

  2. Enhanced Mode

    XR4DRAMA L2SA will be an enhanced visualisation/representation that draws on recent, exclusive content and updated information stemming from people with gadgets and sensors who are operating in the field (first responders or location scouts). We will have the XR4DRAMA system process their data and combine it with what has already been collected and visualized for XR4DRAMA L1SA.

  3. Immersive Mode

    XR4DRAMA L3SA will be a complex and comprehensive representation of a specific location, close to a simulation of an event within that environment. Here, the platform aggregates all the information from L1SA and L2SA and also allows users to immerse themselves in the situation via VR or AR features and tools (with a possible extra focus on more sophisticated audio). Ideally, this representation also enables test runs of specific strategies and methods (e.g. the simulation of camera movements in the media use case), thus connecting with the concept of projection/prediction in classic L3SA.
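To make the three-mode idea a bit more concrete for the technically inclined: the modes can be thought of as successive aggregation stages over one location representation. The following Python sketch illustrates that layering under our own assumptions – all class and field names are hypothetical and do not reflect the actual XR4DRAMA codebase or API.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative only: the three XR4DRAMA SA modes as successive
# data-aggregation stages on a single location representation.

class SALevel(Enum):
    SIMPLE = 1     # L1SA: automatic overview from public web services
    ENHANCED = 2   # L2SA: plus recent field data from sensors/gadgets
    IMMERSIVE = 3  # L3SA: aggregated data exposed via VR/AR tools

@dataclass
class LocationRepresentation:
    location: str
    level: SALevel = SALevel.SIMPLE
    data: dict = field(default_factory=dict)

    def add_web_data(self, web_info: dict) -> None:
        """L1SA: automatically collected, publicly available web data."""
        self.data.update(web_info)

    def add_field_data(self, sensor_info: dict) -> None:
        """L2SA: field-team data layered on top of the L1 baseline."""
        self.data.update(sensor_info)
        self.level = SALevel.ENHANCED

    def enable_immersion(self) -> None:
        """L3SA: make the aggregated picture explorable in VR/AR."""
        self.level = SALevel.IMMERSIVE

# Example: a flood-management scenario moving through all three modes.
rep = LocationRepresentation("Vicenza")
rep.add_web_data({"geography": "river basin", "images": 12})
rep.add_field_data({"water_level_cm": 180})
rep.enable_immersion()
```

The point of the sketch is simply that each mode strictly contains the previous one’s information – which is why the platform can degrade gracefully to whatever level of detail the use case and available time allow.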

In general, the consortium envisions a solution where all levels and functions are accessible to all stakeholders in a scenario, albeit in gradations and on different devices (because nobody wants to wear VR goggles in the field). As already mentioned on the project vision page, the ultimate dream is to create a platform that enables shared, distributed SA.


Photo by Cameron Venti on Unsplash

Hello world, this is XR4DRAMA!

19 (out of 27) members of the XR4DRAMA consortium representing 7 organizations from 4 countries.

We just wanted to let you know that our EU-funded Innovation Action (IA) has been officially kicked off! November 25th and 26th saw us meet online to discuss and organize all the research, development and communication tasks that lie ahead in the next two years. Using XR technology (and lots of data) for a new kind of situational awareness means entering uncharted territory – but it looks like we’re well prepared.

We had a long, productive, enjoyable video conference – with only one drop of bitterness: Everybody would’ve loved to meet afk/irl in Thessaloniki (where XR4DRAMA coordinator CERTH is headquartered) – and go for a social dinner at the end of the business day. Maybe that will happen sometime in 2021 or 2022, fingers crossed.

Getting back to the IA itself, you can already read quite a bit about the project vision on this page. And to learn more about the consortium behind XR4DRAMA, just go here.

This blog section will keep you up-to-date with regard to deliverables, milestones, insights, prototypes, events, and so on. The next couple of posts will give you a more elaborate intro to the concept of situational awareness and describe the two pilot use cases we have in store.

So stay tuned. And don’t forget to join the conversation on Twitter and LinkedIn!

Best regards from Athens, Barcelona, Berlin, Bonn, Cologne, Prato, Thessaloniki, and Venice,

The XR4DRAMA Consortium