Autonomous agents to accelerate extended reality testing

Extended reality (XR) systems are used in a wide range of sectors, yet they need to be tested before they can reach the commercial market. We spoke to Rui Filipe Fernandes Prada about the work of the iv4XR project in developing autonomous testing agents that promise to bring significant benefits to developers.

An extended reality (XR) system typically involves a representation of a virtual world, and many users quickly find themselves fully absorbed in that scenario, whether they’re in a flight simulation or playing a computer game. These systems are used in a wide variety of settings, from museums to combat training, but they first need to be tested before they can be applied, which is currently done in quite a labour-intensive way. “Currently a lot of human labour is required to test these XR systems, in the sense that you need users to test them,” says Rui Prada, coordinator of the iv4XR project. These testers are asked to try to perform specific tasks integral to the functioning of the system. “For example, the users might try to navigate from point A to point B in the virtual world, to interact with all the objects within it, or to combine objects,” explains Prada. “They are also asked to explore the system without much guidance. They might perform more open tasks, such as simply seeing what you can do, trying to finish a certain level in the game, or finding all the hidden objects. So in this case users have a bit more freedom.”
iv4XR project

The aim in the iv4XR project is to automate two kinds of tests, using techniques from artificial intelligence. In one type of test there is a specific task that can be scripted, so it might be expected that it should be relatively simple to automate; however, Prada says this is a technically challenging task. “In these XR applications you need intelligence to adapt as things change. You cannot have a simple script which performs well in XR – you need an intelligent agent to adapt,” he explains. The idea in the project is to essentially build models of individual users – autonomous testing agents – which can then test the system. “We are testing the technical specifications of the systems, while we are also looking at the users. Will a person with a specific profile enjoy the game? Will they be able to perform the task, given the skills and knowledge that they have?” outlines Prada. “Will the user be able to perform the tasks that they want to perform on the system? How does it feel for a user when they perform a task, will they be happy with the result?”

This may depend to a degree on whether a user is using the system for their own personal enjoyment or for training to develop their professional skills, and also on whether they feel they are progressing. As part of the project, Prada plans to model the knowledge and skills of users, and to investigate how the structure of a system affects their ability to learn. “If we configure the levels of a game in a certain way, do users progress more quickly?” he asks. A user who fails to master the system as quickly as they may have expected may become discouraged, while another might quickly understand how it works and then move on; by using multiple testing agents on a single system, researchers aim to build a fuller picture. “We try different profiles and we see how they use the system. That’s part of user experience testing,” continues Prada. “These autonomous testing
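The profile-driven testing idea described above can be sketched in a few lines of Python. This is purely an illustrative sketch: the names (`Task`, `UserProfile`, `TestAgent`, `run_suite`) and the skill-versus-difficulty rule are assumptions made for this example, not the iv4XR framework’s actual API or success model.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A test task in the virtual world, e.g. navigating from A to B."""
    name: str
    required_skill: str
    difficulty: int  # 1 (easy) .. 5 (hard)

@dataclass
class UserProfile:
    """Model of a user type: a mapping from skill name to proficiency 1..5."""
    name: str
    skills: dict

@dataclass
class TestAgent:
    """An autonomous testing agent acting according to one user profile."""
    profile: UserProfile
    log: list = field(default_factory=list)

    def attempt(self, task: Task) -> bool:
        # Simplifying assumption: the agent completes a task only if the
        # profile's proficiency in the required skill meets the difficulty.
        ok = self.profile.skills.get(task.required_skill, 0) >= task.difficulty
        self.log.append((task.name, ok))
        return ok

def run_suite(profiles, tasks):
    # Run the same task suite with several user profiles and report,
    # per profile, which tasks that user type would complete.
    return {
        p.name: {t.name: TestAgent(p).attempt(t) for t in tasks}
        for p in profiles
    }

tasks = [
    Task("navigate A to B", "navigation", 2),
    Task("combine objects", "manipulation", 4),
]
profiles = [
    UserProfile("novice", {"navigation": 2, "manipulation": 1}),
    UserProfile("expert", {"navigation": 5, "manipulation": 5}),
]
report = run_suite(profiles, tasks)
```

Running the suite with both profiles shows the kind of picture multiple agents can build: the novice profile completes the navigation task but fails to combine objects, while the expert profile completes both.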
Screenshot from the Space Engineers game, developed by Keen Software House.
EU Research