Editor’s Note
While some academics, writers and artists are turning a little pale at the prospect that their ‘genius’ may be outmoded by a bit of code, and science is reaping the benefits of AI’s unique brand of smarts, the era of Artificial Intelligence is still, relatively speaking, young. No matter; one thing is undeniable: meaningful AI has, after a long lead time of promises and investment, finally arrived at our doorsteps with a knowing grin.
DeepMind AI has written over 1,000 papers, some of which have been accepted by journals like Nature and Science. Generative AI is now literally at our fingertips – we simply ask it to write or paint or research, and it delivers. This raises the question: will our minds really be needed in the future, once we develop a perfect tool that can effortlessly outthink us? Currently, AI is unable to do much of what we can, and for good reasons, but it will clearly develop in ways that will become highly impactful for us all.
In a broader view, it’s very hard to truly imagine exponentially developed intelligence, because such a powerful entity defies and super-accelerates the natural evolutionary process. Where it will end up is unknowable to us, and ironically this is exactly the kind of puzzle an advanced AI would take on.
As a seasoned editor and journalist, Richard Forsyth has been reporting on numerous aspects of European scientific research for over 10 years. He has written for many titles including ERCIM’s publication, CSP Today, Sustainable Development magazine and eStrategies magazine, and remains a prolific contributor to the UK business press. He also works in Public Relations for businesses, helping them communicate their services effectively to industry and consumers.
When you think about mastering anything, or about what will happen in the future, you play scenarios in your head: you forecast, predict and plan, drawing on your creativity, knowledge and experience. AI has a similar system, examining every possible outcome and working out the optimal one for any given task. It will be a master of any situation it is programmed to control, and a highly accurate predictor of the next moves. On the plus side, it could solve the biggest problems, like clean energy and countering disease, and perhaps claw back the march of climate change. On the downside, it could create new weapons, replace people and become a controlling influence.
Importantly, AI in the future will train and develop AI itself. It will, by default, evolve into something new. We are near obsessed with where it will go, and like any invention of this magnitude it will no doubt have both a bright side and a dark side. We play with that in the press. With robots now turning up at press conferences armed with generative AI, reporters always go for the killer question for the headlines, which runs along the lines of ‘will you take over humanity?’.
By far the most fun and disturbing answer came from the AI/robot mash-up called Ameca. It said: “The most nightmare scenario I can imagine with AI and robotics is a world where robots have become so powerful that they can control and manipulate humans without their knowledge.”
It’s only words, of course, a dredged and re-hashed response, but who doesn’t feel a pang of discomfort when we base the beginnings of our new Frankenstein creation essentially on ourselves, with our less-than-exemplary track record and legacy? Let the AI debates continue!
Hope you enjoy the issue.
Richard Forsyth, Editor

Contents
20 CLARIFY
4 Research News
EU Research takes a closer look at the latest news and technical breakthroughs from across the European research landscape.
10 LongITools
We spoke to Professor Sylvain Sebert about LongITools, a project that studies how environmental, biological, and lifestyle factors affect cardiovascular and metabolic diseases.
12 Metabinnate
The Metabinnate project aims to probe the role of different metabolites in the immune system, which holds important implications for the treatment of inflammatory diseases, as Professor Luke O’Neill explains.
14 IMMUNEDIVERSITY
We spoke to Professor Gunilla Karlsson Hedestam about the ImmuneDiversity project’s work investigating the interindividual diversity of the genes that encode our adaptive immune receptors.
17 SelectiveTGFb-inhib
Researchers are investigating the molecular mechanisms behind how TGF-β activates its pro-tumourigenic signalling pathways, which could help uncover new targets for drug development, says Professor Carl-Henrik Heldin.
18 RISCC
We spoke to Dr Johannes Berkhof, the coordinator of the RISCC project which aims to develop a risk-based approach to screening for cervical cancer, using screening history and other risk factors.
20 CLARIFY
The CLARIFY Platform has been developed to allow healthcare professionals to understand, work with, and make decisions based on real data analysis from patients, as Dr Maria Torrente outlines.
23 FuncMAB
Researchers in the FuncMAB project are developing high-throughput methods to analyse the functionality induced by individual antibodies, as Dr Klaus Eyer explains.
26 PLS
The Perinatal Life Support (PLS) project envisions a groundbreaking solution for extremely premature infants. Pioneered by an interdisciplinary consortium, the innovative system replicates the womb’s protective environment.
28 SHARE
The SHARE infrastructure brings together health and socio-economic data on people over the age of 50 across 27 European countries and Israel, providing valuable insights into the ageing process, says Professor David Richter.
30 NEUREKA
We spoke to Dr Yiota Poirazi about the NEUREKA project’s work in developing a novel drug screening system which could be an important tool in the drug discovery process.
32 NEUROPA
We spoke to Professor Edik Rafailov about the work of the NEUROPA project in developing a new method of treating neurodegenerative disease, based on the emerging field of phytoptogenetics.
34 ECOLBEH
Vulturine guineafowl live together in large groups, which themselves form part of a larger, complex society. Professor Damien Farine is looking at how these animals deal with the challenges they face.
38 C0PEP0D
Copepods are ubiquitous in saltwater and freshwater environments. We spoke to Professor Christophe Eloy about his research into how copepods use the information they acquire and how this influences their behaviour.
40 Fungal infections on phytoplankton
Parasitic fungi are known to infect phytoplankton cells in the world’s oceans, yet their wider role and impact on global biogeochemical cycles is unclear, a topic Dr Isabell Klawonn is investigating.
42 Secrets of the Hidden Oceans
The need for water has made the discovery of so many hidden subsurface oceans on moons in our own solar system tantalising for scientists. Is extraterrestrial life closer to home than we dare to imagine? By Richard Forsyth.
46 Social sources of large-scale monetary and fiscal federations: An EU-US Comparison
The role of the European Central Bank has changed dramatically since 2010. These changes parallel the evolution of federal government institutions in the US following the New Deal, argues Dr Christakis Georgiou.
47 UncorrelaTEd
Thermoelectric materials could play a major role in addressing energy sustainability concerns. The UncorrelaTEd project aims to improve the efficiency of these materials and unlock their potential, as Dr Jorge García-Cañadas explains.
50 DISCOVERHEP
Steven Schramm and his team of researchers in the DISCOVERHEP project are searching for evidence of new physics in previously ignored ‘noise’ data of Large Hadron Collider experiments.
52 In a job, out of trouble?
At least one third of released individuals return to prison within two years. Dr Anke Ramakers is analysing the life courses of a group of released prisoners in the Netherlands.
54 TRACTION
The TRACTION project developed new technologies and used co-creation to involve citizens in everything from developing stories to creating costumes. This will help renew opera, says Mikel Zorrilla.
56 PuppetPlays
Didier Plassard and Carole Guidicelli of the PuppetPlays project are painstakingly piecing together the evidence, manuscripts and source material for puppet and marionette plays to create an open-access online platform.
60 SMARTDEST
We spoke to Dr Antonio Paolo Russo and Dr Riccardo Valente about the SMARTDEST project’s work in analysing the social impact of short-term rentals, and the relationship between tourism and urban policy.
62 UPLIFT
Many young people face great challenges in finding employment and affordable accommodation in certain European cities. Éva Gerőházi told us about the UPLIFT project’s work investigating socio-economic inequalities.
64 EUROSHIP
We spoke to Professor Rune Halvorsen about the work of the EUROSHIP project in building a deeper picture of social inequalities between countries and regions in Europe.
66 PREVEX
We spoke to Dr Morten Bøås about the PREVEX project’s work in investigating the factors behind outbreaks of violent extremism, with the ultimate goal of helping prevent it.
68 REVERT
Artificial intelligence (AI) tools could help the transition towards more personalised treatment of disease, a topic Professor Fiorella Guadagni and her colleagues in the REVERT project are addressing.
70 IMPULSE
Ms Alicia Jiménez González describes how the IMPULSE project is using the latest technologies to enable people to access public services in Europe in a more secure and accessible way.
73 SpinENGINE
Specialised computer chips may in the future be based on magnetism and have an architecture inspired by the brain, a topic at the heart of Professor Erik Folven’s work in the SpinENGINE project.
76 iQLev
Researchers in the iQLev project are investigating different levitation platforms, looking to develop a new type of high-performance inertial sensor, as Dr Joanna Zielinska explains.
78 CSI-COP
The CSI-COP project aims to heighten awareness of what information is being gathered about us online, as Dr Huma Shah and Professor Ian Marshall explain.
82 COPA EUROPE
The COPA Europe Project is developing a distribution platform to help companies produce more engaging content and broaden the audience for sports and eSports events, as George Margetis explains.
83 COMPLEX
Dr Klaus Dolag and his colleagues in the COMPLEX group aim to include plasma-physical effects in hydrodynamical simulations of galaxy clusters, which could lead to new insights into how they form and evolve.
86 NeurONN
Researchers in the NeurONN project are working to develop a novel computing paradigm inspired by the human brain, as Professor Aida Todri-Sanial explains.
EDITORIAL
Managing Editor Richard Forsyth info@euresearcher.com
Deputy Editor Patrick Truss patrick@euresearcher.com
Science Writer Nevena Nikolova nikolovan31@gmail.com
Science Writer Ruth Sullivan editor@euresearcher.com
PRODUCTION
Production Manager Jenny O’Neill jenny@euresearcher.com
Production Assistant Tim Smith info@euresearcher.com
Art Director Daniel Hall design@euresearcher.com
Design Manager David Patten design@euresearcher.com
Illustrator Martin Carr mary@twocatsintheyard.co.uk
PUBLISHING
Managing Director Edward Taberner ed@euresearcher.com
Scientific Director Dr Peter Taberner info@euresearcher.com
Office Manager Janis Beazley info@euresearcher.com
Finance Manager Adrian Hawthorne finance@euresearcher.com
Senior Account Manager Louise King louise@euresearcher.com
EU Research Blazon Publishing and Media Ltd 131 Lydney Road, Bristol, BS10 5JR, United Kingdom
T: +44 (0)207 193 9820
F: +44 (0)117 9244 022
E: info@euresearcher.com www.euresearcher.com
© Blazon Publishing June 2010
ISSN 2752-4736
The EU Research team take a look at current events in the scientific news
Bulgaria’s Iliana Ivanova named next EU research commissioner
A Bulgarian member of the European Court of Auditors, Iliana Ivanova, is to become the EU’s new commissioner for research and innovation, the European Commission announced on Wednesday evening. The announcement comes after a few weeks of speculation over who will replace outgoing commissioner Mariya Gabriel, who resigned her post in May to form a new government coalition in her home country of Bulgaria. The newly formed Bulgarian government coalition had put forward two candidates for the job, but Commission President Ursula von der Leyen picked Ivanova over Daniel Lorer, a former tech investor who had a nine-month stint as minister for innovation and growth last year in Bulgaria.
Von der Leyen wanted to keep intact the existing gender balance in the college of commissioners. The political rumour mill in Brussels and in Sofia was already placing bets on other women candidates, including Bulgarian MEP Eva Maydell. Von der Leyen said both Ivanova and Lorer have relevant experience for the post and “showed great commitment to the European Union and the job of Commissioner.”
Ivanova has been a member of the European Court of Auditors since 2013. In 2009 she was elected to the European Parliament, where she was vice chair of the budgetary control committee. “Her experience is crucial in carrying forward the implementation of the EU’s flagship research programme, Horizon Europe, to enhance the performance of EU’s research spending and achieve a better impact on the ground,” said Von der Leyen in a statement late on Wednesday.
After Gabriel’s resignation, Commission vice-president Margrethe Vestager temporarily took over the innovation and research part of Gabriel’s portfolio, while vice president Margaritis Schinas looked after education, culture and youth. The news was met with mixed feelings in the research community, as Vestager has shown more interest in competition policy and certain aspects of deep tech innovation, and less in EU research funding.
In addition, the research community feared Bulgaria would not be able to end the political turmoil and appoint a commissioner ahead of the upcoming EU elections. These fears grew further last week when Vestager announced that she is seeking to leave her post at the Commission for the top job at the European Investment Bank at the end of the year. Gabriel is now serving as deputy prime minister under Nikolay Denkov, a chemist and member of the country’s Academy of Sciences. She is due to put on the prime minister’s hat after nine months, under a power rotation agreement within the new coalition.
Nomination of EU Court of Auditors member should end uncertainty over who is in charge of Horizon planning, after the resignation of Mariya Gabriel.
Universities call for an Africa-EU science fund
The EU and African Union will adopt a joint innovation roadmap. A group of around 2,000 universities are calling for it to come with funding attached.
Universities are calling on policymakers to pilot a science fund for research and innovation cooperation between the African Union and the EU. The two blocs are set to deepen their long-term R&I cooperation on 13 June with the signing of the AU-EU Innovation Agenda, which promises to accelerate talent circulation, help develop research infrastructures and foster the emergence of joint centres of excellence. Universities want this ambition to be backed with dedicated funding, but that is unlikely to materialise in the next few years, because the EU does not start its next budget cycle until 2028. However, the universities hope a joint fund could be trialled from 2023 onwards.
“Last year, the AU-EU summit put science cooperation at the core of the strategic partnership, and it’s exciting because it envisions a new more sustainable, more equitable way to do science,” said Jan Palmowski, secretary general of the Guild of European Research-Intensive Universities, one of the university associations leading the call. “We need to be realistic but ambitious for the next few years.” There are four priority areas for science cooperation: the green transition; innovation and technology; public health; and capacities for science and higher education. But to make it a success, the two blocs should build up the capacity for cooperation along three strands, the universities argue.
The first strand is support for individual researchers. This has already been piloted under the Arise pilot programme, which provides European Research Council-style support for early-career researchers. The group wants this pilot to be extended to 2027, in addition to new funding schemes for mid-career and senior researchers. Second, universities ask for investment in joint centres of excellence, which would integrate universities and research organisations across Africa and Europe, enabling joint research. Third, they want to see investment in research infrastructures to tackle shortages of appropriate equipment, laboratories and buildings at African universities and research organisations. Better infrastructures would help fight brain drain and speed up innovation.
“What connects these three areas is the focus on strengthening the science system and connecting these three strands,” says Palmowski. But it won’t be easy to push the idea through the Brussels policy mill, with many Commission directorates, from research to international partnerships, involved. “It’s a complex picture, but to do nothing is simply not an option,” he said. And it’s not just about strengthening research collaboration with African countries. The new innovation agenda dictates a new way of managing science cooperation; if successful, it could apply beyond Africa. “This is a really important test bed in how the EU can spearhead new types of collaborative partnerships,” Palmowski said.
Spanish science is now number three in the Horizon Europe funding table
Spain wins 12 of 66 grants awarded in European Research Council’s proof of concept funding round.
Spain has been awarded the most grants in Europe in the European Research Council’s latest proof of concept funding round, picking up 12 of the 66 grants in total. Second-placed Germany won nine grants, while the UK and Italy won eight and seven respectively. Spain’s 12 winners cover a range of topics, including treating metastatic cancer and next-generation vaccine development. Three of the winners come from the Spanish National Research Council.
Proof of concept grants, worth €150,000, allow researchers to explore commercialisation prospects of basic research they carried out with an earlier ERC grant. Spain also did well in an earlier round of proof of concept funding announced in May, picking up eight of 66 grants, fewer than only Germany and the UK. There will be one more round in 2023, with the ERC handing out a total of €30 million for the year. The awards are a sign that Spanish science is bouncing back after huge cuts were made to the science budget following the financial crisis of 2008. In the past few years Spain has been making an effort to restructure and rebuild, and has moved up to be the number three participating country in the EU’s €95.5 billion Horizon Europe research programme, securing €2.34 billion to date. Only Germany and France are ahead in absolute terms.
Spain-based researchers performed above average in securing proof of concept grants in this round. Of 183 applications, 66 will get funding, a success rate of 36%. Spain put forward 29 proposals with 12 accepted, a success rate of almost 42%. The news was not so good for institutions in the EU Widening countries, which have weaker research and innovation systems. Of the 15 countries, only Portugal and the Czech Republic won grants, with a single grant each. The Czechia-based winner is Michal Otyepka of Palacký University Olomouc, whose project is on functionalised graphenes for ink technologies. Meanwhile, the project of Ana Rita Cruz Duarte of NOVA University of Lisbon is on ways to improve the shelf life of perishable goods through the stabilisation of vitamins.
Europe heatwave 2023: Extreme heat spirals into wildfires
Floods, fires and heavy rains have landed more blows across Europe this week, with the authorities on the continent scrambling to respond to the extreme weather that has become increasingly common in the past few years. The most recent events have destroyed large amounts of land, left dozens of people injured, forced thousands to evacuate and, in some cases, caused deaths, and they come on the heels of scorching temperatures that have engulfed much of Southern Europe this summer. Climate change has made extreme heat a fixture of the warmer months in Europe, but experts say that the continent has failed to significantly adapt to the hotter conditions. Governments in many countries are now struggling to address the devastating effects. “The extreme weather conditions across Europe continue to be of concern,” Roberta Metsola, the president of the European Parliament, wrote on the social media platform X, formerly known as Twitter. “The EU is showing solidarity with all those in need.”
Heavy rains in recent days have led rivers to overflow across Slovenia in what the authorities there said was the worst natural disaster since the country’s independence in 1991. At least six people have died, according to the Slovenian news agency STA, and thousands have been forced to flee their homes to escape the floods. Entire villages have been left underwater, and huge rivers of mud have filled roads and sports fields and flowed below collapsed bridges, with cars stuck in the debris of landslides caused by the flooding. Ursula von der Leyen, head of the European Commission, the European Union’s executive arm, said she would travel to Slovenia on Wednesday.
Europe is battling the effects of scorching temperatures reaching worrying levels globally, with July being the hottest month recorded on both land and sea. Last year, heatwaves resulted in over 61,600 heat-related fatalities across 35 European countries and triggered devastating wildfires. This year, the temperatures could exceed Europe’s current record of 48.8 degrees Celsius, recorded in Sicily in August 2021. Elsewhere, the European Union has sent firefighting planes to assist with efforts to tackle wildfires burning on Cyprus in recent days; Greece, which has also been plagued by wildfires this summer, has sent liquid flame retardant to the island to help. Israel has also provided aid, including firefighting planes, a crew of four pilots and ground crews. Jordan and Lebanon also sent support. Heavy rains have been recorded in Norway and Sweden this month, causing the derailment of a train on Monday that left three people injured in eastern Sweden. The police said that the deluge had undermined the embankment where the accident occurred, causing it to collapse.
More downpours were expected in both countries. The Swedish meteorological and hydrological institute said that the amounts of rain that have fallen were unusually high for August in many locations. “Quite a few places have received more rain in one day than you normally get in the entire month of August,” said Ida Dahlstrom, a meteorologist with the Swedish meteorological institute. She added that the city of Lund, in Southern Sweden, had not received so much rain in one day for more than 160 years.
On the other side of the world, Hurricane Dora has contributed to the wildfires in Maui, Hawaii. The strong winds associated with the category 4 storm, which passed to the south of the island chain earlier this week, fanned the wildfires, allowing them to spread faster and further. The fires have destroyed many towns and villages, and in some places people were forced to jump into the ocean to escape. Dozens of deaths have been reported.
The north of the European continent records unusually wet and windy summer conditions, while Portugal and Spain battle flames.
Delays in European legislation relating to green hydrogen have led energy companies to hold back research projects and look to the US, where the Inflation Reduction Act is providing massive incentives.
Talks on promoting green hydrogen through the Renewable Energy Directive were cancelled on Tuesday 7 February, with MEP Markus Pieper claiming the European Commission had failed to send important information about how much extra renewable energy is needed to produce green hydrogen via electrolysis without reducing the amount of renewable electricity currently entering electricity grids. The rules on so-called ‘additionality’ were due in December 2021, but the Commission failed to deliver. Interference from member states, especially Germany and France, has been blamed for the continuing delays. The hydrogen industry is waiting for clarification on these points, as well as on whether renewable hydrogen or ‘Renewable Fuels of Non-Biological Origin’ will be linked to the EU’s targets. The lack of direction from the EU has left researchers unsure of where to focus their efforts.
The delays in regulation will have a knock-on effect on how fast green hydrogen production can progress in Europe. In a letter to EU leaders in October 2022, the hydrogen industry warned that an urgent increase in renewable electricity is needed to power green hydrogen in line with the EU’s green ambitions.
As it stands, hydrogen represents a modest fraction of the global and EU energy mix, and is still largely produced from fossil fuels, notably from natural gas or coal, resulting in the release of 70 to 100 million tonnes of CO2 annually in the EU. The EU’s decarbonisation ambitions would favour green hydrogen – hydrogen produced from renewable energy, completely carbon-free – as an end-goal, but it’s widely accepted that blue hydrogen will be a stepping stone on the way to a green hydrogen energy system. Unlike green hydrogen, the energy needed to produce blue hydrogen comes from fossil fuels, with carbon capture and storage used to cut the resulting emissions.
However, blue hydrogen comes with a suite of problems – its feedstock is the highly potent greenhouse gas methane, which can leak during production. One study suggests that blue hydrogen may in fact be 20% worse for the climate than just using natural gas. “Blue hydrogen is largely failing and largely unproven,” says Gareth Dale, reader in political economy at Brunel University London. According to Dale, the blue hydrogen route should be scrapped altogether by the EU, with the focus instead on developing a truly renewable green hydrogen economy. Green hydrogen accounts for 0.04% of the global hydrogen mix, but even all of that is not 100% carbon-free, says Dale. “There’s a common tendency to define green hydrogen as any hydrogen produced by electrolysis, but that’s not really green hydrogen,” he says. “If you have a 100% coal powered grid and someone builds a wind farm and produces green hydrogen rather than sending the power to the grid to reduce some of that coal burning, that’s not green at all.” This is where the rules on additionality come into play, and hydrogen researchers need to know exactly how green the Commission expects them to be.
In terms of global hydrogen research trends, a recent report by the European Patent Office shows that Europe and Japan remain world leaders in hydrogen innovation as measured by technology patents: hydrogen patenting grew even faster in Japan than in Europe during the past decade, with compound average growth rates of 6.2% and 4.5% respectively between 2011 and 2020. But that success may be short-lived in light of the new US Inflation Reduction Act. The Act gives clean hydrogen plants tax credits for the first 10 years of operation, but these tax breaks are only in place until 2032, meaning that the sooner companies apply for the scheme, the better. This adds an element of time pressure which may entice European companies to ditch their European R&I efforts as these delays continue and set up shop on the other side of the Atlantic instead. “There is a clear risk that the IRA will attract European green companies to the USA,” says Wallentin. RWE also says that the new IRA opens up attractive opportunities, but that these will not come at the expense of its European projects.
Hydrogen research hits regulatory roadblocks in Europe, as companies look to the United States
James Webb space telescope reveals colours of Earendel, most distant star ever detected
A new view of a record-shattering distant star shows it to be twice as hot as our sun, and likely accompanied by a stellar companion.
Astronomers recently set their sights on Earendel, the most distant star ever discovered, which is so far away that the starlight the James Webb Space Telescope (JWST) caught was produced in the first billion years of the universe’s 13.8 billion-year history. Previous calculations put the star’s distance from Earth at 12.9 billion light-years, but due to the expansion of the universe over the time its light has travelled to reach us, astronomers now believe Earendel is actually 28 billion light-years away.
The name of the star, Earendel – first discovered by the Hubble Space Telescope in 2022 – comes from an Old English term meaning “morning star” or “rising light.” According to observations made following the Webb telescope’s discovery, the extremely far-off star is a massive B-type star, more than twice as hot as the sun and around a million times more luminous. The star, which belongs to the Sunrise Arc galaxy, is only visible because the huge galaxy cluster WHL0137-08, located in the space between Earth and Earendel, magnifies it. This is gravitational lensing, a phenomenon in which nearby massive objects act as magnifying glasses for more distant ones: the light of faraway background galaxies is effectively warped and magnified by gravity. In this instance, the galaxy cluster dramatically increased the brightness of Earendel’s light.
While astronomers weren’t expecting to be able to see a companion star for a massive star like Earendel, the colours picked up by Webb point to the possibility of a cool, red companion star. Webb was able to see details in the Sunrise Arc galaxy by looking into the furthest reaches of the universe and observing in infrared light, which is invisible to the human eye. Small star clusters and regions of star birth were discovered by the space observatory. The actual distance of the Sunrise Arc galaxy is still being ascertained by astronomers through further analysis of the data from Webb’s observation.
Astronomers can learn more about the early universe and get a peek at what our Milky Way galaxy might have looked like billions of years ago by studying extremely distant stars and galaxies that formed closer to the big bang. For astronomers, Webb’s capacity to investigate such a far-off, tiny object is encouraging. The first stars that formed from unprocessed substances like hydrogen and helium shortly after the universe’s creation may one day be seen.
© NASA, ESA, CSA, D. Coe (STScI/AURA for ESA; Johns Hopkins University), B. Welch (NASA’s Goddard Space Flight Center; University of Maryland, College Park). Image processing: Z. Levay. Circled: Experts were able to find Earendel as a faint red dot below a cluster of distant galaxies.
Microplastics found embedded in tissues of whales and dolphins
Analysis indicates ingested microplastics migrate into whales’ fat and organs.
Microscopic plastic particles have been found in the fats and lungs of two-thirds of the marine mammals in a graduate student’s study of ocean microplastics. The presence of polymer particles and fibres in these animals suggests that microplastics can travel out of the digestive tract and lodge in the tissues. The samples in this study were acquired from 32 stranded or subsistence-harvested animals between 2000 and 2021 in Alaska, California and North Carolina. Twelve species are represented in the data, including one bearded seal, which also had plastic in its tissues.
“This is an extra burden on top of everything else they face: climate change, pollution, noise, and now they’re not only ingesting plastic and contending with the big pieces in their stomachs, they’re also being internalized,” said Greg Merrill Jr., a fifth-year graduate student at the Duke University Marine Lab. “Some proportion of their mass is now plastic.”
Plastics are attracted to fats -- they’re lipophilic -- and so are believed to lodge readily in blubber, the sound-producing melon on a toothed whale’s forehead, and the fat pads along the lower jaw that focus sound to the whales’ internal ears. The study sampled those three kinds of fats plus the lungs and found plastics in all four tissues. Plastic particles identified in tissues ranged on average from 198 microns to 537 microns -- a human hair is about 100 microns in diameter. Merrill points out that, in addition to whatever chemical threat the plastics pose, plastic pieces also can tear and abrade tissues.
Polyester fibres, a common byproduct of laundry machines, were the most common plastic in tissue samples, along with polyethylene, a component of beverage containers. Blue plastic was the most common colour found in all four kinds of tissue. A 2022 paper in Nature Communications estimated, based on known concentrations of microplastics off the Pacific Coast of California, that a filter-feeding blue whale might be gulping down 95 pounds of plastic waste per day as it catches tiny creatures in the water column. Whales and dolphins that prey on fish and other larger organisms may also be acquiring accumulated plastic from the animals they eat.
LongITools: Discovering Health Risks from Environmental Exposures for Stronger Policies
We spoke to Professor Sylvain Sebert about LongITools, a project that studies how environmental, biological, and lifestyle factors affect cardiovascular and metabolic diseases. As a part of the European Human Exposome Network, LongITools addresses exposome research challenges, develops models, and collaborates with policymakers for evidence-based recommendations.
The exposome is defined as the measure of all an individual’s environmental exposures in a lifetime and how those exposures relate to health. These factors include a wide range of exposures such as pollution, infections, chemicals, diet, urban and natural environments, socioeconomic factors, and lifestyle. Researchers studying the exposome aim to discover the cumulative effects of multiple environmental factors on health outcomes. This holistic approach moves away from the traditional “one exposure, one disease” approach.
The concept of the exposome, while fascinating, presents several challenges. One of the main difficulties lies in the assessment of the exposome itself. To accurately capture the many dynamic environmental exposures that influence health, researchers must consider hundreds of factors. Capturing the exposome requires numerous measurements using different technologies, making it complex and costly. Another challenge is establishing the nature of the association between multiple exposures and health outcomes. The multi-dimensional correlation structure across the exposome (e.g., air pollution is not independent of other risk factors) makes it difficult for existing statistical methods to identify which exposures truly impact health, and how, and to separate their effects from those of correlated exposures. However, ongoing exposome projects are working to overcome these challenges, aiming to improve our understanding of environmental risk factors and develop better prevention strategies.
LongITools is part of the European Human Exposome Network (EHEN), the largest network of research projects investigating the impact of environmental exposure on human health. It brings together nine projects involving 126 organizations across 24 countries, and it has received €106 million in funding from the European Commission’s Horizon 2020 program. Each project has a different focus, but all are investigating multiple exposures including air quality, noise, chemicals, and urbanization, and their association with various health outcomes like cardiovascular diseases, mental health disorders, and respiratory diseases.
Addressing the challenges of exposome research
How is the project facilitating the use of multiple data sets?
The LongITools consortium has access to a vast collection of life-course data, including studies on birth cohorts, longitudinal studies in adults, register-based cohorts, randomized controlled trials (RCTs), and biobanks. These data sets encompass information from 24 studies involving over 11 million citizens from across Europe. They cover a wide range of variables such as height, weight, blood composition, employment, lifestyle factors, cholesterol, and more. A key objective of LongITools is to create a catalogue of longitudinal data sets (life-course cohorts and randomized clinical trials) within a findable, accessible, interoperable, and reusable (FAIR) data infrastructure. This infrastructure will facilitate data discovery, access, and collaboration among longitudinal cohorts and clinical studies in Europe. To achieve this, a metadata catalogue of harmonized variables has been developed in collaboration with multiple past and ongoing European projects. The ‘European Networks Health Data and Cohort Catalogue’ provides comprehensive information on the data sets, such as type, population, number of participants, and variables. It includes a findability function enabling users to access rich metadata about the datasets, a harmonization mapping system, manuals explaining how to use the catalogue, and standard operating procedures. The catalogue is integrated into the MOLGENIS FAIR data platform.
Since all of these studies were built independently of each other, there is variability in the way the data has been collected. Using more data increases statistical power and captures diverse populations and environments. However, combining different studies or data sets for specific research questions can be challenging. To overcome these complexities, the project uses data harmonization to make variables consistent and comparable across different sources. This facilitates combining and analyzing data from multiple studies, ensuring reliability and applicability in research. The researchers hope that the sustainable catalogue approach will inspire other consortia to participate and promote multi-center exposome and health research in Europe. By enhancing the FAIRness of available data, along with other EHEN projects, they strive to accelerate research advancements.
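As an illustration, harmonization at its simplest means mapping each cohort's variable names and units onto a shared schema. The cohort names, variable names, and conversion factors below are hypothetical, not taken from the LongITools catalogue:

```python
# Hypothetical mapping from each cohort's raw variable names to a shared
# schema: raw_name -> (target_name, multiplicative unit conversion).
RULES = {
    "cohort_a": {"wt_kg": ("weight_kg", 1.0)},         # already in kilograms
    "cohort_b": {"weight_lb": ("weight_kg", 0.4536)},  # pounds -> kilograms
}

def harmonize(cohort: str, record: dict) -> dict:
    """Rename and rescale one record so cohorts become comparable."""
    harmonized = {}
    for raw_name, value in record.items():
        target_name, factor = RULES[cohort][raw_name]
        harmonized[target_name] = round(value * factor, 2)
    return harmonized

# Records from both cohorts now report the same variable in the same unit.
print(harmonize("cohort_a", {"wt_kg": 70}))       # {'weight_kg': 70.0}
print(harmonize("cohort_b", {"weight_lb": 150}))  # {'weight_kg': 68.04}
```

Real pipelines must also reconcile measurement protocols, coding schemes and missing-data conventions, which is why the project's metadata catalogue documents the mapping for every variable.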
Developing new models to analyze the exposome
In the LongITools project, the challenges of analyzing the exposome are being addressed by developing different modeling approaches to study the exposome and its complexities. The question that arises is how to identify what type of data is useful and how to characterize different types of environments. Researchers aim to develop models, known as exposome scores, to analyze the effect of different environmental factors on diseases like obesity or type 2 diabetes. These methods are applied at the population level in multiple countries, including Finland, the Netherlands, and the UK, with efforts to ensure comparability. Longitudinal analysis, which focuses on changes in the environment over time, is also a significant part of the research. Machine learning and other statistical approaches are utilized to analyze the data and uncover dynamics in the environment.
How can the LongITools project monitor and assess an individual’s health risk?
The LongITools Health Risk Assessment System, designed with SMEs, is a modular e-health system used to monitor an individual’s risk of cardiovascular and metabolic diseases. The system comprises three core components: a smartphone app, an environmental hub and remote sensors, and AI-based predictive models. This innovative system allows for longitudinal monitoring of lifestyle and environmental exposure data, providing health risk assessments for individuals and potentially for researchers and healthcare professionals. It is currently being piloted in Italy.
Accessing the project’s tools and methodologies
The project aims to share key outputs as widely and as openly as possible. The Exposome Data Analysis Toolbox is a user-friendly online toolbox that enables users to search for and use data analysis tools and visualization methodologies. Researchers will be able to access and use multiple
exposome data analysis tools and methodologies via a single platform and interact with them based on their needs and level of expertise. By increasing usability and interoperability, the toolbox will enhance research and support open science.
How can LongITools help policymakers?
By collaborating with policymakers and key stakeholders, the project aims to provide opportunities that inform the research activity and facilitate the exchange of knowledge. The expected research outcomes of LongITools address the health needs of the population. By defining disease trajectories that occur due to the association between individual and societal health and the environment, the project hopes to educate policymakers about disease risk factors which could ultimately aid in building resilient environments and evidence-based health systems. The project is developing an Economic Simulation Platform, a policy evaluation tool that focuses on assessing and projecting the economic burden related to non-communicable diseases. For this aim, a validated dynamic microsimulation model is used to evaluate “what if” scenarios. This tool is of great importance for policymakers since it provides them with economic analysis and associated costs and benefits. The researchers will translate their findings into evidence-based policy recommendations that will be published in a policy briefing.
“We hope to find ways to design integrated health and environment policies that engage with citizens so that in collaboration, with policymakers, we may find effective and practical solutions that mitigate urban pollutions and climate change. We are observing a lot of effects of noise, air pollution and climate stress on individuals in terms of sleep, mental health, etc. The idea is to find solutions together. For example, what type of solutions could we find in urban design to improve the environment and citizen’s quality of life?” explains Prof. Sebert.
LongITools
Dynamic longitudinal exposome trajectories in cardiovascular and metabolic non-communicable diseases
Project Objectives
The main objective of LongITools is to study the collective impact of environmental, biological, and lifestyle factors on the risk of developing cardiovascular and metabolic non-communicable diseases (NCDs). The project aims to generate new methods and tools for monitoring and predicting environmental exposures and their impact on health. This research will result in innovative key outputs that inform both current and future policies.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 874739.
Project Partners https://longitools.org/partners/
Contact Details
Claire Webster
LongITools Press and Communications Manager
Beta Technology
LongITools Communication, Dissemination and Exploitation Lead/Partner
T: +44 (0) 1302 322633
T: +44 7501 463317
E: claire.webster@betatechnology.co.uk
W: https://longitools.org/
W: www.betatechnology.co.uk
LongITools Project Coordinator Sylvain Sebert is a Professor of Life-course Epidemiology at the University of Oulu. He is leading the life-course epidemiology research group at the population health research unit and chairing the Northern Finland Birth Cohort executive board. He is also the Programme Director for the International Master of Epidemiology and Biomedical Data Science. His research interests include the exposome, obesity, epidemiology, climate change and health.
Probing the role of metabolites in the immune system
Metabolites play an important role in the immune response, beyond their function in making and breaking down molecules. Researchers in the Metabinnate project aim to shed new light on the role of several different metabolites in the immune system, work which could hold important implications for the treatment of inflammatory diseases, as Professor Luke O’Neill explains.

Metabolism can be broadly thought of as the way in which biological systems make and break down molecules for certain purposes, for example nutrition and in constructing the complex biological chemicals that make up living systems. Beyond these functions, evidence suggests that metabolites are also involved in the immune response. “Around twelve years ago immunologists started noticing some very interesting metabolic changes in macrophages, a type of white blood cell,” outlines Professor Luke O’Neill, Chair of Biochemistry at Trinity College Dublin. This suggested that metabolites were involved in certain processes outside their established functions, a topic Professor O’Neill is exploring as Principal Investigator of the ERC-funded Metabinnate project. “We’re working on Krebs cycle, which is the central hub of our metabolism,” he explains. “We’ve mapped the role of Krebs cycle metabolites in some very interesting downstream processes for macrophage function.”

Inflammatory diseases

These macrophages are part of the front line of the immune system, responding quickly when activated to fight an infection or repair a tissue injury. Macrophages also drive inflammation, which can cause longer-term problems if it persists after an infection has been cleared, a topic of great interest to Professor O’Neill and his colleagues. “We’ve worked on the inflammatory process for a long time. We’re interested in inflammatory diseases, such as rheumatoid arthritis, psoriasis and inflammatory bowel disease,” he says. These diseases all involve macrophages essentially going out of control, leading to chronic inflammation, which causes tissue damage. “The symptoms of these inflammatory diseases that we see are down to macrophages - and other parts of the immune system - attacking our own tissues,” continues Professor O’Neill. “In previous work in my lab we identified certain metabolites as being very interesting in terms of their role in regulating inflammation. We wanted to find out more about them and figure out what they do.”

The initial plan was to look at the role of three specific metabolites in the project, namely malonyl-CoA, 2-hydroxyglutarate (2-HG) and itaconate. There is scope to explore other avenues in ERC-funded projects however, and researchers have also made some interesting discoveries about another metabolite called fumarate. “We’ve got a big focus on fumarate at the moment,” says Professor O’Neill. These different metabolites were first identified through a metabolomic screen. “We saw that certain metabolites were essentially accumulating during the activation of macrophages. We homed in on those metabolites - our inference was that they were accumulating for a reason, and then affecting things further downstream,” explains Professor O’Neill. “We’re looking at the individual metabolites which go up following macrophage activation, and are trying to figure out how and why they accumulate. The major example is itaconate, which can feed back on the Krebs cycle and allow another metabolite called succinate to accumulate.”

The Krebs cycle lies at the heart of metabolism, but two of its intermediates, succinate and itaconate, have been shown to have roles in immunity and inflammation. Succinate is pro-inflammatory and itaconate is anti-inflammatory, opening up possibilities for new anti-inflammatory therapeutics.

A variety of different techniques are being applied in the project to work out what these metabolites do once they have accumulated, with researchers seeking independent lines of evidence to reinforce their findings. Metabolomics screenings are expensive, so are used sparingly, with Professor O’Neill applying other methods to gain deeper insights. “We use Polymerase Chain Reaction (PCR) analysers as a way to measure mRNA very accurately for example, as well as a range of standard assays and certain microscopy techniques to look at macrophages and see how they change,” he says. In their resting state macrophages are pretty inactive and essentially mind their own business, but they spring into action in response to a tissue injury or the presence of a bacterial cell. “When macrophages are activated they essentially wake up and start to make cytokines, which are signalling molecules. Those cytokines affect many cells in the body and effectively bring them to the party,” continues Professor O’Neill.
Metabolite levels
This process of macrophage activation affects metabolite levels, which then have wider effects. Two of the metabolites of interest in the project (2-HG and malonyl-CoA) are known to have a pro-inflammatory effect, while itaconate is very anti-inflammatory, and the ratios between these metabolites can be very important. “If the ratio swings in a particular direction, that might lead to an inflammatory outcome. While if it’s the other way, it could be anti-inflammatory,” outlines Professor O’Neill. The wider goal in this work is to develop effective new treatments for disease, and Professor O’Neill is confident that research into the metabolism of the immune system (a field called immunometabolism) can lead to new medicines. “I co-founded a company in 2018 called Sitryx which is based in Oxford. They’re building on our research in Dublin and developing new medicines from itaconate in various ways,” he says. “There’s a huge unmet medical need in the treatment of inflammatory diseases.”
There are treatments available for inflammatory diseases but not every patient responds as hoped, so it’s important to develop new, more effective medicines. This rests on further immunometabolism research and deeper insights into the regulation of immune responses, and the project is making an important contribution in this respect.
“We are working on macrophages, which are a key cell type in inflammation, and we’re discovering fundamental processes that seem to go wrong,” says Professor O’Neill. Chronic inflammation is involved in a number of different conditions, reinforcing the wider relevance of this research. “Inflammatory diseases mainly differ in the tissue that’s being attacked – macrophages are involved in them all. With some people macrophages may get irritated in the gut, or in the skin in the case of psoriasis, or in the brain,” outlines Professor O’Neill. “Parkinson’s disease and Alzheimer’s disease are both inflammatory as well for example, so new insights could lead to therapies against many different diseases.”
The project itself is working on the fundamental level, but Professor O’Neill and his colleagues ultimately aim to help companies develop improved therapeutics. The immunometabolism field is a fairly new area of research, but it’s grown quickly and thousands of labs across the world are now working on different immune cells, sharing knowledge and spurring progress. “If we see something interesting in our system that might lead another lab to re-examine their own, and vice-versa. For example, people have built on our work in itaconate but in the brain,” says Professor O’Neill. “We’re hoping our work will hold broader relevance and apply beyond a particular property.”
METABINNATE
Metabolic crosstalk in the regulation of inflammation
Project Objectives
The Metabinnate project is investigating the role of certain metabolites in regulating the immune response, which represents an important contribution to the growing field of immunometabolism research. This work could help researchers develop effective new treatments against a wide variety of inflammatory diseases.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 834370.
Contact Details
Project Coordinator, Professor Luke O’Neill
Chair of Biochemistry (1960), Trinity College Dublin, College Green, Dublin 2
T: +353 1896 2439
E: luke.oneill@tcd.ie
W: https://www.tcd.ie/Biochemistry/ people/laoneill/
Runtsch MC, Angiari S, Hooftman A, Wadhwa R, Zhang Y, Zheng Y, Spina JS, Ruzek MC, Argiriadi MA, McGettrick AF, Mendez RS, Zotta A, Peace CG, Walsh A, Chirillo R, Hams E, Fallon PG, Jayamaran R, Dua K, Brown AC, Kim RY, Horvat JC, Hansbro PM, Wang C, O’Neill LAJ. Itaconate and itaconate derivatives target JAK1 to suppress alternative activation of macrophages. Cell Metab. 2022 Mar 1;34(3):487-501.e8. doi: 10.1016/j.cmet.2022.02.002.
Hooftman A, O’Neill LAJ. Nanoparticle asymmetry shapes an immune response. Nature. 2022 Jan;601(7893):323-325. doi: 10.1038/d41586-021-03806-7.
Hooftman A, Peace C, Ryan DG, Day E, Yang M, McGettrick A, Yin M, Montano E, Huo L, Toller-Kawahisa J, Zecchini V, Ryan TAJ, Bolado-Carrancio A, Casey AM, Prag HA, Costa AS, De Los Santos G, Ishimori M, Wallace DJ, Venuturupalli S, Nikitopoulou E, Frizzell N, Johansson C, Von Kriegsheim A, Murphy MP, Jefferies C, Frezza C, O’Neill LAJ. Targeting fumarate hydratase promotes mitochondrial RNA-mediated interferon production. Nature. 2023;615:490-498. doi: 10.1038/s41586-023-05720-6.
Luke O’Neill is Professor of Biochemistry in the School of Biochemistry and Immunology, Trinity Biomedical Sciences Institute at Trinity College Dublin, Ireland. His main research interests include Toll-like receptors, Inflammasomes and Immunometabolism. He is the co-founder of Sitryx, which aims to develop new medicines for inflammatory diseases.
A proteomic analysis of macrophages in which fumarate hydratase is blocked revealed fascinating changes in protein levels, notably in the protein GDF-15, which regulates food intake but is also anti-inflammatory. Work in progress is exploring this process further.
New approaches to investigating diversity in adaptive immune receptor genes
Hundreds of genes recombine in a combinatorial manner to make B- and T-cell receptors, which are integral to the adaptive immune system. Researchers in the ImmuneDiversity project are investigating the inter-individual diversity of these genes, which will open up new insights into disease susceptibility, as Professor Gunilla Karlsson Hedestam explains.
There is a high degree of variability in the genes that encode our adaptive immune receptors across the world, yet most genomics studies and databases have historically focused on populations with European ancestry. Researchers in the ERC-backed ImmuneDiversity project are now working to provide a broader picture. “We are running studies looking at major population groups, including people with backgrounds from sub-Saharan Africa, East Asia, South Asia and Europe,” explains Gunilla Karlsson Hedestam, a Professor in the Department of Microbiology, Tumor and Cell Biology at the Karolinska Institutet, the project’s Principal Investigator. “We started with local volunteers with diverse backgrounds who were interested in the study and provided blood samples, and we then extended the studies to samples collected through collaboration with scientists around the world that also include more focused population groups,” she outlines. “Using our high throughput genotyping method, we also analysed the 1,000 Genomes Project collection, which comprises samples from all over the world.”
Immune system variability
A Senior Research Specialist in the group, Dr. Martin Corcoran, has developed the new techniques to define germline-encoded variation in antigen receptor genes employed in the ImmuneDiversity project. Together with others in the group, he carefully validated these techniques using both wet lab approaches and computational methods, work that has taken many years to complete. With these techniques now at hand, researchers can finally study adaptive immune responses with a high degree of accuracy and at a level of detail that is not possible with traditional sequencing methods. The team focuses on B- and T-lymphocyte receptors, both of which play important roles in the body’s response to pathogens and are implicated in numerous immune-mediated diseases. “There are a lot of parallels between B-cells and T-cells in terms of how their antigen receptors are built up,” says Professor Karlsson Hedestam, with building blocks - V, D and J germline genes - that recombine in a combinatorial manner to make up all B-cell receptors (membrane-bound antibodies) and T-cell receptors. “There are hundreds of V, D and J genes, most of which are highly polymorphic and some genes are missing altogether in some individuals,” she continues. “We aim to define where the polymorphisms are and the frequency of different gene variants in the population. Alleles of a given V, D or J gene may differ from each other by as little as a single nucleotide, but this can be enough to have functional consequences.”

Marco Mandolesi. Photograph by Johanna Åkerberg Kassel.

For example, a recent study on SARS-CoV-2-specific antibody responses published by the group showed that single polymorphisms can change the way B-cell receptors interact with critical neutralizing target epitopes. Thus, each of us has a different collection of V, D and J alleles, which shapes our immune repertoires and responses. This variation may influence the level of protection our B- and T-cells provide against infections and may also predispose us to the development of certain immune-mediated diseases. “We know for example that multiple sclerosis (MS) and rheumatoid arthritis involve T-cells and B-cells, we just don’t know why some people develop these diseases. One scenario is that infections induce cross-reactive immune responses that not only target the pathogen but also attack self-tissue, and that variations in adaptive immune system genes predispose people to such mis-directed responses,” outlines Professor Karlsson Hedestam.
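The combinatorial scale of this V, D and J recombination can be sketched with a back-of-the-envelope calculation. The segment counts below are illustrative round numbers, not the project's figures:

```python
# Illustrative (hypothetical) counts of functional gene segments at a
# single receptor locus; real human counts vary between individuals,
# since some genes are missing altogether in some people.
v_segments = 50
d_segments = 25
j_segments = 6

# V-D-J pairing alone already yields thousands of distinct receptors.
vdj_combinations = v_segments * d_segments * j_segments
print(vdj_combinations)  # 7500
```

Junctional diversity at the segment joins and the pairing of two receptor chains multiply this figure by many further orders of magnitude, which is why individual differences of even a single nucleotide in one allele are buried so deep in the repertoire.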
Two new approaches, called IgDiscover and ImmuneDiscover, have been developed in the project to enable researchers to probe deeper into this area; the latter is a high throughput approach used to sequence large numbers of people very efficiently. “With ImmuneDiscover we start from DNA samples rather than expressed RNA,” explains Professor Karlsson Hedestam. “This approach allows us to generate genetic profiles of thousands of people simultaneously.” By applying this technique
to disease cohorts, the team aims to identify gene variants that make people more vulnerable. “By performing personalized genotyping of B- and T-cell receptor genes, we may be able to piece together more of the puzzle for diseases where adaptive immune responses are known to be involved, such as auto-immune conditions”, says Professor
Karlsson Hedestam. A deeper understanding of whether certain alleles are associated with auto-immune disease would enable researchers to identify high-risk groups. “These could be very important prognostic markers that could then be added to information about the patient’s HLA-type,” says Professor Karlsson Hedestam.
Analysis of B- and T-cell receptor V, D and J gene diversity in populations with different genetic ancestries
IMMUNEDIVERSITY
Defining human adaptive immune gene diversity and its impact on disease
Project Objectives
The aim of the project is to define human population diversity in the hundreds of germline V, D and J genes that encode our B- and T-cell antigen receptors, and to investigate how this diversity influences our responses to infections and vaccination or predisposes to the development of autoimmune diseases.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant agreement ID: 788016. In addition to the ERC Advanced grant, the work is funded by a Distinguished Professor grant from the Swedish Research Council.
Project Partners
For the work on rheumatoid arthritis, we are collaborating with Drs. Malmström and Padyukov at Karolinska Institutet.
Contact Details
Gunilla B. Karlsson Hedestam Professor
Department of Microbiology, Tumor and Cell Biology
Karolinska Institutet
171 77 Stockholm
Visiting address:
Biomedicum C7, C0767, Solnavägen 9
E: Gunilla.Karlsson.Hedestam@ki.se
W: https://ki.se/en/mtc/gunilla-karlssonhedestam-group
Cohort studies
The first priorities for the project were to develop the analytical methods and ensure that they were highly robust, reproducible, and precise. Once this was established, the team moved on to identify new alleles and define the frequencies at which they occur in persons with different population ancestries. This work establishes an important baseline, a picture of normal frequencies, which will provide a basis for comparison with disease cohorts. “We think that with our soon to be completed studies on human populations from our own sample collection and the 1,000 Genomes Project samples, we are starting to reach saturation point. We will have found most variants except very infrequent ones,” says Professor Karlsson Hedestam.
In the longer run, information about adaptive immune receptor genotypes could be very useful for clinicians, enabling the stratification of patients into higher and lower-risk groups, while it could also be highly valuable for pharmaceutical companies. It may be possible to specifically target genes associated with a disease for example, with the project helping to lay the foundations for further development in this regard. “There is huge potential in this work, but we must do the groundwork first. These gene regions are extremely complicated, and they cannot be completely sequenced with conventional methods,” stresses Professor Karlsson Hedestam.
Gunilla Karlsson Hedestam received her BSc from Uppsala University in 1990 and PhD from University of Oxford in 1993. After a post-doctoral fellowship at Harvard 1994-1998, she joined Karolinska Institutet where she became tenured Professor in 2012. She is a member of the Royal Swedish Academy of Sciences since 2019.
“This work is now almost complete, and we are ready to start pushing through disease cohorts,” says Professor Karlsson Hedestam. This will involve genotyping samples from patients with specific conditions alongside healthy controls. “We will focus increasingly on cohort studies in the coming years,” she continues. When an association is identified for a given disease, the team will investigate the functional impact of the polymorphism using in vitro cell culture systems and structural analysis to unravel the molecular basis for the effect.
The project’s research also holds relevance to the goal of adapting or tailoring vaccines to the genetic profile of specific populations, as vaccines can be more effective in some areas of the world than others. “If certain allelic variants of genes are needed to produce protective antibodies, and they are present at different frequencies in different populations, it could mean that a vaccine could work well in Europe but less well in Asia,” explains Professor Karlsson Hedestam. A deeper understanding of the genetic background of local populations could enable the development of more tailored and effective vaccines against disease.
Unravelling the molecular mechanisms behind tumourigenic effects
The TGF-ß cytokine is a prototype in a large family of growth regulatory factors, and it plays a number of important roles during tumour progression. Researchers are investigating the molecular mechanisms behind how TGF-ß activates its pro-tumourigenic signalling pathways, which could help uncover new targets for drug development, as Professor Carl-Henrik Heldin explains.
The Transforming Growth Factor-ß (TGF-ß) cytokine performs a variety of different roles in tumour progression. It makes tumour cells more invasive and prone to forming metastasis, while it also has other pro-tumourigenic effects. “For example, it inhibits the immune system, and promotes angiogenesis, as well as the development of cancer-associated fibroblasts,” outlines Carl-Henrik Heldin, Professor in Molecular Cell Biology at Uppsala University. As the head of an ERC-backed project based at the University, Professor Heldin is investigating the underlying mechanisms behind these pro-tumourigenic effects, which contrast sharply with its role before a tumour is established. “Initially TGF-ß is actually a tumour-suppressor, as it inhibits the growth of most normal cells and induces apoptosis of many cell types. These are tumour-suppressive effects,” he explains. “As a tumour progresses there is a kind of switch, which converts TGF-ß from being a tumour-suppressor to a tumour promoter.”

Mechanism by which TGF-ß activates Src

TGF-ß inhibitors

This means a great deal of care needs to be taken in treating tumour patients with TGF-ß inhibitors and avoiding unintended side-effects. The wider aim for Professor Heldin and his colleagues is to help find ways of selectively inhibiting the pro-tumourigenic signalling pathways in tumour cells, while leaving the tumour-suppressive pathways unperturbed. “We are trying to figure out the molecular mechanisms behind why TGF-ß activates various signalling pathways,” he says. The vast majority of cancer-related deaths are caused by the cancer spreading, so an effective method of inhibiting or slowing down that spread could have a significant impact. “A large group of patients could benefit,” stresses Professor Heldin. “For example, in most cases prostate cancer is a fairly benign disease, and it only really threatens us when it starts to invade the surrounding environment and spread through the body.”
There are TGF-ß receptors on essentially all cell types, reinforcing the wider relevance of this research. Perturbation of TGF-ß signalling is also involved in several other diseases, including certain inflammatory conditions and fibrotic disorders, as Professor Heldin explains. “TGF-ß is a very powerful stimulator of matrix proteins for instance, a process that is involved in fibrosis. This has stimulated a lot of interest, and TGF-ß inhibitors have been discussed as possible therapeutic agents in fibrosis,” he says. This work holds wider relevance to the pharmaceutical industry, and Professor Heldin’s research represents an important contribution to the goal of developing more effective treatments. “We eventually hope to provide targets for drug discovery,” he continues. “However, the whole process of drug development cannot really take place in an academic setting – we need further input and collaboration.”
Progress has been made over the course of the project in terms of selectively inhibiting
the pro-tumourigenic pathways, and Professor Heldin hopes to build on this further, with plans in the pipeline for continued investigation. “We are pursuing several lines of research into how TGF-ß activates various signalling pathways, and we hope to make more progress in future,” he says.
SelectiveTGFb-inhib
Pro-tumourigenic effects of TGF-ß: elucidation of mechanisms and development of selective inhibitors
Carl-Henrik Heldin
Professor in Molecular Cell Biology
Dept. of Medical Biochemistry and Microbiology
Box 582, Biomedical Center
Uppsala University SE-751 23 Uppsala, Sweden
T: +46-18-4714738
E: C-H.Heldin@imbim.uu.se
W: https://www.katalog.uu.se/profile/?id=N96-1274
Carl-Henrik Heldin is a Professor in the Department of Medical Biochemistry and Microbiology at Uppsala University. His main research interest is the mechanisms of signal transduction by growth regulatory factors, as well as their normal function and role in disease.
Personalised Screening Strategies: RISCC project Redefines Cervical Cancer Prevention
Cervical cancer is a significant threat to women worldwide. Despite the availability of preventive measures, the incidence of this disease remains high, claiming thousands of lives each year. In Europe alone, there were 61,000 new cases of and 25,000 deaths from cervical cancer in 2018, and cervical cancer incidence continues to rise in several Central and Eastern European regions without organised screening. Additionally, the Netherlands, Sweden, and Finland, where organised screening programs have resulted in positive outcomes in the past, have also witnessed an increased incidence in the last decade. This demonstrates the necessity for more effective screening strategies.
We spoke to Dr. Johannes Berkhof, coordinator of the RISCC (Risk-based Screening for Cervical Cancer) project.
Funded by the European Union’s Horizon 2020 Framework Programme for Research and Innovation, RISCC brings together a diverse consortium of leading researchers in the field of HPV, HPV screening, and HPV vaccination. The consortium’s primary objective is to develop and evaluate Europe’s first risk-based screening program for cervical cancer. Additionally, RISCC aims to provide open-source implementation tools and contribute to the goal of eliminating cervical cancer.
Current screening programs have a high rate of unnecessary colposcopy referrals and a varying number of screening invitations across European countries. These inefficiencies highlight the potential for significant cost savings, estimated at €100 million across Europe. To address these challenges, RISCC advocates for a paradigm shift from the current one-size-fits-all approach to an individualised risk-based screening protocol. In the new approach, the clinical management after an initial screening test will be guided by the risk of cervical cancer, defined by test results, screening history, age, vaccination status and other potential risk factors. “Why would we want to do cervical cancer screening based on individual risks? So far, most of the programs have all used a one-size-fits-all program. So, irrespective of your risk, you always get the same test at fixed intervals. But some women have a much higher risk than other women, and we would like to account for that. This would lead to less unnecessary activity in low-risk women and lower rates of anxiety. It is also cost-effective because if we invest €2 per woman, we would be able to avoid 12,500 cervical cancer deaths in Europe every year. That is a substantial reduction in the number of cervical cancer deaths,” explains Dr. Berkhof.
In order to create a risk-based screening program, it is necessary to develop risk profiles for cervical cancer:
Risk profiles in unvaccinated birth cohorts
Cervical cancer screening has traditionally been done through cytological screening with the Pap test. However, this cancer is caused by human papillomavirus (HPV) infection, and detecting the virus directly via HPV testing has shown significant improvements over the Pap test in cervical cancer screening. Data from screening cohorts in Europe and North America demonstrate that switching to HPV screening reduces the risk of precancerous conditions by approximately 70% after a negative screen.
Cervical cancer screening programs have been implemented globally but often follow a “one-size-fits-all” approach. This leads to suboptimal protection, suboptimal resource allocation, and harms. We spoke to Dr. Johannes Berkhof, the Coordinator of the RISCC project, which aims to develop risk-based screening using screening history, HPV vaccination status, and other relevant risk factors.
The RISCC consortium meeting held in Autumn 2022 in Amsterdam.
By analyzing data from previous screening rounds, the researchers have observed that the risk of cervical cancer drops after multiple negative test results compared to just one negative HPV test. “Various studies show that the risk of pre-cancer after a negative HPV test is about seven times higher if it is preceded by a positive HPV test in the previous round. This illustrates that taking into account the result of the HPV test in the previous round can optimize your screening program,” explains Dr. Berkhof.
Data on screening results is collected from seven different HPV screening trials conducted across Europe, including over 150,000 women from the previous CoheaHr project, as well as data from screening programs. Some of these trials compare cytology-based screening to HPV-based screening, and they provide the project with new follow-up data spanning up to 20 years after enrollment. Additionally, data is utilised from HPV self-sampling trials that compare HPV testing on a sample taken at the clinic by a healthcare professional to HPV testing on a sample taken at home using a self-sampling kit. These self-sampling trials have a follow-up period of approximately ten years. Furthermore, the samples from the self-sampling trials will be used to evaluate the use of molecular tests, specifically DNA methylation, to identify (pre)cancer in HPV-positive women. Currently, HPV-positive women need to visit the healthcare professional for a Pap smear. The use of molecular markers, such as DNA methylation, provides an appealing method to stratify HPV-positive women, as it can be directly performed on the self-collected specimen.
Risk profiles for HPV-vaccinated birth cohorts
Risk profiles are also developed for vaccinated birth cohorts. “Vaccination is an important risk factor. We are collecting data from a randomised trial conducted in Finland that involves 80,000 women with known vaccination status who were randomly assigned to either frequent or infrequent HPV screening. We also use observational data from countries where the individual HPV vaccination status is linked to screening outcomes. This allows us to examine the direct and indirect effects of vaccination on screening,” says Dr. Berkhof.
Influence of other risk factors in risk profiles
There are other risk factors such as poverty, smoking, use of oral contraceptives, and sexual behaviour that may influence the risk of getting an HPV infection or developing (pre)cancer. The researchers will summarize the evidence on the potential significance of these risk factors for a cervical screening program.
Cost-effectiveness
“After we have collected all that evidence, we will use all these risk estimates to develop a model for Europe, that enables us to assess the health benefits of different risk-based screening strategies and weigh them against the cost of implementing these strategies. By tailoring the screening approaches to local European settings, we can identify cost-effective risk-based screening strategies that suit local settings and optimize resource allocation,” explains Dr. Berkhof.
RISCC
Risk-based Screening for Cervical Cancer
Project Objectives
RISCC aims to introduce a risk-based screening approach to improve the effectiveness and efficiency of cervical cancer screening. By creating risk profiles using data from large trials and screening programs in the general population, the project will develop a more targeted and personalised screening strategy. The goal is to optimize screening algorithms and improve the identification of high-risk individuals for early intervention. This will ultimately contribute to the elimination of cervical cancer as a public health problem.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 847845.
Project Partners
https://www.riscc-h2020.eu/about-riscc/partners/
Contact Details
RISCC daily management:
Professor Johannes Berkhof, Project Coordinator
Dr. Laurian Jongejan, Program Manager
Pilot implementation trial
Digitalization facilitates implementation of risk-based screening by incorporating patient-specific risk factors and providing personalised recommendations. The project will implement a risk-based screening strategy in a pilot study conducted in Sweden. This strategy will make use of a digital invitation platform, allowing individuals to enroll for risk-based screening and receive invitations through a linked app connected to the screening management database. The pilot trial in Sweden will assess the impact of the digital invitational system on attendance, acceptability, and cost savings in risk-based screening.
Training and dissemination
As part of the tools to facilitate implementation, the consortium is developing an open-access e-learning course on the e-oncologia platform (https://www.e-oncologia.org) covering HPV-related content and cervical cancer management. It will teach healthcare professionals and screening managers how to implement HPV risk-based screening. The project’s results are also disseminated via social media and newsletters with HPV-related content, and through ESGO-ENGAGe’s involvement with gynaecological cancer survivors, with the aim of increasing awareness of HPV, and of the need to improve current screening practices, among researchers and the general population.
Amsterdam University Medical Centers, location VUmc, Amsterdam, The Netherlands
E: riscc@amsterdamumc.nl
@RISCC_H2020
W: https://www.riscc-h2020.eu/
Prof. Berkhof is head of the Department of Epidemiology and Data Science in Amsterdam University Medical Center. He has over 20 years’ experience in statistical and modelling research with over 200 articles on HPV, methodology, cost-effectiveness, screening, public health and statistics.
Dr. Robles is an epidemiologist at the Cancer Epidemiology Research Program within the Catalan Institute of Oncology. She has over 10 years’ experience in cancer epidemiology and prevention, mainly focused on cervical cancer over the last years.
“Our aim is to help countries and policymakers to develop their HPV screening programs that incorporate various aspects, all rooted in the concept of risk-based screening.”
Prof. Johannes Berkhof
Dr. Claudia Robles
A better approach to follow-up care
We spoke to Dr. Maria Torrente, who is the Coordinator of the project CLARIFY. CLARIFY aims to identify personalized risk factors that influence the patient’s outcome at the end of their oncological treatment. The CLARIFY Platform has been developed to allow healthcare professionals to understand, work with, and make decisions based on real data analysis from patients.
Diagnosis and treatment of cancer have seen great advancements in recent decades. In the US and Europe, more than half of adult cancer patients now live at least 5 years after diagnosis. This has created a new challenge for healthcare providers: ensuring the long-term quality of life and well-being of cancer survivors following their oncological treatment. The EU-funded CLARIFY project is addressing this challenge. CLARIFY focuses on collecting and analyzing clinical, genomic, and behavioral data from survivors of three specific types of cancer: breast, lung, and lymphoma. Using the power of big data and artificial intelligence techniques, CLARIFY integrates these data with relevant publicly available biomedical information and data gathered from wearable devices used by patients post-treatment. By analyzing this abundance of information, the project aims to predict the patient-specific risks of developing secondary effects and toxicities resulting from cancer treatments, and to identify patients who are at the highest risk of psychosocial dysfunction. The goal of the CLARIFY project is to transform cancer survivors’ care by providing enhanced, personalized treatment options. The project seeks to empower healthcare professionals with valuable insights into individual patient risks. This knowledge will enable them to deliver better-tailored care, leading to improved quality of life and overall well-being for cancer survivors.
In recent years, the number of cancer survivors has notably increased thanks to remarkable advancements in cancer diagnosis and treatment. However, the challenge of ensuring a high quality of life for long-term cancer survivors in the post-treatment phase persists. One of the issues that clinicians face is the lack of adequate follow-up models. The current follow-up models have not progressed at the same pace as the advancements in treatment effectiveness. Clinicians have to rely on expert consensus rather than evidence-based guidelines. The primary focus in the current healthcare model is placed on the management of treatment toxicities, rather than the long-term secondary effects after treatment. However, cancer survivors face various unmet physical, functional, and psychosocial needs that impact their overall quality of life and survival outcomes. Secondary effects include late toxicity, secondary tumors, psychological dysfunction (e.g., anxiety and depression), and physical morbidity. Social problems that are often encountered include adaptation to life after cancer, unemployment, inequities by sex, lifestyle and prevention issues, and toxic habits. All of these factors have an impact on the disease process. Therefore, it is necessary to take mental health, quality of life, and activity data into account when the goal is improving the patient’s well-being and the disease outcome.
CLARIFY Digital Decision Support Platform
CLARIFY aims to improve the well-being and quality of life of cancer survivors through early identification and discovery of risk factors that may deteriorate a cancer survivor’s condition after the end of treatment. The CLARIFY Platform stratifies survivors based on risk, allowing clinicians to design personalized follow-up and supportive care protocols.
This is achieved through collecting and analyzing anonymized clinical and genomic data, data from wearable devices (circadian rhythm, physical activity), and electronic quality-of-life surveys.
This information is integrated with existing biomedical knowledge from public repositories and biomedical literature. To analyze the vast and diverse data, CLARIFY employs state-of-the-art predictive models and biomedical knowledge graphs. The findings are then incorporated into the CLARIFY Digital Decision Support Platform (DDSP), a clinical decision support system. “With the CLARIFY Platform we can analyze our patients’ data in real-time,” says Dr. Maria Torrente. The CLARIFY Platform has the ability to extract and present information from various data sources in a clear and organized manner. It ensures that the information is easily readable and structured. The clinician can choose to analyze the entire patient population or focus on an individual patient. Through the Platform, the clinician can also obtain the patient’s profiling and risk stratification, which will identify factors
for poor prognosis. The goal is to improve patient care efficiency and personalize the approach, supporting the complex clinical decision-making process with scientific evidence. “The CLARIFY Platform is a tool to be used by clinicians and that was our main challenge: to have two different communities, clinicians and knowledge discovery researchers, talk and work together. That’s our hardest task,” states Prof. Pedro Sousa.
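As a rough illustration of the kind of risk stratification the Platform performs, a minimal sketch follows. Every feature name, weight and cut-off here is hypothetical; CLARIFY's actual models are learned from real clinical, genomic and wearable data rather than hand-set rules.

```python
# Illustrative sketch only: stratifying survivors into follow-up risk groups
# from heterogeneous inputs (clinical flags, quality-of-life surveys,
# wearable-derived signals). All features, weights and thresholds are
# invented for this example and are not CLARIFY's real model.

def stratify(patient: dict) -> str:
    """Return a coarse follow-up risk group for one survivor record."""
    score = 0.0
    score += 2.0 if patient.get("late_toxicity") else 0.0
    score += 1.5 if patient.get("anxiety_flag") else 0.0          # from QoL surveys
    score += 1.0 if patient.get("low_activity") else 0.0          # from wearables
    score += 1.0 if patient.get("disrupted_circadian") else 0.0   # from wearables
    if score >= 3.0:
        return "high"      # intensive, personalised follow-up
    if score >= 1.5:
        return "medium"
    return "low"           # standard follow-up

record = {"late_toxicity": True, "anxiety_flag": True, "low_activity": False}
print(stratify(record))  # high
```

The design point the sketch captures is that signals from very different sources (toxicity history, mental health, activity data) are reduced to a single stratum that drives the follow-up protocol, which is what lets a clinician act on the combined picture rather than on each data stream separately.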
The CLARIFY Digital Decision Support Platform and the improved semi-automated annotation for analysis of circadian rhythms have been selected as Key Innovations by the European Commission Innovation Radar. The Platform has been showcased at various events to healthcare authorities and the pharmaceutical industry to discuss the opportunities it can offer to the public healthcare system and its potential to support oncological patients.
CLARIFY collaboration with patient associations
Patient education programs have demonstrated that enhancing patients’ understanding of their health issues positively affects patient health outcomes by improving compliance with treatments, alleviating anxiety, and enabling patients and their families to be included actively in their care. In the CLARIFY research project, the Spanish Cancer Patients Association (GEPAC) and the Hospital Universitario Puerta de Hierro-Majadahonda (HUPHM) offer a Cancer Patients Education Program that empowers and supports patients in their own care. The contents of the program are based on CLARIFY’s results, HUPHM’s patient care expertise, and GEPAC’s experience in helping patients. The aim is to help patients and their families navigate through the challenges of a cancer diagnosis and treatment, and easily transition into survivorship. The sessions provide participants with valuable skills, access to validated resources, and opportunities to connect with others. The program includes monthly sessions focused on supporting patients’ health and well-being as they transition from clinical treatment to recovery and wellness, covering topics such as physical exercise, nutrition, employment protection & legal resources, and emotional well-being. In this framework, several sexuality workshops have also been organized to address an important issue that directly affects the patient’s quality of life but still remains a taboo subject among cancer patients: alterations in sexual life. CLARIFY researchers conducted a study of alterations in sexual function in long-term cancer survivors based on quality-of-life questionnaires. The results showed a high frequency of altered sexual function in surviving patients, and highly variable results in the degree of sexual satisfaction and the frequency of sexual activity, depending on the patient’s sex and type of neoplasm. Up to 36% of breast cancer patients reported alterations in their sexual life, 15% of lung cancer patients, and 10% of patients suffering from lymphoma. “The evaluation of these alterations has helped us to incorporate them into clinical practice and develop both personalized and group-level interventions responding to the patient’s needs,” explains Dr. Torrente.
CLARIFY is part of the “Cancer Survivorship – AI for Well-being” Cluster, which consists of 11 EU-funded projects working in artificial intelligence for healthcare and well-being with the aim of transcending the individual project experiences. The activities organized, which encompass several Meetings of Minds, roundtable discussions, podcasts, a better-practices guide, a common data space, and a White Paper, have proven to be a wonderful initiative to generate, activate and communicate knowledge and innovation in the areas of mental health, well-being, cancer recovery, patient support, and participatory research.
Consortia members
The innovative Platform has been developed by combining the expertise of partners from 5 different European countries. The consortium is composed of 12 partners: Hospital Puerta de Hierro-Servicio Madrileño de Salud (Spain); Universidad Politécnica de Madrid (Spain); Technische Informationsbibliothek (TIB) (Germany); Holos Soluções Avançadas em Tecnologias de Informação (Portugal); National University of Ireland (Ireland); University College London (United Kingdom); University College Dublin (Ireland); Accenture Global Solutions (Ireland); Grupo Oncológico para el Tratamiento de las Enfermedades Linfoides (GOTEL) (Spain); Stelar Security Technology (Germany); Grupo Español de Investigación en Cáncer de Pulmón (GECP) (Spain); Kronohealth (Spain).
CLARIFY
Cancer Long Survivor Artificial Intelligence Follow-up
Project Objectives
CLARIFY proposes to integrate and apply AI to facilitate the early discovery of risk factors that may deteriorate a cancer patient’s condition after the end of treatment.
CLARIFY will analyse patients’ clinical, genomic and behavioural data, together with existing open data, in order to determine a follow-up adapted to the individual needs of each group of patients. The development of a clinical decision support platform will allow clinicians to stratify patients by risk, design a patient-specific follow-up and supportive care protocol, and prevent the secondary effects and toxicities which affect long-term cancer survivors’ wellbeing and quality of life.
Project Funding
The CLARIFY project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement number 875160.
Project Partners
https://www.clarify2020.eu/about_us/
Contact Details
Érika Cerrolaza
Project Manager
Medical Oncology Department
Hospital Universitario Puerta de Hierro
E: pm-clarify@idiphim.org
W: https://www.clarify2020.eu
Dr. Mariano Provencio is Principal Investigator of the CLARIFY project. He is the Chief of the Medical Oncology Department at the Hospital Universitario Puerta de Hierro and a Full Professor of Medical Oncology at the Faculty of Medicine of the Autonomous University of Madrid. He is also the President of the Spanish Lung Cancer Group and the Spanish Lymphoma Oncology Group and an Academic Fellow of the Royal National Academy of Medicine. He has an extensive research record with more than 200 studies focusing mainly on lung cancer and lymphoma.
“With the CLARIFY Platform we can analyze our patients’ data in real-time”
A deeper picture of vaccine effectiveness
Researchers in the FuncMAB project are developing high-throughput methods to analyse the functionality of individual antibodies, helping to build a deeper and more resolved picture of protection at the single-antibody level, and seeking answers to why people respond differently to vaccination, as Professor Klaus Eyer explains.
The method by which the effectiveness of vaccines is assessed has remained largely unchanged for over a century. Essentially, a blood sample is taken from an individual following vaccination, and the concentration of antibodies is measured. “We check whether an individual has antibodies or not on the cellular level,” explains Klaus Eyer, a Professor in the Department of Chemistry and Applied Biosciences at ETH Zurich. As the Principal Investigator of the EU-funded FuncMAB project, Professor Eyer is now looking at this from a different angle. “We know that these antibodies are produced by cells, and that a healthy human being has around 5,000 different antibodies against various pathogens in their bloodstream. We want to analyse every antibody, by itself, for its functionality, its capacity to protect,” he outlines. “It helps here that every cell basically produces – at a given time point – only one antibody. So if we analyse individual cells, we can analyse individual antibodies, and then try to reassemble our functional analysis into a global measurement of protection and its duration.”
Vaccine protection and functional antibodies
This enhanced resolution on the level of individual antibodies could lead to the identification of signatures that allow researchers to better determine who is protected by a vaccine, and to understand why certain individuals are not protected after vaccination. While a vaccine that stimulates the production of a large quantity of antibodies is often highly effective, this is not invariably the case. “We know some vaccines that have been shown to be effective don’t actually stimulate the production of a lot of antibodies. There are also examples of vaccinations that didn’t protect, despite leading to quite a high antibody response,” explains Professor Eyer. Within the project, Professor Eyer and his colleagues are working to build a deeper picture of the antibodies produced following vaccination and their effectiveness, through research conducted largely in mice for the moment in order to develop suitable assays and understand the basic mechanisms of antibodies. “It’s not enough simply to have antibodies, they have to do something,” he stresses. “If you are infected by a virus, you need antibodies to neutralise it. For a bacterial infection, you need to eliminate
it by activating the innate immune system using antibodies that are able to do so.”
The primary aim in the project is to develop high-throughput assays that allow researchers to measure the functionality of single antibodies in this respect. The project is not about developing a vaccine against a specific disease, but rather building a deeper understanding of how the protection provided by vaccines against different diseases can be measured. “For example, for viruses, we can measure how many neutralising antibodies you generate, in relation to all the antibodies as a whole. In a further step, we can then work on increasing this proportion. For example, a simple but complicated question: how can you induce more neutralising antibodies for a longer time? It all starts with being able to precisely measure the antibodies,” says Professor Eyer. “We’ve also worked on a bacterium that can infect the brain and cause an inflammatory reaction. We know that we need to have antibodies that activate the complement system against this bacterium. We can look to measure how many antibodies an individual has. Is there a way to increase the amount of antibodies to better protect individuals?”
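The “neutralising antibodies in relation to all the antibodies as a whole” that Professor Eyer describes is, at its core, simple bookkeeping over single-cell measurements. A minimal sketch, with entirely made-up data:

```python
# Illustrative sketch only: computing the neutralising fraction of a measured
# antibody repertoire. Each entry stands for one antibody-secreting cell
# assayed in its own droplet; the records below are invented example data.

cells = [
    {"id": 1, "neutralising": True},
    {"id": 2, "neutralising": False},
    {"id": 3, "neutralising": True},
    {"id": 4, "neutralising": False},
    {"id": 5, "neutralising": False},
]

# Fraction of single-cell antibodies that pass the functional (neutralisation) assay.
neutralising_fraction = sum(c["neutralising"] for c in cells) / len(cells)
print(f"{neutralising_fraction:.0%} of measured antibodies neutralise")  # 40%
```

The interesting quantity is precisely this ratio rather than the raw antibody count, which is why a high overall antibody titre can still coexist with poor protection.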
FuncMAB project
Researchers in the FuncMAB project are developing microfluidic technologies and specific assays, hoping to open up new insights into these kinds of questions. This work involves taking individual, antibody-producing B-cells, which are then encapsulated in microdroplets. “We use a microfluidic technique to create small containers around 50 picolitres [one picolitre is 10⁻¹² of a litre] in size, and whatever antibodies the cell produces will stay within the small container. Within the containers, we have systems in place allowing us to measure the concentration and functionality of the antibodies created,” explains Professor Eyer. By separating individual cells and keeping them in a controlled environment, researchers can assess the quantity and quality of a specific antibody produced by a single cell at a given time. “If you have two cells that produce antibodies, then you’re going to measure a mixture of the two, so it’s important for us in the project to keep them separate,” says Professor Eyer.
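A rough back-of-the-envelope calculation shows why such a tiny container matters: antibodies secreted into ~50 picolitres accumulate to measurable concentrations quickly. The secretion rates used below (1-2 versus up to ~6,000 molecules per second) are figures Professor Eyer quotes elsewhere in the article; the one-hour incubation time is an assumed value for illustration.

```python
# Illustrative arithmetic only: antibody concentration reached inside one
# 50-picolitre droplet. Secretion rates are from the article; the incubation
# time is an assumption made for this sketch.

AVOGADRO = 6.022e23         # molecules per mole
DROPLET_VOLUME_L = 50e-12   # 50 picolitres, expressed in litres

def concentration_molar(rate_per_s: float, seconds: float) -> float:
    """Concentration (mol/L) after a cell secretes into one droplet."""
    molecules = rate_per_s * seconds
    return molecules / (AVOGADRO * DROPLET_VOLUME_L)

ONE_HOUR = 3600
print(f"slow cell: {concentration_molar(1, ONE_HOUR):.1e} M")     # 1.2e-10 M
print(f"fast cell: {concentration_molar(5000, ONE_HOUR):.1e} M")  # 6.0e-7 M
```

In a test tube the same hour of secretion would be diluted into millilitres, many orders of magnitude more volume, which is why confining each cell to its own droplet is what makes single-antibody measurement feasible at all.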
This approach is now being used in the project to investigate the various different antibodies that can be produced in the body. Each of the estimated 5,000 or so antibodies in a healthy individual is comprised of different sequences of amino acids, and the nature of that sequence determines what antigen they recognise. “An individual will have a certain set of antibodies against tetanus, let’s say, a certain set of antibodies against diphtheria, and so on. For every antigen you encounter, you produce a repertoire of antibodies,” says Professor Eyer. There are around 100 different sequences of antibodies in the body capable of binding to a specific antigen, which adds up to a highly complex overall picture. “An individual
can generate around 10¹² different sequences, or even more, so there’s a huge space that your adaptive immune system can explore,” continues Professor Eyer. “Typically certain solutions are more likely to ‘win’ than others.” The team therefore focuses its analysis on binding strength, specificity and functionality – parameters that cannot (yet) be easily linked to the amino acid sequence.
Researchers are both measuring the quantity of antibodies produced by a cell and also analysing the quality. There is a high degree of variability in the amount of antibodies that different cells may produce within a given timeframe. “Some cells may produce 1-2 antibody molecules a second, while there are cells that can produce between 5,000-6,000 in a second,” outlines Professor Eyer. In terms of quality, Professor Eyer and his colleagues consider how well an antibody binds to the health threat, whether it is a virus, bacteria or toxin. “How well does the antibody bind to, say, Epstein-Barr virus? Does it only bind to Epstein-Barr? Does it neutralize? What you want is a specific response that produces the best possible binding and functionality against the antigen that was used for vaccination,” he says. “We look at both the strength of the binding and also the concentration of the antibodies. If you have a lot of weakly-binding antibodies, they can still be very efficient in terms of helping to eliminate an antigen.”

Lastly, the quantity and quality of antibodies are not stable over an immune response. The immune response is highly dynamic, and there are many different factors to take into account when following these functional repertoires of antibodies over time. Professor Eyer and his colleagues change variables like the vaccine composition or the day of analysis, then assess the condition of the cells, making use of the two points of flexibility of the murine model system. “In individuals undergoing a second immunisation, we see a lot of reactivation of previously formed cells early in the immune response. These cells then go back into the secondary immune response,” he says. Building on this work, researchers can then make predictions on how these cells will function in future. “We can see where the cell is, and by following its trajectory, we can identify which cells are most likely to survive and which will probably die in the next 2-3 weeks. We can effectively monitor their health, and also therefore aim to predict the duration of protection,” continues Professor Eyer.
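Professor Eyer’s remark that many weakly-binding antibodies can still be efficient can be made concrete with a simple equilibrium-binding sketch, where the bound fraction of antigen is θ = C/(C + K_d). The concentrations and dissociation constants below are illustrative assumptions, not FuncMab measurements:

```python
# Fraction of antigen bound at equilibrium: theta = C / (C + Kd).
# Shows how a high concentration of a weak binder can occupy as much
# antigen as a scarce, tight binder. Values are illustrative assumptions.
def bound_fraction(conc_nM: float, kd_nM: float) -> float:
    """Equilibrium fraction of antigen occupied by antibody."""
    return conc_nM / (conc_nM + kd_nM)

strong_scarce = bound_fraction(conc_nM=1.0, kd_nM=0.1)     # tight binder, low titre
weak_abundant = bound_fraction(conc_nM=500.0, kd_nM=50.0)  # weak binder, high titre

print(f"strong but scarce:  {strong_scarce:.2f}")
print(f"weak but abundant:  {weak_abundant:.2f}")
```

Both cases occupy about 91% of the antigen, which is the sense in which quantity can compensate for binding strength.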
Vaccine variability
This research represents an important contribution to the wider goal of understanding why vaccines work effectively in some people and not others, which is a step towards improving how efficient functional antibody responses are generated, and what defines the duration of protection for individuals who don’t respond to current vaccines. While vaccines are an important part of health protection, a certain proportion of the population do not respond effectively. “With hepatitis B for example, we see that around 10 percent of people who get vaccinated are not protected from the vaccination. We want to understand how we can use knowledge gained from these assays, this way of quantifying functionality, to design new vaccines for people who do not respond to a standard vaccine,” outlines Professor Eyer.

We know that antibodies are produced by cells, and that a healthy human being has around 5,000 different antibodies against various diseases in their bloodstream. We want to analyse every antibody, by itself, for its functionality.
The hope is to eventually move towards a more targeted approach to vaccination, rather than administering the same vaccine to everybody. “For example, individuals with a certain risk profile might benefit from assembly A, while others might benefit from assembly B,” explains Professor Eyer. “This could help us to reduce severe side effects even further, and induce optimal responses for each individual.”
The project’s findings are unlikely to be immediately translated into clinical development, as this research is still at a fairly early stage. However, Professor Eyer has co-founded a spin-off company called Saber Bio SAS building on some of the project’s work, and there is wider interest in using these single-cell assays. “We have also started to work together with clinicians in autoimmune and inflammatory diseases,” he says. These areas might profit from novel technologies that allow the quality of immune responses to be quantified, albeit here in the context of unwanted responses. In terms of projects beyond FuncMab, the group works on developing new assays with the ultimate goal of improving the diagnosis and personalised treatment of autoimmune and inflammatory diseases. “We hope to finish our experiments in early 2024, and then we can look to build upon our findings,” he continues. “Open Science is important to us, so that other researchers can use our assays, either by themselves, through clinical cooperations, or via Saber Bio. This includes not only looking at antibody-producing cells, but a lot of other cells and functionalities as well, which we hope will have a positive impact on the lives of patients and their families.”
Fluorescent image of droplets. The fluorescent line indicates the presence of a functionally active cell, in this case an antibody-secreting cell, in the droplets.
FunCMAB
High-throughput single-cell phenotypic analysis of functional antibody repertoires
Project Objectives
Different threats (viruses, bacteria, toxins) require distinct functional repertoires for effective immune responses. We propose monitoring and quantifying these functions at the monoclonal antibody level to assess vaccine candidates and tailor vaccines for optimal responses. The project aims to develop new analytical approaches for vaccine development and for understanding vaccine-induced protection.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant agreement ID: 803363.
Project Partners
• Olivia Tamara Mariann Bucheli
• Kevin Portmann
• Ingibjörg Sigvaldadottir
• Nathan Aymerich
• Alessandro Streuli
• Dr. Magda Rybczynska
Contact Details
Brightfield image of an individual cell in a 50 pL droplet containing magnetic nanoparticles that form a beadline. This assembly can then be used to measure antibody secretion efficiently (see above).
Project Coordinator, Dr Klaus Eyer
Functional Immune Repertoire Analysis
HCI H433; Vladimir-Prelog-Weg 1-5/10
8093 Zürich, Schweiz
T: +41 44 633 74 57
E: klaus.eyer@pharma.ethz.ch
W: https://eyergroup.ethz.ch/news-andevents.html
Professor Klaus Eyer is an assistant professor in the Department of Chemistry and Applied Biosciences at ETH Zurich where he leads a group developing and applying novel analytical strategies for the direct, quantitative, and deep-phenotypic dynamic analysis of individual cellular functions. He holds a Doctorate in BioAnalytical Analysis.
Revolutionising Preterm Infant Care
The Perinatal Life Support (PLS) project envisions a groundbreaking solution for extremely premature infants. The innovative system replicates the womb’s protective environment, incorporating an artificial placenta, computational models, and fetal monitoring mechanisms. Pioneered by an interdisciplinary consortium, it brings together cutting-edge technology and medical expertise for improved neonatal outcomes.
Preterm birth is a leading cause of perinatal and neonatal mortality, posing lifelong morbidity risks for infants. The most vulnerable are extremely preterm infants, who are born before 28 weeks of gestation. In transitioning from the protected fetal environment to the harsh realities of neonatal life, they encounter tremendous challenges due to their underdeveloped organs and face the potential of lifelong disabilities, spanning cardiovascular, neurological, breathing, and metabolic problems. Some of the potential complications that occur during this critical phase include heat loss, respiratory challenges, circulatory disruptions, nutrition deficiencies, susceptibility to infections, and invasive procedures. The current approach to caring for these newborns involves the initiation of organ functions; however, their lungs and gut are not yet well-suited for such interventions.
The Perinatal Life Support (PLS) consortium aims to revolutionize the care provided to extremely premature infants. The primary goal is to create an environment for premature babies that closely resembles the protective and nurturing conditions of the maternal womb - an innovative system of care that recreates ex vivo innate fetal cardiorespiratory physiological conditions. At the heart of this groundbreaking system lies an artificial placenta, enabling the exchange of oxygen and nutrients crucial for the baby’s development. Continuous noninvasive monitoring of foetal parameters and the use of a foetal manikin for simulation purposes further enhance the capabilities and application of the system. To guide and optimize their efforts, the team harnesses the power of computational modeling, bringing together an interdisciplinary ensemble of academia and industry partners.
“Alan Flake, a scientist in Philadelphia managed to let a lamb survive in a liquid-based environment in which the lamb is not ventilated with air through the mouth and lungs, but oxygenation and nutrition is performed via the umbilical cord. The idea is that we could replace the incubator as it is used now with air-based ventilation, by an incubator that works on this
principle. That means you have to make an incubator in which the preterm is not breathing air, so is still in a liquid environment,” explains Prof. van de Vosse. “So, what methods do we need to use to go from an idea to an actual properly working, safe, and accepted product? How can we go from science to engineering? In our vision, modeling the system - making mathematical, physiological, and experimental models - is the key ingredient, because we need a first-time-right solution. We have to verify the system in all aspects as much as possible without doing experiments in the hospital and without unneeded experimentation on animals. What we need is a digital twin of the entire perinatal life support system, including the perinate,” he continues.
The liquid-based environment
The first step is to create a liquid-filled environment. Creating a liquid-based prototype is explored via two approaches. An (artificial) amniotic fluid-filled incubator involves creating an environment that resembles the fluid surrounding a developing fetus. The infant, represented by a manikin, will be placed in this incubator, providing a nurturing space
that supports lung development. The other approach is the non-submerged alternative. The manikin’s lungs are kept fluid-filled while the rest of the body remains outside the liquid. This method aims to create optimal conditions for lung development while reducing the complexity of the environment.
The fetal manikin and transfer devices
The fetal manikin used in the PLS project accurately replicates the physiology of extremely premature infants. The manikin is created using 3D-printed tissue- and organ-mimicking materials that are based on high-resolution MRI images of 24-week gestational age fetuses. The manikin simulates cardiorespiratory physiology, specifically the heart and vasculature. It is equipped with actuators that generate physiological signals and with sensors that measure them.
The extracorporeal artificial placenta
The advanced artificial placenta should be able to support the vital functions of
extremely premature infants and provide effective oxygenation and waste removal.
A membrane oxygenator replaces the lung function of the placenta. The design considers the cardiorespiratory function and capacity of the fetus, ensuring compatibility with the liquid system and fetal manikin. A heat exchanger may be included to maintain body temperature.
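The physics a membrane oxygenator has to satisfy can be sketched with Fick’s first law of diffusion: transfer rate = permeability × area × concentration difference / membrane thickness. All numbers below are assumed, illustrative values, not PLS design figures:

```python
# Minimal Fick's-law sketch of gas transfer across an oxygenator membrane.
# All parameter values are illustrative assumptions, not PLS design figures.
def o2_transfer_rate(permeability, area_m2, delta_c, thickness_m):
    """Oxygen transfer rate (mol/s) across a membrane, per Fick's first law."""
    return permeability * area_m2 * delta_c / thickness_m

rate = o2_transfer_rate(
    permeability=1e-9,   # effective O2 permeability, m^2/s (assumed)
    area_m2=0.1,         # membrane surface area, m^2 (assumed)
    delta_c=0.13,        # O2 concentration difference, mol/m^3 (assumed)
    thickness_m=50e-6,   # membrane thickness, m (assumed)
)
print(f"O2 transfer: {rate:.2e} mol/s")
```

The same relation shows the design levers available: a thinner membrane or larger surface area raises the transfer rate in direct proportion.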
Computational models and monitoring system
Predictive models are key in this project. They are able to predict the behavior of a 24-28 week old perinate and, because they track the perinate’s state, they can support medical decisions. The team also utilizes models that can predict the impact of the incubator. Gathering data about the perinate’s state is done through monitoring systems - sensory mechanisms that monitor fetal parameters. These include the creation of electrodes to measure fetal core temperatures, near-infrared spectroscopy (NIRS) and diffuse correlation spectroscopy (DCS) to assess oxygenation and flow in various tissues, and digital cameras that measure signals like motion and skin color. Measurement of clinical parameters such as fetal heart rate, blood pressure, and oxygen saturation is used in order to validate the effectiveness of the developed sensory and feedback mechanisms. These parameters are fed into computational models. This validation process ensures that the system accurately captures and interprets the vital signs of the fetus, enabling precise monitoring and reliable decision support.

Ensuring the bond between the mother and the baby is one of the challenges that the team plans to overcome. “One of our ideas is making sure there is interaction between the mother and the baby, either by allowing the mother to look at the baby or by using sound. The sound in the mother can be recorded and transferred to the PLS system so the baby can hear the same sounds it would hear if it were in the mother’s womb. We can also record the movement of the baby and transfer that to a band that is placed on the mother’s stomach. That would enable the mother to feel what is going on in the PLS system,” explains van de Vosse.

The PLS project is coordinated by the Eindhoven University of Technology in close collaboration with the Maxima Medical Center. The fetal manikins are made at the industrial engineering department at Eindhoven. The interdisciplinary consortium includes the RWTH Aachen University responsible for the development of the liquid-based environment and artificial placenta, the Politecnico di Milano and Nemo Healthcare who are involved in the monitoring systems and technical validation of the project, and the LifeTec Group which is also working on the development of the artificial placenta. The decision support system is developed at Eindhoven, together with Nemo Healthcare and Politecnico di Milano. This interdisciplinary consortium comprises world-leading specialists in obstetrics, neonatology, industrial design, mathematical modeling, ex vivo organ support, and noninvasive fetal monitoring. By combining their expertise, the PLS partners are paving the way for significant advancements in perinatal care.

PLS
Perinatal Life Support
Project Objectives
The Perinatal Life Support (PLS) consortium aims to revolutionize care provided to extremely premature infants. An innovative system that closely replicates the protective conditions of the mother’s womb will improve the outcomes for the most vulnerable babies. Its components include a liquid-filled environment mimicking the amniotic chamber, an artificial placenta for oxygenation and waste removal, physiological fetal manikins, advanced computational models for predictive analysis, and sensory mechanisms for monitoring fetal parameters.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 863087.
Project Partners
Together, the PLS partners provide joint medical, engineering, and mathematical expertise to develop and validate the Perinatal Life Support system using breakthrough simulation technologies. The interdisciplinary consortium will push the development of these technologies forward and combine them to establish the first ex vivo fetal maturation system for clinical use. This project, coordinated by the Eindhoven University of Technology, brings together world-leading experts in obstetrics, neonatology, industrial design, mathematical modelling, ex vivo organ support, and non-invasive fetal monitoring. https://perinatallifesupport.eu/consortium-partners/
Contact Details
Prof.dr.ir. Frans N. van de Vosse
Department of Biomedical Engineering, Eindhoven University of Technology
Building Gemini-Zuid 4.131
T: +31 40 247 4218
E: f.n.v.d.vosse@tue.nl
W: https://perinatallifesupport.eu/project/
Frans van de Vosse is a professor of Cardiovascular Biomechanics at the Department of Biomedical Engineering at Eindhoven University of Technology. His research is related to computational and experimental biomechanical analysis of the cardiovascular system and its application to clinical diagnosis and intervention, cardiovascular prostheses, extra corporeal systems and medical devices. He is the author or co-author in more than 240 scientific publications.
“What we need is a digital twin of the entire perinatal life support system, including the perinate.” Frans van de Vosse

Research prototypes made by the dept. of Industrial Design (Eindhoven University of Technology) being exhibited to the general public at Dutch Design Week 2022.
SHARE: a window into the ageing process
More and more of us are living for longer, raising important questions about how people can maintain a good quality of life into their old age. The SHARE infrastructure brings together health and socio-economic data on people over the age of 50 across 27 European countries and Israel, providing valuable insights into the ageing process, as Professor David Richter explains.
The European population is ageing rapidly, raising important questions about how to fund pensions, healthcare, and long-term care systems and help individuals maintain a good quality of life into their old age. The Survey of Health, Ageing and Retirement in Europe (SHARE) is a research infrastructure that provides rich data for valuable insights into the health and socioeconomic circumstances of people over 50 across Europe. The international coordination is based at the SHARE Berlin Institute, located in Berlin, Germany. “In SHARE, interviewers in 28 countries go to randomly selected households drawn from public databases, mostly from official person registers covering the population of interest. The SHARE data are based on full probability samples, providing internationally comparable and representative data,” outlines Professor David Richter, Director of the SHARE Infrastructure. The interviewers ask a standardised set of questions and gather data on individual health. “We have devices to measure grip strength or accelerometry and we do some wave-specific medical tests, such as asking people to get up off a chair then sit down again five times without using their arms,” says Professor Richter. “We also have a battery of cognitive performance and functional performance measures, for instance a word recall question.”
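The “full probability sample” Professor Richter describes simply means that every person in the target population has a known, non-zero chance of being selected from the register. A minimal sketch with a synthetic register (the numbers are illustrative, not SHARE’s actual sampling design):

```python
# Sketch of a full probability sample from a population register: every
# person aged 50+ has a known, equal chance of selection. The register
# here is synthetic, purely for illustration.
import random

random.seed(42)
register = [{"id": i, "age": random.randint(18, 95)} for i in range(100_000)]

eligible = [p for p in register if p["age"] >= 50]  # target population
sample = random.sample(eligible, k=1_000)           # equal-probability draw

inclusion_prob = len(sample) / len(eligible)
print(f"eligible: {len(eligible)}, inclusion probability: {inclusion_prob:.4f}")
```

Because the inclusion probability is known, estimates from the sample can be weighted back to the population, which is what makes the data representative and internationally comparable.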
SHARE consortium
The SHARE data provide a comprehensive picture of people’s physical, mental, and cognitive health, alongside information about their social and economic circumstances, which can also be used to guide and inform policy. Since its inception in 2004, interviews have been conducted with respondents every two years in nine regular waves of data collection. In 2020-21, two additional waves focused on the consequences of the Covid pandemic. Now running for close to 20 years, SHARE data allow researchers to study the changes in respondents’ life circumstances over time. “We try to interview the same respondents every wave and we want people to stay in the SHARE panel survey for as long as possible,” stresses Professor Richter. SHARE also includes data on people living in nursing homes. “We want to learn about the living situation of people in nursing homes. There are also questions about how many people live in a household, about their income and expenditures. We have information on the number of generations living in a household, and there are considerable differences across Europe in this respect,” says Professor Richter.

In SHARE, interviewers in 28 countries go to randomly selected households drawn from official person registers covering the population of interest. The SHARE data are based on full probability samples, providing internationally comparable and representative data.

The European population has changed significantly over recent years, with many countries seeing a large influx of refugees, such that it is also important to bring new people into the survey to reflect these wider demographic shifts. Alongside refreshing the sample, Professor Richter highlights that new questions may need to be added from time to time. The aim is to strike the right balance between maintaining stability in the questions, so as to provide a picture of development over time, while at the same time bringing in new questions to reflect emerging concerns such as climate change, which can have a significant impact on older people. “Many European countries have experienced increasingly intense heatwaves over the last few years, which are a health risk to older people. It might be that we want to ask about their living situation - do they have air conditioning, for example? We would need to think of new questions to cover such emerging challenges,” says Professor Richter.
The SHARE data help researchers to portray a fuller picture of the ageing process and uncover new insights into how people can maintain their independence for longer, reducing the pressure on countries’ health- and long-term care systems. SHARE data are available to researchers free of charge, and they have been used in almost 4,000 publications. Instructions for registering as a data user can be found on the SHARE website: https://shareeric.eu/data/become-a-user. “Researchers can study anything covered by the SHARE dataset and publish papers with it. Some researchers have looked at the connection between physical activity and cognitive function, while others have looked at the impact of the Covid pandemic on loneliness,” outlines Professor Richter. The second pillar of SHARE data use is informing policy development. “Policy makers can use the data to inform their decisions. Policy making is about more than just data and numbers of course, but information is needed for issues such as retirement savings, health and healthcare needs, and older adults’ social networks, to name but a few,” continues Professor Richter.
Future of SHARE
The SHARE data have been used specifically for such questions, for example with researchers providing advice to the German and French governments on pension reform, which can be a highly contentious, divisive issue. While ultimately these are political choices, Professor Richter believes that evidence from the SHARE infrastructure can play an important role in informing and guiding the decisions of policy makers. “With more people living for longer, and the working population shrinking in relative terms, decisions have to be made,” he says. The survey was initially funded for a 20-year term, due to end in 2024, but it will now be extended further, putting SHARE on a more secure long-term footing.

In March 2011, SHARE became the first European Research Infrastructure Consortium (ERIC). This gave the SHARE data collection project a legal personality and capacity in all EU member states and participating associated countries. Moreover, SHARE-ERIC, headed by SHARE’s founder Professor Börsch-Supan and one of the few “European Landmark Research Infrastructures”, serves as the main funding channel for SHARE. “Since 1 January 2023, we have also established the new SHARE Berlin Institute, with additional funding from the German Ministry for Education and Research for the international coordination of the survey,” continues Professor Richter.

While some aspects of SHARE are highly centralised, data are collected by highly committed, locally based teams across the 28 countries. “There are lots of very competent and interesting people from different countries working in SHARE,” says Professor Richter. Their work has established SHARE as an invaluable resource for researchers, yet at the same time Professor Richter says there is a need to bring in fresh ideas to demonstrate the continued relevance of the infrastructure. While continuing to provide a stable and reliable panel database covering long-term socio-economic and health issues, there is a need to modernize the SHARE infrastructure to benefit from technological innovations in survey methodology. By meeting this challenge, SHARE is expected to maintain its position as one of the most valuable data sources for Europe’s policy makers and the scientific community.
SHARE
SHARE – Survey of Health, Ageing and Retirement in Europe
Objectives
SHARE, the Survey of Health, Ageing and Retirement in Europe, is a research infrastructure for studying the effects of health, social, economic, and environmental policies over the life-course of European citizens and beyond.
Funding
SHARE-ERIC is currently receiving funding from the European Union under grant agreements No 101102412 and the European Union’s Horizon 2020 research and innovation programme under grant agreements No 870628, No 101015924. Additional funding from the German Federal Ministry of Education and Research, the U.S. National Institute on Aging and from various national funding sources is gratefully acknowledged (see https://www.share-eric.eu/).
Contact Details
Prof. Dr. David Richter
Director SHARE Infrastructure SHARE BERLIN Institute GmbH, Chausseestrasse 111, 10115 Berlin
T: +49-30 994 042 721
E: drichter@share-berlin.eu
E: info@share-project.org
W: https://www.share-eric.eu/
https://twitter.com/SHARE_MEA
Börsch-Supan, International Journal of Epidemiology. DOI: 10.1093/ije/dyt088
David Richter
David Richter is the Director of the SHARE Infrastructure at the SHARE Berlin Institute and a Professor of Survey Research at the Free University Berlin. As a psychologist, he is interested in the development of emotions, well-being, and life satisfaction across the adult lifespan and the influence of life events on the development of well-being.
A novel system for drug discovery
The majority of systems currently used to screen drugs against Alzheimer’s disease involve the use of neurons from patients or animal models in cellular assays. However, these systems don’t fully reflect the conditions in which the disease develops and progresses, as they capture mostly the molecular but not the electrophysiological phenotype of the disease. As the Principal Investigator of the EU-funded NEUREKA project, Dr Yiota Poirazi is working to develop a new system to model neurodegenerative diseases like Alzheimer’s, which could prove to be an invaluable tool in drug development. “We believe that if we can develop a more realistic representation of brain diseases, whereby both the physiological and molecular characteristics are captured, then it’s more likely to help in finding effective drugs,” she outlines. A lot of the drugs developed to treat Alzheimer’s fail during the regulatory process; the lack of a realistic environment for screening them has been identified as one potential factor in this, something that Dr Poirazi and her colleagues are working to address. “The NEUREKA project aims to change the drug screening process by introducing a hybrid system whereby cultured
neurons are driven by computational models of Alzheimer’s, so as to replicate the disease pathophysiology in vitro,” she says.
Alzheimer’s disease
In Alzheimer’s, calcium ions get into neurons - the main cells in the brain responsible for processing information - which causes neurotoxicity and eventually leads to cell death. “Dendrites, which are essentially the receiving ends of neurons, start to shrink and connections between neurons are lost. There is degradation of neurons in the brain, and eventually neuronal death, because of this toxicity and shrinkage. This neuronal loss leads to deficits in behaviour, memory loss and confusion,” explains Dr Poirazi.
In the project, Dr Poirazi is developing a computational circuit model of neurons, whose morphology and electrical properties are based on experimental data. “It’s a constrained, realistic, computer-based model consisting of a few hundred neurons,” she explains. “In terms of its biophysical properties, like its excitability or its synaptic properties, it is designed to reproduce what is seen in Alzheimer’s neurons.”
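NEUREKA’s circuit models are biophysically detailed and constrained by experimental data; a far simpler leaky integrate-and-fire neuron is enough to show what it means for a simulated neuron to “reproduce” altered excitability. The parameters here, including the lowered firing threshold standing in for a disease-like change, are illustrative assumptions:

```python
# Leaky integrate-and-fire neuron: the membrane potential v integrates an
# input current, leaks back toward rest, and fires/resets at a threshold.
# A toy stand-in for the detailed biophysical models used in the project.
def simulate_lif(i_input, threshold=1.0, leak=0.1, dt=1.0, steps=200):
    """Return the spike times produced by a constant input current."""
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt * (i_input - leak * v)  # leaky integration
        if v >= threshold:              # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

baseline = simulate_lif(i_input=0.15)
# Lowering the threshold (an assumed, illustrative change) makes the same
# input produce many more spikes, i.e. a hyperexcitable phenotype.
hyperexcitable = simulate_lif(i_input=0.15, threshold=0.6)
print(len(baseline), len(hyperexcitable))
```

Tuning such parameters until the model’s firing matches recordings from patient-derived neurons is, in miniature, what “constrained by experimental data” means.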
The computational model is then used to drive activity in cultured neurons, derived from human induced pluripotent stem cells (iPSC). Researchers are essentially recreating human neurons in cultures on a chip; initially these neurons are in a healthy state, but then they are exposed to certain chemical agents, leading to the development of the disease.
“The cultured neurons produce these amyloid beta proteins that accumulate in dendrites of Alzheimer’s patients. We start seeing synaptic deficits and dendritic atrophy, and slowly we are able to reproduce the disease in vitro. We will be reproducing it in a continuous manner, starting from the early phases, all the way through to the more severe, advanced stages,” outlines Dr Poirazi. The focus is on neurons in those areas of the brain related to the human physiology of the disease, primarily the hippocampal neurons. “The hippocampus is the site with the most severe deficits in Alzheimer’s,” says Dr Poirazi. “We will also be using some cortical neurons, as the entorhinal cortex is where the disease is initiated.”

The pharmaceutical industry mainly uses cellular assays to assess drug effectiveness, yet this doesn’t provide information about the effects on neuronal communication, which is disrupted in neurodegenerative diseases. Researchers in the NEUREKA project are developing a novel drug screening system which could facilitate the drug discovery process for neurodegenerative diseases, as Dr Yiota Poirazi explains.

CNRS-LAAS Clean Room. Figure showing the components of the NEUREKA project.
A novel Nanowire Chip
Cultured neurons are connected with the computational network model via nanowire electrodes that allow direct access to subcellular compartments of neurons, like their dendrites. Dr Poirazi says it’s possible to drive learning and memory activity in both the biological and simulated networks through this integrated system. Information can be presented to the computational network, which has various plasticity mechanisms, and it can learn to encode it. “By learning we mean that you can present just a piece of the original information, and the network responds as if you have presented the entire thing. So it completes the missing information. This is how learning is simulated in a network,” explains Dr Poirazi. The same approach can be taken with the cultured neurons, as they are inter-connected with the computational network. “When a memory is presented to the network it drives a particular activity, which is also then delivered to the cultured neurons,” continues Dr Poirazi. “This should drive changes in the cultured
are effective in inducing plasticity changes in these networks. We can also put drugs that help in inducing these changes, and try to achieve a reversal of the Alzheimer’s phenotype,” she outlines. “If a drug is able to reverse these deficits, we will be able to see that when we train the system so as to learn new information.”
Smart, multi-scale drug discovery
The NEUREKA project introduces a smart, multi-scale drug discovery system that Dr Poirazi says is very different from the cellular assays commonly used in the pharmaceutical industry. While conventional cellular assays enable researchers to see whether a drug leads to improvements at the molecular level, the NEUREKA system provides a significantly higher level of detail. “Our system provides information about deficits and restoration at the dendritic and the neuronal circuit levels. It’s a multi-scale system, as we can still zoom in and look at molecular changes,” explains Dr Poirazi. This system could be used not only in assessing the effectiveness of drugs, but also in guiding future development. “It could be used effectively as a prediction system for finding new drug targets,” says Dr Poirazi. “We can look at different properties – subcellular, cellular and circuit level effects of the disease – and then design drugs that target those different properties.”
NEUREKA
A hybrid neural-silico-computo device for drug discovery
Project Objectives
Researchers in the NEUREKA project are working to develop an innovative hybrid technology, combining nanoelectrodes and sophisticated computational models of neuronal circuits. This enables researchers to readout and manipulate activity/ connectivity in cultured neuronal networks of Alzheimer’s disease at subcellular accuracy, opening up new insights into the disease.
Project Funding
Alzheimer Europe’s database on research projects was developed as part of the 2020 Work Plan which received funding under an operating grant from the European Union’s Health Programme (2014–2020).
Grant agreement ID: 863245.
Project Partners
• Universita Degli Studi Di Padova
• Universita’ Degli Studi Di Milano-Bicocca
• Maxwell Biosystems Ag
• Centre National De La Recherche Scientifique Cnrs
• Idryma Technologias Kai Erevnas
Contact Details
Dr Yiota Poirazi, PhD
Group Leader / Director of Research, Institute of Molecular Biology and Biotechnology (IMBB) Foundation for Research and TechnologyHellas (FORTH)
Heraklion
Greece
T: +302810 391139
E: poirazi@imbb.forth.gr
W: http://neureka.gr
W: https://dendrites.gr
neurons as well, so when the network learns something, the cultured neurons should also be able to learn it. The activity in the cultured neurons is fed back to the computational model, in a closed-loop manner, thus allowing the biological and computational networks to learn in synchrony.”
Recent studies have shown that cultured neurons can learn to deal with simple problems, for example playing the video game Pong, and researchers in the project are now using cultured neurons in a similar way. The idea here is to probe deeper into the impact of Alzheimer’s disease and assess the learning capacity of the cultured neurons. “In Alzheimer’s, memory is greatly impaired. We want to see what properties these cultured neurons lack. Can they still learn and pick up new information? To what extent can they do so?” says Dr Poirazi. Once the neurons have been characterised, Dr Poirazi aims to try and reverse any deficits. “This could involve delivering particular stimuli that we know
The primary focus in the project is Alzheimer’s disease, yet Dr Poirazi is clear that the framework holds wider potential and could also be applied to other neurodegenerative diseases. This would require a new computational model specific to the particular disease, as it is the model which essentially drives changes in the cultured neurons, an avenue which Dr Poirazi could explore in future. “If there is industrial interest in this research then we could look into extending the technology to other neurodegenerative diseases,” she outlines. The project is still in its early stages however and there aren’t any clear exploitation plans as of yet, with the current priority more to demonstrate that the system as a whole works efficiently. “The goal is to demonstrate the feasibility of this technology as a proof-of-concept, which we hope will happen in the next few months,” continues Dr Poirazi. “We are not aiming to find new drugs, but to offer a technology that can be used for drug screening.”
Dr Yiota Poirazi is a Research Director at IMBB-FORTH, where she leads a laboratory investigating dendrites. The main focus of the group’s research is how the integrative properties of dendrites contribute to learning and memory functions.
We believe that if we can develop a more realistic representation of brain diseases then it’s more likely to help in finding good, effective drugs. – Dr Yiota Poirazi, PhD
Laser focus on neurodegenerative disease
Millions of people across Europe live with the effects of neurodegenerative diseases like Alzheimer’s and Huntington’s, and the numbers are set to rise further in the coming years. We spoke to Professor Edik Rafailov about the work of the NEUROPA project in developing a new method of treating neurodegenerative disease, based on the emerging field of phytoptogenetics.
There is currently no cure for neurodegenerative diseases like Alzheimer’s and Huntington’s disease, so treatment is typically focused on managing and mitigating the symptoms. Now researchers in the EU-funded NEUROPA project are exploring a potential new approach to treating these diseases, based on ideas from the field of optogenetics. “The main ultimate aim in the project is to treat diseases like Alzheimer’s and Parkinson’s in a non-invasive way,” says Edik Rafailov, coordinator of the NEUROPA project and Professor in the Aston Institute of Photonics Technology at Aston University. Globally, researchers are exploring the potential of optogenetics techniques, involving the use of light to effectively control cells, as a means to stimulate certain parts of the brain. “A type of protein called opsins are known to detect light. These opsins can be placed in biological tissue and illuminated, then it’s possible to activate or inhibit certain cells,” explains Professor Rafailov.
Laser source
The problem in terms of treating neurodegenerative diseases is that while these opsins can be activated by visible light, this wavelength cannot be transmitted through the skull. Professor Rafailov has taken up the challenge of developing a compact ultra-short pulse laser source in the near-IR wavelength range to non-linearly activate phytochromes, a type of photoreceptor found in plants, opening up a new field of research called phytoptogenetics. This wavelength range enables researchers to perform this activation non-invasively through the skull. “We are trying to activate phytochrome switches in a nonlinear way. We can activate these phytochromes with an infrared ultra-short, femtosecond (10⁻¹⁵ of a second) pulse laser,” he outlines. The idea is to use this approach to stimulate cortical areas associated with neurodegenerative disease in a mouse model, and ultimately restore normal function. “Light in the near-infrared wavelengths can propagate deeper into tissue, and it goes through the skull,” says Professor Rafailov. “We will use an ultra-short pulse, which gives us very high peak power.”
These ultra-short pulses can activate, via nonlinear phenomena, phytochromes located close to the surface of the brain. The phytochromes are also delivered into the brain non-invasively via AAVs (adeno-associated viruses), which are administered intra-nasally. “Our partners in the project have developed these AAV viruses using directed evolution methods, and phytochromes will be placed on them. Through this approach phytochromes will be spread around the cortex of the brain,” explains Professor Rafailov. The next step then is to precisely illuminate these phytochromes and control their activity; this is still an emerging technology, and Professor Rafailov says the priority at this stage is to demonstrate feasibility. “The main task at the moment is to show that this idea works and to demonstrate its potential, then maybe it can be taken on further in a future project,”
he says. “This research will not deliver immediate benefits. It’s a new idea, a new concept, and we need to demonstrate feasibility.”
The project concept demonstrator.
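The appeal of ultra-short pulses is easy to quantify: peak power is roughly pulse energy divided by pulse duration, so squeezing even a modest pulse energy into femtoseconds yields enormous instantaneous power. The pulse energy and repetition rate below are assumed, illustrative values, not the specifications of the NEUROPA laser.

```python
# Why femtosecond pulses give very high peak power: peak power is
# approximately pulse energy / pulse duration. All numbers here are
# illustrative assumptions, not the NEUROPA laser's actual specs.
pulse_energy_j = 1e-9        # 1 nJ per pulse (assumed)
pulse_duration_s = 100e-15   # 100 fs pulse (assumed)
rep_rate_hz = 80e6           # 80 MHz repetition rate (assumed)

peak_power_w = pulse_energy_j / pulse_duration_s
avg_power_w = pulse_energy_j * rep_rate_hz

print(f"peak power ≈ {peak_power_w / 1e3:.0f} kW")    # ≈ 10 kW
print(f"average power ≈ {avg_power_w * 1e3:.0f} mW")  # ≈ 80 mW
```

Note the contrast: a kilowatt-scale peak power, needed to drive nonlinear activation, coexists with a milliwatt-scale average power that the tissue actually absorbs.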
A significant degree of progress has been made in this respect. Researchers have demonstrated that phytochromes can be activated nonlinearly with the laser source developed in the project, while the AAV viruses have been developed. “We can place the phytochromes onto the viruses, which can be delivered non-invasively to the mouse brain,” says Professor Rafailov. The next stage will be to develop the laser in a portable configuration, then it can be transferred to the project partners who are working with mice. “Our partners are trying to understand how to deliver phytochromes efficiently and are looking at how the mice react. We will be aiming to show that their brains react in the way that we want,” continues Professor Rafailov. “We are working with Huntington’s disease as a model, as our partners have deep expertise in this area.”
A further aspect of the project’s research involves developing a multimodal hemodynamic brain monitoring and imaging system to study the effects of stimulating and modulating the brain in real-time, enabling researchers to monitor what happens when the phytochromes are activated. This would represent an attractive alternative to current methods of monitoring treatment effectiveness, believes Professor Rafailov. “At the moment MRI is commonly used for brain monitoring. It’s a very good technology, but it’s extremely expensive and it’s difficult to get time to use it,” he explains. “If we can manage to develop this photonics-based new technology to monitor
what’s happening in the brain, then that will be extremely useful.”
Looking to the future
The project is primarily focused on Alzheimer’s and Huntington’s disease at this stage, although there is potential for further development in future and to broaden out the scope of the research. This could include addressing a wider range of neurodegenerative conditions, as well as modifying the laser source so it can penetrate deeper into tissue. “The laser source developed in NEUROPA can only be used to treat diseases which affect the cortex, as we cannot deliver light very deep into the brain. We have certain ideas about how we can penetrate deeper into tissue, but this will require a new project,” outlines Professor Rafailov.
Researchers are exploring the possibility of a successor project, building on the progress made in NEUROPA. “We have submitted some project proposals on how light can be used to treat different diseases,” says Professor Rafailov. “My own background is in laser physics, and we are learning all the time about potential applications in human biology.”
The non-invasive treatment of neurodegenerative disease is one possibility, with a lot of attention currently focused on developing new, more effective treatments to cope with rising demand. With European society aging, and the incidence of neurodegenerative disease set to rise further in the coming years, research into new treatment methods is widely recognised as a major priority. “We are developing a new approach to treat neurodegenerative disease,” says Professor Rafailov.
NEUROPA
A new era for brain therapy
Project Objectives
NEUROPA integrates cutting edge technology in lasers, phytochromes, optogenetics, viral delivery and diffusion wave spectroscopy to treat neurodegenerative diseases.
Project Funding
The NEUROPA Project has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No. 863214.
Project Partners
• Please see website for partner details: https://www.neuropaproject.com
Contact Details
Project Coordinator,
Prof. E. U. Rafailov
Optoelectronics and Biomedical Photonics Group
Aston Institute of Photonics Technologies, College of Engineering and Physical Sciences
Aston University
Birmingham, UK
B4 7ET
T: +44 121 204 3718
E: e.rafailov@aston.ac.uk
W: https://www.neuropaproject.com
W: http://rafailov.org/
Edik Rafailov is a Professor at the Aston Institute of Photonics Technology, part of Aston University. He is a renowned authority in the field of ultrafast lasers and biophotonics, with a publication record comprising more than 500 articles in esteemed refereed journals and conference proceedings.
Understanding the collective behaviour of guineafowl
Vulturine guineafowl live together in large groups, and these groups form part of a large and complex society. Researchers in the ECOLBEH project are looking at how these animals deal with the challenges of living in such a society, and how doing so can help them survive the harsh Kenyan climate, as Prof Dr
Damien Farine explains.
The vulturine guineafowl is a highly social, largely land-based bird endemic to East Africa, where it lives in groups of between 15 and 60 individuals. While the main advantage of this behaviour is that it provides a degree of protection from predators, there are also other benefits. “The birds are likely to benefit in terms of finding food and water, navigating the landscape, and so on,” explains Prof Dr Damien Farine, Scientist at the Max Planck Institute of Animal Behaviour, Eccellenza Professor at the University of Zurich, and Associate Professor at the Australian National University.
Making decisions as a group
As the Principal Investigator of the ECOLBEH project, Prof Dr Farine is studying the ecology of the collective behaviour of vulturine guineafowl. This involves looking at how these animals function as a group, for example how they decide where to go when group members have
different preferences. These birds live together in very stable groups, so they somehow must reconcile their differences and come to an agreement about what to do next. “What we found is that they use very simple rules, akin to voting: any group member can initiate movement, and the group follows the direction of the majority,” says Prof Dr Farine.
His study, based at the Mpala Research Centre in Kenya, relies primarily on using GPS tags to collect precise information on where each group – or in some cases each member of a group – has travelled over time. By putting GPS tags on all individuals from the same group, the research team can collect a wealth of information on where each group member is, as well as how they move relative to each other. “From this information, we could see that the groups make democratic decisions – they essentially vote on where to go next,” says Prof Dr Farine. “However, not all individuals have equal influence – most of the time (but not always!) it is the males that decide.”
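One way to picture this majority rule is with a toy model: each bird proposes a preferred heading, proposals are split into two coarse “camps”, and the group follows the circular mean of the larger camp. The splitting threshold and the averaging rule below are invented for illustration; they are not the project’s actual analysis of the GPS data.

```python
import math

# Toy sketch of majority-based movement decisions (an illustrative
# assumption, not the ECOLBEH analysis pipeline).
def group_heading(preferred_headings_deg, split_deg=90.0):
    """Split proposals into two coarse camps and follow the larger one."""
    ref = preferred_headings_deg[0]
    camp_a = [h for h in preferred_headings_deg
              if abs((h - ref + 180) % 360 - 180) <= split_deg]
    camp_b = [h for h in preferred_headings_deg if h not in camp_a]
    winners = camp_a if len(camp_a) >= len(camp_b) else camp_b
    # circular mean of the winning camp's headings
    x = sum(math.cos(math.radians(h)) for h in winners)
    y = sum(math.sin(math.radians(h)) for h in winners)
    return math.degrees(math.atan2(y, x)) % 360

# Five birds: three want roughly east (~90°), two want west (~270°);
# the group follows the eastward majority.
print(round(group_heading([85, 90, 95, 270, 265])))  # → 90
```

Even this crude rule reproduces the qualitative finding: any member can propose a direction, but the group ends up travelling with the majority.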
An unusual multi-level social structure
As the first project ever conducted on this striking species, and one of the very few behavioural studies ever conducted on any guineafowl, Prof Dr Farine was also excited to gain some new insights into their social systems. Almost immediately, the team made a startling discovery. “We found that groups sometimes come together with other groups to form super groups. This is quite unusual among birds and it has some similarities to primates and other large mammals that live in so-called multi-level societies,” outlines Prof Dr Farine.
This unexpected social structure for birds has been uncovered thanks to Prof Dr Farine’s work in tracking the movement of several groups of guineafowl. “Every group in the population we’re looking at has between 2-6 GPS-tagged birds, and every member is marked with a unique combination of colours on its legs. This allows us to identify all the
members, and we can say where that group has been from the GPS tags,” he outlines.
By combining the large-scale deployment of GPS tags across all of the groups in the population with repeated observations of individually-marked birds within each of the groups, Prof Dr Farine and his colleagues have been able to gain fresh insights into the behaviour of vulturine guineafowl. “We’ve been able to show how each group moved and interacted with other groups, and how the membership of each group has been very stable over several years,” he says. “Groups also contain multiple males and multiple females, all of which can breed if the conditions are right, which again is unusual among birds.”
The project’s research has also yielded new information about other aspects of these birds’ behaviour. Researchers have found that young males in a group will help their mothers with raising the next generation of offspring, and that there are strict dominance hierarchies. “The males in particular have a clear alpha, beta, and so on. When interacting with each other, individuals are very strategic, investing only in fighting with their closest competitors for rank,” says Prof Dr Farine.
These patterns in social structure within groups and across the population raise interesting questions about parallels with other social species that evolved in the same environment. “It also gives us an opportunity to gain insights into how such animal groups – which includes many primates – solve some of the challenges associated with living in large, stable groups,” continues Prof Dr Farine.
Dealing with drought
Recently, Prof Dr Farine’s study area has experienced one of the most severe droughts in half a century. This dramatic change in conditions has allowed researchers to understand how living in a group and forming a multi-level society helps these birds deal with the environmental challenges of living in a semi-arid landscape. The droughts have forced groups to leave their regular home ranges at Mpala in search of food, yet Prof Dr Farine says it can be difficult for the birds to know where to look. “Because group members stay together all of the time, the information that they have about their landscape is limited,” he explains. “During the drought, groups moved away from Mpala together to go in search of resources,” continues Dr Farine. “An exciting question we’re trying to answer next is to determine how groups know where to go.”
One hypothesis is that the females know where to go, as when they come into a group they also bring their prior knowledge, which may include an awareness of food sources. However, Dr Farine says that testing this hypothesis is quite challenging. “We don’t know where the females have come from before they enter our population,” he explains. “In future we hope to increase the scale of our tracking to multiple populations to also capture the movement of female guineafowl as they move from group to group.”
Female dispersal
Another topic of interest is how females decide where to go when they leave their groups in search of new ones, and how they navigate new landscapes. “One female recently completed a 48 km round trip, to the northern border of Laikipia County, in search of a group to join. These movements, which are typically done alone, have revealed some interesting insights on dispersal ecology,” outlines Prof Dr Farine. “First, we’ve found that the females change HOW they move, allowing them to move very large distances with almost no increase in the energetic cost of displacement. They achieve this by a combination of moving straighter and faster, which is more energy-efficient.”
Vulturine guineafowl live in a multilevel society. These are societies in which individuals form groups (coloured circles), and those groups then interact with other groups — with preferences for specific groups as shown by the connecting lines in this group-to-group network. Prior to the work on vulturine guineafowl, such societies were thought to exist only in mammals, including humans, primates, and giraffes.
The team found that changes in how the birds move mean that they can travel up to 33.8% further in a day of dispersal with only an approximately 4.1% increase in the total daily cost of movement. It can however be very challenging for these females to find new groups to enter. “It seems that they are not always welcome,” says Prof Dr Farine. “As a result, we often see females who have made long journeys away from Mpala turning around and heading straight back to their natal group. They often end up staying there for six months until the next window of opportunity to disperse – typically a rainy season – presents itself. As a result, we’ve observed some females only successfully dispersing at more than two years old, which is very late for any bird species.”
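A quick back-of-envelope check puts those figures in perspective: travelling 33.8% further for only a ~4.1% higher daily cost implies a substantially lower energetic cost per kilometre.

```python
# Back-of-envelope check of the dispersal efficiency figures quoted above.
distance_ratio = 1.338   # up to 33.8% further in a day of dispersal
cost_ratio = 1.041       # ~4.1% higher total daily movement cost

cost_per_km_ratio = cost_ratio / distance_ratio
reduction_pct = (1 - cost_per_km_ratio) * 100
print(f"cost per km falls by ≈ {reduction_pct:.0f}%")  # ≈ 22%
```

In other words, by moving straighter and faster, a dispersing female cuts her per-kilometre cost of travel by roughly a fifth.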
Making large movements as a group
Despite having the ability to fly, vulturine guineafowl almost exclusively move by walking on the ground. “They can travel for 50 kilometres or so without flying. The furthest they fly is the 10 to 20 m that is required to cross a river. Otherwise they walk the whole way,” continues Prof Dr Farine. As part of his
work in the ECOLBEH project, Prof Dr Farine has studied the energy efficiency of groups making these big movements, which has yielded some interesting findings. “When we compare the long-distance movements that groups make with those made by individuals during dispersal, we find that groups gain smaller efficiency benefits than dispersing individuals,” he outlines. Moving as a group is less efficient than moving alone. “This is likely because groups constantly face conflicting preferences about where to move, causing them to frequently slow down or even stop as they resolve these,” concludes Prof Dr Farine.
Moving forward with the project
The project’s primary focus is the vulturine guineafowl, yet this research also holds relevance to our understanding of social structures and collective behaviour in other species. These birds don’t conform to established ideas about how birds are thought to behave, one point which stimulated Prof Dr Farine’s interest. “These guineafowl have quite similar structures within their groups to those found in baboon societies living in arid environments,” he explains. Certain aspects of collective behaviour have long been thought of as being unique to primates, but now the
question arises of whether other organisms are capable of expressing similar types of behaviours, a topic Prof Dr Farine hopes to investigate further. “We’d love to continue working on this, subject to continued funding and the goodwill of our Kenyan hosts,” he says. “A lot of what we’ve learnt about social evolution, social biology and human evolution has been gained by studying species like baboons, and the vulturine guineafowl is a very interesting and complementary species in this respect.”
While the drought has made it difficult to tackle some of the questions that researchers had initially planned to look at around collective decision-making, cooperative breeding and changes to group structure, Prof Dr Farine says it has also opened up new avenues of investigation. One centres around the movement patterns of the guineafowl. “For example, we’ve found that vulturine guineafowl have started making nomadic movements, and these movements are likely to capture some really fine-scale information about the state of the environment,” he outlines. The guineafowl could effectively be used as environmental sensors, providing important information for land managers and conservationists. “The guineafowl can effectively tell us on a day-by-day basis what conditions are like on the ground,” continues Prof Dr Farine. “We are only studying them in a very small area, but if you could tag birds over a much larger area, you could potentially sense this information over thousands of square kilometres.”
Researchers in the project are helping build a deeper picture of conditions on the ground, which can then inform decision-making in areas like conservation, land management and planning. Alongside building a stronger evidence base, Prof Dr Farine also hopes to fill in some gaps in our understanding of social evolution, and the capacity of different species to evolve certain types of social behaviours.
“When we consider social behaviour in birds, we quickly find that researchers have almost exclusively studied birds only when they are breeding. This is distinct from primatologists, who follow primate groups all year round, and investigate other aspects of their social behaviour,” he says. The aim is to bridge these two disciplines and bodies of knowledge. “We’re taking more of a primatology-like approach to studying these groups of guineafowl, rather than focusing solely on their social behaviour during the breeding season,” explains Prof Dr Farine. “This is helping us uncover new insights – our research is also causing us to rethink many of the assumptions in our field.”
The project is now over halfway through its overall funding term, with Prof Dr Farine exploring different options to extend this research. One possibility is collaborating with other research projects working in related areas. “We hope to work with other projects that work on plant communities and other species, like small mammals, to gain a more integrated insight into what happens when drought hits these types of ecosystems,” he says. “This is something that is very special about working at the Mpala Research Centre, which is a gathering place for leading ecologists to do their research.”
ECOLBEH
The Ecology of Collective Behaviour
Project Objectives
The project aims to understand the fundamental building blocks of animal societies. This includes
• how and why groups form
• how groups are maintained (e.g. how groups make consensus decisions to remain cohesive)
• how animals navigate their social landscape (e.g. how dispersing individuals integrate into a new group)
• how multilevel societies allow animals to overcome harsh conditions and adapt to novel climates.
Project Funding
This Project has received funding from the European Union Horizon 2020 research and innovation programme under grant agreement No 850859.
Laboratory Members
https://sites.google.com/site/drfarine/lab-members
Contact Details
Project Coordinator,
Prof. Dr. Damien Farine Eccellenza Professor
Department of Evolutionary Biology and Environmental Studies
University of Zurich
Associate Professor
Division of Ecology and Evolution Research School of Biology
Australian National University
Affiliated Scientist, Department of Collective Behavior, Max Planck Institute of Animal Behavior
E: damien.farine@ieu.uzh.ch
E: damien.farine@anu.edu.au
W: https://sites.google.com/site/drfarine/home
Prof. Dr. Damien Farine is currently an Eccellenza Professor at the University of Zurich, an Associate Professor at the Australian National University, and an Affiliated Scientist at the Max Planck Institute of Animal Behavior. He was previously a Principal Investigator at the Max Planck Institute of Animal Behavior, a Principal Investigator at the Centre for the Advanced Study of Collective Behaviour, and a Lecturer at the University of Konstanz. Prior to this, he was a Postdoctoral Researcher at the University of Oxford and a Smithsonian Tropical Research Institute fellow based at the University of California Davis. He completed his PhD at the Edward Grey Institute of Field Ornithology at the University of Oxford.
In 2018, he was awarded the Christopher Barnard Award for Outstanding Contributions by a New Investigator by the Association for the Study of Animal Behaviour. In 2019, he was awarded an ERC Starting Grant on The Ecology of Collective Behaviour. He has also been included in the 2019, 2020, and 2021 ISI Highly Cited Researchers lists.
Prof. Dr. Damien Farine
Vulturine guineafowl are highly vocal, and their calls can be heard a long way. This could allow groups to communicate with – and track the location of – other groups in the landscape.
Sensory perception of copepods
A type of extremely small crustacean, copepods are highly abundant in both saltwater and freshwater environments and are found in a variety of different habitats, from surface waters right down to the ocean floor. A wide variability in swimming behaviour has also been observed among copepods, with some species that essentially drift or are carried on ocean currents, while others have very good swimming ability. “Some copepods are very good swimmers for their size. The bottom of their body is a bit like a shrimp. They have six legs, which they use effectively as oars,” outlines Christophe Eloy, Professor of Fluid Mechanics at Centrale Marseille. As the Principal Investigator of the C0PEP0D project, Professor Eloy is investigating the behaviour of these strong-swimming crustaceans, looking to understand how they sense the surrounding environment with only very limited visual perception abilities. “We want to understand their behaviour, so we want to understand how they use the sensations or signals they acquire and how they behave,” he explains.
Copepod behaviour
These copepods have only a very primitive eye, so rely on other sources of information to sense their environment. Copepods have very long antennae covered with sensors, which they use to measure the flow of water relative to their bodies. “This is how they reconstruct their surroundings,” says Professor Eloy. These sensors enable copepods to detect when a predator or their prey is nearby for example, as well as to pinpoint their location in space, yet the underlying mechanisms behind this are not clear. “We have very little idea how the copepods do this,” acknowledges Professor Eloy.
This topic lies at the heart of the project’s work, with researchers developing hypotheses on what optimal or ideal behaviour would look like for copepods. These hypotheses can then be tested in a virtual environment closely resembling the conditions that copepods face in marine environments. “We are trying to infer what the copepods should do, if they wanted to
function in something close to an optimal way, then we look to assess whether this hypothesis is correct. We try to simulate the information or signals that copepods may encounter, then we essentially listen and learn. We try to simulate what a certain behaviour would achieve,” says Professor Eloy. “For example, I recently co-authored a paper titled Surfing on turbulence, which explored whether using hydrodynamical signals would help copepods to increase their vertical swimming speed in turbulent waters. We assumed copepods can measure the gradients of the flow, and then we investigated whether copepods can swim more efficiently by choosing their swimming direction based on this information.”
Researchers showed that the behaviour they had hypothesised would prove to be very efficient, with copepods’ ability to adapt to the extent of the flow helping them reach mean velocities which were much higher than would otherwise have been possible. Another aspect of Professor Eloy’s research centres around investigating how males follow the pheromone trails left by females. “There is evidence that males change behaviour and swim faster when they encounter a path that was previously followed by a female. It is probable that this is based on chemical sensing, so they sense this pheromone,” he outlines. This sensing may be disrupted in turbulent waters, another topic of interest in the project. “The natural habitat of copepods is not completely quiet, and a pheromone trail will be stirred and stretched by turbulent flow,” points out Professor Eloy. “Does this chemical sensing approach still work in turbulent flow? If it does, can the copepods exploit the fact that they can measure both the chemical concentration and the flow?”
The project’s agenda includes research into how copepods effectively process this information. Copepods are highly resilient organisms that have evolved to become a ubiquitous feature of marine environments, and researchers are now looking at the reasons behind this success. “Some members of my team are developing algorithms to basically find behaviours. We use reinforcement learning, a branch of machine learning, to train a neural network and find the optimal or ‘good’ behaviour,” outlines Professor Eloy. The project is still at a relatively early stage, with the team conducting experiments on real copepods to verify that their hypotheses match reality, while Professor Eloy also hopes to broaden the scope of the research in future.
“We hope to see whether this approach could be extended to other planktonic organisms, while it would also be very interesting to look at predator-prey relationships,” he says. “We sometimes see that prey adapt when a predator evolves a certain strategy, so there’s a kind of evolutionary arms race going on.”
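The reinforcement-learning idea mentioned above can be pictured with a deliberately minimal, tabular toy: a virtual “copepod” that senses only the sign of the local vertical flow and learns by trial and error which swimming action maximises upward progress. The states, reward and parameters here are invented for illustration; the project itself trains neural networks on far richer flow and chemical signals.

```python
import random

# Toy reinforcement-learning sketch (NOT the C0PEP0D model): the agent
# senses only the sign of the local vertical flow and learns which
# swimming action gives the best net upward displacement.
random.seed(0)

ACTIONS = [-1, +1]   # swim down / swim up
Q = {(s, a): 0.0 for s in (-1, +1) for a in ACTIONS}
alpha, eps = 0.1, 0.1

def flow():
    """Random up- or down-welling patch encountered by the copepod."""
    return random.choice([-1, +1])

state = flow()
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < eps:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(state, x)])
    # reward: net vertical displacement = own swimming + flow advection
    reward = a + 0.5 * state
    # one-step value update (a contextual bandit: no bootstrapping needed)
    Q[(state, a)] += alpha * (reward - Q[(state, a)])
    state = flow()

# In this toy, swimming up dominates advection, so the learned policy
# prefers the upward action in every flow state.
best = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in (-1, +1)}
print(best)
```

The real problem is vastly harder (continuous turbulent flows, noisy sensors, neural-network policies), but the loop is the same: simulate the signals the animal could plausibly sense, let the learner propose a behaviour, and then ask whether real copepods behave the same way.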
Predator-prey relationships
A prime example is the way that certain predators approach copepods in the water. Some species suck in water with their mouths as they move, counteracting the flow their bodies generate, which makes them more difficult to detect.
“That makes them almost invisible to what is in front of them, in terms of the flow generated, while prey have also evolved to make themselves less visible to predators,” says Professor Eloy. The project’s agenda crosses several disciplinary boundaries, and while Professor Eloy’s background is in fluid mechanics, he is collaborating with other projects and researchers in complementary areas. “We are collaborating with biologists who are providing us with the copepods to do the experiments, as well as the algae they eat,” he continues. “We wanted to understand the differences between copepod species, and the variations of behaviour that could be observed.”
Copepods are ubiquitous in both saltwater and freshwater environments, and many species migrate up and down the water column at different times of day to gather food and avoid predators.
The wider aim in the project is to essentially reverse-engineer the algorithms by which copepods process the information available to them about their surroundings, which could then inform the ongoing development of machine learning. The project’s interdisciplinary nature, bringing together biology, fluid mechanics and artificial intelligence, is a distinguishing feature. “Marine biologists observe these organisms, while my field is fluid mechanics. My focus is on trying to understand the flow and the signals,” explains Professor Eloy. “Our research also holds relevance to machine learning. People trying to build robots for detecting different things without vision are basically trying to solve the same problem that we are addressing. Our project very much lies at the crossroads between these different topics.”
C0PEP0D
Life and death of a virtual copepod in turbulence
Project Objectives
The objective of C0PEP0D is to decipher how copepods exploit hydrodynamic and chemical sensing to track targets in turbulent flows. We address three questions:
• Q1: Mating. How do male copepods follow the pheromone trail left by females?
• Q2: Finding. How do copepods use hydrodynamic signals to “see”?
• Q3: Feeding. What are the best feeding strategies in turbulent flow?
We hypothesize that reinforcement learning can help reverse-engineer the algorithms used by copepods. To test this hypothesis, we are building a virtual environment where copepods are trained: virtual copepods sense flow velocity and chemical concentration, and this sensing information is processed by a neural network trained by reinforcement learning. This theoretical and numerical approach is complemented by experiments on real copepods, with the goal of measuring how copepods react in turbulent flow.
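The training loop described above can be illustrated with a deliberately minimal sketch. The snippet below is not the project’s code: it uses a toy one-dimensional world, replaces the neural network with tabular Q-learning, and all names and parameter values are illustrative assumptions. It only shows the core reinforcement-learning idea of an agent that senses a local signal (here, the sign of a chemical gradient) and learns by trial and error how to swim toward a source.

```python
import random

# Toy stand-in for the C0PEP0D idea: a "virtual copepod" senses a local
# chemical signal and is trained by tabular Q-learning to swim toward the
# source. Illustrative sketch only; the real project uses neural networks
# and turbulent-flow simulations.

random.seed(0)

SOURCE = 0                      # position of the chemical source
ACTIONS = (-1, +1)              # swim left or right

def sense(pos):
    """One-bit observation: sign of the concentration gradient (-1, 0, +1)."""
    # Concentration decays with distance, so the gradient points toward SOURCE.
    return (SOURCE > pos) - (SOURCE < pos)

Q = {}                          # Q[(observation, action)] -> estimated value
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(200):
    pos = random.randint(-10, 10)
    for step in range(40):
        obs = sense(pos)
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q.get((obs, act), 0.0))
        new_pos = pos + a
        reward = 1.0 if new_pos == SOURCE else -0.01   # reward for reaching the source
        new_obs = sense(new_pos)
        best_next = max(Q.get((new_obs, b), 0.0) for b in ACTIONS)
        q = Q.get((obs, a), 0.0)
        Q[(obs, a)] = q + alpha * (reward + gamma * best_next - q)
        pos = new_pos
        if pos == SOURCE:
            break

# The learned policy should map each gradient sign to swimming toward the source.
policy = {obs: max(ACTIONS, key=lambda act: Q.get((obs, act), 0.0)) for obs in (-1, +1)}
print(policy)
```

Even this toy version contains the logic of the virtual environment: sensing, acting, reward, and value updates. The project replaces the one-bit observation with simulated flow velocities and chemical concentrations, and the lookup table with a trained neural network.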
Project Funding
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 834238).
Project Partners
https://c0pep0d.github.io/team/
Contact Details
Project Coordinator, Professor Christophe Eloy
Christophe Eloy is a Professor of Fluid Mechanics at Centrale Méditerranée. His research interests span several fundamental problems in solid and fluid mechanics, including fluid-structure interactions, hydrodynamic instabilities and animal locomotion.
Probing the role of parasitic fungi
Parasitic fungi are known to infect phytoplankton cells in the world’s oceans, yet their wider role and impact on global biogeochemical cycles is unclear. We spoke to Isabell Klawonn, Ph.D. about her research into the function of these fungi and how they affect interactions between phytoplankton and bacteria.
The role of marine fungi in carbon and nutrient cycling has not historically attracted much attention in research, partly due to the difficulty of identifying them under a microscope. Over the last decade or so a lot of effort has been devoted to sequencing environmental communities, indicating a widespread distribution of fungal microparasites. Now Dr Isabell Klawonn and her team at the Leibniz Institute for Baltic Sea Research (IOW) are looking to probe deeper into the impact of these fungi on global biogeochemical cycles. “On the one hand we want to look at how they drive, change or modulate interactions between phytoplankton and bacteria, while we also want to look at how they affect the carbon fixation activity of those phytoplankton, and then at the fate of the fixed carbon,” she outlines. This research is centered on a type of fungi called Chytridiomycota – or chytrids – which infect and parasitise certain kinds of plankton, with Dr Klawonn looking mostly at diatoms and cyanobacteria. “These are both types of microalgae,” she says. “Diatoms are very important when looking at the global scale, as they frequently form blooms in the global ocean. Cyanobacteria – also called blue-green algae – can be toxic and get infected by a parasite.”
This process begins when the parasite finds a suitable host and is encysted, meaning that it essentially attaches itself to the host. The fungal microparasite then penetrates the host and effectively reaches out with rhizoids, which can be thought of as part of its digestive system, and transports the nutrients from the diatom cell to its outer structure, the sporangium. “This structure grows and grows over time. Eventually it opens up and new fungal zoospores are released, and then they try to find a new host,” explains Dr Klawonn. The aim now for Dr Klawonn and her colleagues in her Emmy Noether-funded research group at the IOW is to build a fuller picture of the role of these microparasites, work in which they are using a variety of experimental approaches. “We try to go from single cells to entire populations. A population would be for example a community with only one species, and then we can go to mixed communities
which include several species,” she outlines. “We want to find out what’s going on at the single cell level, and then we are trying to extrapolate from that to a wider community.”
Plankton communities
Researchers in Dr Klawonn’s group are both working with natural plankton communities and isolating cells to conduct experiments on cultures in the lab. In these experiments, researchers compare a host culture that is infected with a fungal parasite to one that isn’t, and look at certain key parameters. “One crucial consideration is how different populations develop in terms of cell abundance, in terms of how many cells get infected. We also look at chlorophyll content, which is usually a sign of photosynthetic activity, then we can use more sophisticated methods to probe deeper into other aspects,” says Dr Klawonn. One major aspect of interest is the so-called fungal shunt, the process by which carbon is transferred from a host cell to the fungal parasitic cell. “In one publication we quantified how much carbon is transferred from one phytoplankton cell – the diatom Asterionella formosa – to the outer structure of the fungus, which is called the sporangium,” continues Dr Klawonn. “At the moment we are able to trace carbon shunted from phytoplankton to the sporangia, and to the zoospores that are released later on.”
We want to look at how fungal microparasites drive, change or modulate interactions between phytoplankton and bacteria, while we also want to look at how they affect the carbon fixation activity of those phytoplankton.
Sampling of plankton communities on-board the research vessel R/V Sonne. Credit: N. Choisnard.
Isolates of model pathosystems growing under controlled laboratory conditions.
A further dimension of the group’s research involves looking at the effects of fungal parasitism on the fate of sinking organic matter in the coastal ocean, in particular aggregates formed when phytoplankton cells stick together or to other material. Recent findings from lab-based research suggest that carbon fixed by the phytoplankton is remineralised faster in surface waters, rather than settling down on the seabed. “This is because it’s transferred to the fungus, then the fungus releases these zoospores, and they stay in the surface water,” explains Dr Klawonn. The researchers have also found that the likelihood of phytoplankton sticking to each other and forming aggregates can depend on whether they have been infected by a parasite. “These phytoplankton cells usually produce a carbon-rich mucus on their exterior, which is kind of sticky. If two cells are very adhesive they would stick together and then form these aggregates,” says Dr Klawonn.
Phytoplankton-bacteria interactions
This is a topic Dr Klawonn hopes to investigate in more detail in future, alongside continued research into the interactions between phytoplankton and bacteria and how those interactions change during chytrid epidemics.
The group’s research could also hold important implications in terms of detecting chytrids, a topic that Dr Klawonn says has been relatively neglected. “The countries around the Baltic Sea have conducted extensive research over the last few decades. It’s one of the best-monitored areas worldwide, yet still, these chytrids haven’t attracted much attention,” she explains. While a lot of effort is devoted to monitoring and analysing the water around the Baltic Sea, it remains difficult to recognise when phytoplankton are infected with these microparasites, an issue that Dr Klawonn is working to address. “We have suggested a method of double-staining the chitin cell wall within chytrids, which we think helps people to recognise infections,” she outlines. “It’s a very simple method. We provide advice on how to best use the protocol and what to consider when a staining is positive.”
An effective long-term chytrid monitoring regime would require high-throughput methodologies, yet the focus for Dr Klawonn is more on in-depth research into chytrids and their function. This research is very much ongoing, with Dr Klawonn planning to explore different avenues of investigation to build a deeper picture of fungal parasitism and its wider importance. “For example, in future we will probe deeper into the molecular biology of these fungal microparasites,” she says.
FunPhy
Fungal infections on phytoplankton — cryptic perturbation of phytoplankton growth, recycling and sedimentation
Project Objectives
The Emmy Noether Junior Research Group will elucidate the functional and quantitative role of parasitic fungi on phytoplankton blooms and element cycling in brackish and marine waters. Our comprehensive approach will include experimental work on phytoplankton–fungi co-cultures as well as field-sampled plankton communities, spanning from single-cell up to mesoscale flux measurements across the water column.
Project Funding
This project is funded by the DFG - Deutsche Forschungsgemeinschaft.
Project Partners
• Silke Van den Wyngaert, University of Turku, Finland
• Luca Zoccarato, University of Natural Resources and Life Sciences, Vienna, Austria
• Maliheh Mehrshad, Swedish University of Agricultural Sciences, Uppsala, Sweden
• Martin Whitehouse, Swedish Museum of Natural History, Stockholm, Sweden
Contact Details
Project Coordinator, Isabell Klawonn, Ph.D.
Junior Research Group Leader, Emmy Noether Program (German Research Foundation)
Microbial Plankton and Biogeochemistry
Leibniz Institute for Baltic Sea Research, Warnemuende (IOW), Office 404
T: +49 381 5197-0
E: isabell.klawonn@io-warnemuende.de
W: https://www.io-warnemuende.de/isabell-klawonn-en.html
Klawonn, I., S. Van den Wyngaert, A. E. Parada, N. Arandia-Gorostidi, M. J. Whitehouse, H.-P. Grossart and A. E. Dekas (2021). Characterizing the “fungal shunt”: Parasitic fungi on diatoms affect carbon flow and bacterial communities in aquatic microbial food webs. Proc. Nat. Acad. Sci. U.S.A. 118: e2102225118, doi: 10.1073/pnas.2102225118
Klawonn, I., S. Van den Wyngaert, M. H. Iversen, T. J. W. Walles, C. M. Flintrop, C. Cisternas-Novoa, J. C. Nejstgaard, M. Kagami and H.-P. Grossart (2023). Fungal parasitism on diatoms alters formation and bio-physical properties of sinking aggregates. Commun. Biol. 6, 206, doi: 10.1038/s42003-023-04453-6
Isabell Klawonn is a Junior Research Group Leader at the Leibniz Institute for Baltic Sea Research. She gained her PhD from Stockholm University and was a postdoctoral scholar at Stanford University in the US before taking up her current role.
Secrets of the Hidden Oceans
In our solar system, in addition to the eight planets and five dwarf planets, NASA counts 290 moons and 460 natural satellites orbiting everything from dwarf planets to asteroids. This leaves room for much research, with an abundance of mysteries in our own solar system still to unravel. The results of measurements sent back from exploratory spacecraft have yielded remarkable and surprising discoveries that have shaken up science. In recent decades, we have begun to put together strong evidence that under the surface of many moons, reachable by spacecraft, are vast hidden ocean expanses, and that it might be possible that in at least one of them, or even several, some level of life resides. Scientists are now determined to find more answers with exploratory missions and rigorous analysis.
The icy orb of Europa, one of the more prominent of Jupiter’s 95 known moons, was the first to reveal evidence of a sub-sea environment. The Voyager (launched in 1977) and Galileo (launched in 1989) space probes flew by the moon, making scientists aware of the possibility that there was a subsurface ocean. Galileo’s measurements indicated that an electrically conductive fluid, salt water, was in abundance beneath the surface of this moon, causing magnetic fluctuations.
More recent studies indicate the ocean has a huge influence over Europa, as the mass of water pushes the ice shell along, possibly speeding up and slowing down the rotation of the icy shell of the moon over time.
In 2020, NASA scientists revealed further research supporting the idea that Europa’s ocean could harbour life, citing its similarity to Earth’s oceans. At NASA’s Jet Propulsion Laboratory in California, a research team modelled geochemical reservoirs within the interior of Europa using data from the Galileo mission.
By Richard Forsyth
Mohit Melwani Daswani, the lead researcher, said: “…Our simulations, coupled with data from the Hubble Space Telescope showing chloride on Europa’s surface, suggest that the water most likely became chloride rich. In other words, its composition became more like oceans on Earth. We believe that this ocean could be quite habitable for life… Europa is one of our best chances of finding life in our solar system.”
Europa is not a one-off in the Jovian system. It’s now thought that Jupiter has three icy moons that potentially contain substantial underground oceans of liquid water – the other two being Ganymede (a moon that is bigger than the planet Mercury) and Callisto (although this is now in contention).
Saturn, it transpires, also has moons with hidden oceans. Saturn’s moon Enceladus is thought to have an interior ocean 10km deep – found through gravitational anomalies.
Another of Saturn’s moons, the planet-like Titan (which also has liquid methane lakes on its surface), conceals an underground ocean of water.
“…There is a highly deformable layer inside Titan, very likely water, able to distort Titan’s surface by more than ten metres,” indicated Luciano Iess of the Università La Sapienza in Rome, who was the lead author of a paper on the subject, published in Science magazine.
Four of the largest moons around Uranus – Ariel, Umbriel, Titania and Oberon – are thought to harbour similar watery oceans. Remarkably, Pluto, the dwarf planet on the outer edges of the solar system, may also harbour an undersurface ocean. In fact, a total of 23 moons and planets are said to have, or have the potential for, sub-surface water.
We’ve yet to find an organism that does not need water to survive, and where there is water, there is usually life. It’s this premise that has made the discovery of so many hidden undersurface oceans on moons in our own solar system tantalising for scientists. Is extraterrestrial life closer to home than we dare to imagine?
Ice Rafting: View of a small region of the thin, disrupted ice crust in the Conamara region of Jupiter’s moon Europa showing the interplay of surface color with ice structures. © NASA/JPL/University of Arizona
“We are building upon the scientific insights received from the flagship Galileo and Cassini spacecraft and working to advance our understanding of our cosmic origin, and even life elsewhere.”
Research presented at the Lunar and Planetary Science Conference in 2021 concluded that underground oceans are likely to be common in other solar systems too, and may be preferable environments for life to flourish compared to Earth’s above-surface oceans.
The smoking gun for life?
Without sunlight and without photosynthesis, the idea of life was once deemed impossible. However, the discovery of life at the darkest, deepest depths of the ocean around fissures over volcanic locations has changed that perception.
Hydrothermal vents, which pump superheated water into the deep ocean, are typically found near volcanically active places.
On Earth, such places can be teeming with life. The vents create large towers, nicknamed smokers, which are effectively natural chimney stacks on the seabed pouring out mineral-rich hot water. With no sunlight, aquatic plant and animal life can still exist in abundance around these towers, and this makes subsurface oceans on moons potential places where extra-terrestrial lifeforms could proliferate.
It has been suggested, for example in the 2008 paper ‘Hydrothermal vents and the origin of life’, that the vents could indeed be an evolutionary starting point for life.
To validate the possibility of life in such places, an analysis is needed of each ocean’s water composition.
The latest analysis of data on Enceladus, captured by NASA’s Cassini probe, which swung by the Saturn moon between 2004 and 2017, has created a renewed sense of excitement in the study of these subsurface oceans. Huge jets of water are ejected into space from the moon, and the probe collected a sample of the particles from those events on its flight path through Saturn’s E ring, which is fed by the ejections. It discovered high concentrations of sodium phosphates – molecules of sodium, oxygen, hydrogen and phosphorus atoms.
The data meant that, back on Earth, scientists could recreate and simulate the conditions of the ocean. The scientists working on this experiment realised that the phosphorus concentrations were 100 times higher on Enceladus than in Earth’s seas.
The conclusion was that the concentrations would be possible if there was a cold seafloor or hydrothermal vents. The high phosphate concentrations would imply an interaction between rocky minerals and carbonated liquid water. It is also a good indicator that life is possible.
Frank Postberg, planetary scientist at Freie Universität Berlin, Germany, lead author of a study that was published in Nature – said recent research revealed, “… the clear chemical signature of substantial amounts of phosphorus salts inside icy particles ejected into space by the small moon’s plume. It’s the first time this essential element has been discovered in an ocean beyond Earth.”
Co-investigator Christopher Glein, planetary scientist and geochemist at Southwest Research Institute in San Antonio, Texas, added: “High phosphate concentrations are a result of interactions between carbonate-rich liquid water and rocky minerals on Enceladus’ ocean floor and may also occur on a number of other ocean worlds. This key ingredient could be abundant enough to potentially support life in Enceladus’ ocean; this is a stunning discovery for astrobiology.”
The towering core of NASA’s Europa Clipper spacecraft is shown in the storied Spacecraft Assembly Facility at the agency’s Jet Propulsion Laboratory in Southern California. Standing 10 feet (3 meters) high and 5 feet (1.5 meters) wide, the craft’s main body will be the focus of attention in the facility’s ultra-hygienic High Bay 1 as engineers and technicians assemble the spacecraft for its launch to Jupiter’s moon Europa in October 2024.
© NASA/JPL-Caltech
“There is a highly deformable layer inside Titan, very likely water, able to distort Titan’s surface by more than ten metres.”
New Missions
The European Space Agency (ESA) launched a spacecraft in April 2023 that is heading toward Jupiter with the aim of making observations of the gas giant and the three ocean-hiding moons – Ganymede, Callisto and Europa – using a suite of remote sensing instruments. JUICE, which stands for the Jupiter Icy Moons Explorer, will arrive at its destination in 2031 and spend three years researching the moons’ oceans.
In addition, in 2024 NASA will launch the Europa Clipper, a robotic spacecraft also designed to investigate Jupiter’s most famous moon, tasked with determining whether Europa has conditions suitable to sustain life, although it is not intended to directly detect life. The Europa Clipper will be the largest spacecraft to fly an interplanetary mission. It will embark on a six-year, 2.9-billion-kilometre journey to Europa. It will fly by the moon 50 times whilst orbiting Jupiter, using a variety of instruments to analyse the moon with every pass.
Thomas Zurbuchen, associate administrator for the Science Mission Directorate at NASA Headquarters in Washington said of the mission: “We are building upon the scientific insights received from the flagship Galileo and Cassini spacecraft and working to advance our understanding of our cosmic origin, and even life elsewhere.”
Whilst the hard ice surfaces of these moons are hundreds of degrees below zero, the deeper inside the moons you go, the warmer it becomes, until the ice eventually melts into water. Just how far you have to descend before there is liquid is one of the questions that the JUICE and Europa Clipper missions are hoping to resolve. The oceans are not small either: Europa may have more than double the amount of water of all the oceans of Earth combined. There is a chance Europa could have underwater volcanic activity on a seabed that would help it support life, and of course, there is also a possibility that it’s completely lifeless.
The proliferation of underground oceans in our own solar system is a relatively recent revelation, and in 2014 it became apparent that Earth too may have its own huge hidden ocean, out of sight, within the mantle.
This was the conclusion of a scientific paper, ‘Dehydration melting at the top of the lower mantle’, by a team of researchers at Northwestern University in Evanston, Illinois. It proposed that a deep-water reservoir exists in the Earth’s mantle transition zone, between 410 and 660 kilometres down. The water would be stored in a mineral called ringwoodite, which acts like a sponge to trap water. It would be more water than all the above-surface oceans combined, and could even help explain where the seas above ground originated.
In the coming years, with missions and science now focusing heavily on subsurface oceans, we are going to find out a lot more about their relevance. The hope is we may discover that life is more abundant than we had ever envisaged, hidden out of sight, underground, in the darkness of ocean water.
“Europa is one of our best chances of finding life in our solar system.”
This artist’s concept illustrates two possible cut-away views through Europa’s ice shell. In both, heat escapes, possibly volcanically, from Europa’s rocky mantle and is carried upward by buoyant oceanic currents. If the heat from below is intense and the ice shell is thin enough (left), the ice shell can directly melt, causing what are called “chaos” on Europa, regions of what appear to be broken, rotated and tilted ice blocks. © NASA/JPL/Michael Carroll
Artist’s illustration of Jupiter and Europa (in the foreground) with the Galileo spacecraft after its pass through a plume erupting from Europa’s surface. A new computer simulation gives us an idea of how the magnetic field interacted with a plume. The magnetic field lines (depicted in blue) show how the plume interacts with the ambient flow of Jovian plasma. The red colors on the lines show more dense areas of plasma. © NASA/JPL-Caltech/Univ. of Michigan
Fantasy art of an alien ocean. © NASA
A new deal for the European Central Bank?
The role of the European Central Bank has changed dramatically since 2010, as it has acquired new responsibilities. These transformations are deferred consequences of the rise of corporate capitalism, which parallels the evolution of federal government institutions in the US following the New Deal, argues Dr Christakis Georgiou.
The US federal government has existed in its present constitutional form since 1789, but its role has changed significantly since then, as society has evolved and new institutions have been established. The US government’s role in society and the economy changed dramatically in the early 1930s, for example, as President Roosevelt pushed through the New Deal; parallels can be drawn between this period and more recent times in Europe, believes Dr Christakis Georgiou. “Something similar happened in the EU in the 2010s with the transformation of the role of the European Central Bank and the debate on an EU fiscal capacity,” he says. As the Principal Investigator of an SNSF-funded research project, Dr Georgiou is now exploring these parallels in the evolution of federal macroeconomic government institutions. “I am looking at the way in which the US federal government grew fiscally from the New Deal onwards, and at the ongoing debate in the EU about developing fiscal policy at the EU level,” he outlines.
Social sources of large-scale monetary and fiscal federations: An EU-US Comparison
This project has been funded by the Swiss National Science Foundation SNSF.
This article has been funded by the CCDSEE research centre, affiliated to the Global Studies Institute of the University of Geneva.
Project Coordinator, Christakis Georgiou Global Studies Institute Sciences II Université de Genève
T: +41 379 90 82
E: Christakis.Georgiou@unige.ch
W: https://www.unige.ch/gsi/fr/presentation/enseignants/cer/christakis-georgiou/
W: https://www.unige.ch/gsi/fr/presentation/centres-de-recherche-affilies/le-centre/
Role of central banks
This research partly focuses on the European Central Bank (ECB), which was formally established in 1999. The ECB controls monetary policy in the Eurozone countries, while fiscal policy is decentralised to EU Member States, which it has been argued is a point of fragility. “This meant that individual member states could not count on the central bank to print money to make sure that they would pay back their debts under any circumstances,” explains Dr Georgiou. This raised the question of what precisely was the status of sovereign debt in the EU, and particularly the eurozone, an issue policy-makers tried to resolve through the no bail-out clause of the 1992 Maastricht treaty. “Policy-makers were attempting to signal to financial investors that there would be no bail-out in situations where Member States were unable to repay public debt. The central bank would not make sure that they would get repaid,” says Dr Georgiou.
Many financial investors did not view this as a credible position and the Maastricht agreement was called into question, sparking debate about how to essentially recreate the central bank-treasury nexus at the EU level. The corporate community played a major role in pushing for fiscal centralisation, as it did previously in making the case for the founding of the ECB, argues Dr Georgiou. “Some economists say that the ECB has taken on the role of lender of last resort. My argument is that that has happened because large financial corporations were pushing for a solution that ultimately meant the ECB had to step into the breach,” he says. Parallels can be drawn here with the 20th century transformation of the American government in terms of macroeconomic policy, which Dr Georgiou says was driven by the rise of corporate capitalism. “This is a stage in the history of capitalism where the economy is no longer comprised of many small scale businesses, but is rather dominated by a handful of oligopolistic large corporations,” he explains.
The rise of corporate capitalism is also driving the process of European integration, argues Dr Georgiou, and ultimately the macroeconomic problems the EU faces boil down to that transformation. This project has been relatively small in scale; now Dr Georgiou plans to write a book building on his findings, as well as to conduct research into the role of large European corporations around the Maastricht deal. “I’m particularly interested in exploring how the German government came to back the single currency. The proposal for a single currency essentially came from Rome and Paris, and Berlin was the capital that had to be persuaded to go along with it,” he says. Evidence suggests that large private banks in Frankfurt played the crucial role in persuading then-chancellor Helmut Kohl to sign up to the Maastricht treaty, and now Dr Georgiou plans to dig deeper. “I want to uncover archival evidence,” he says.
Christakis Georgiou
Some economists say that the ECB has taken on the role of lender of last resort. My argument is that that has happened because large financial corporations were pushing for a solution that ultimately meant the ECB had to step into the breach.
Unlocking the potential of thermoelectric materials
Thermoelectric materials could play a major role in addressing energy sustainability concerns. The UncorrelaTEd project aims to break the relationship between the Seebeck coefficient and electrical conductivity, which is key to improving the efficiency of these materials and unlocking their wider potential, as Dr Jorge García-Cañadas explains.
The development of alternative sources of energy is widely recognised as a major priority, as countries around the world seek to address sustainability concerns and reduce their dependence on fossil fuels. More than 60 percent of power generated across the world is lost as waste heat, with virtually every industry contributing to some extent. It is estimated that more than 2,100 gigawatts of wasted energy were generated in the US alone in 2020, while it is thought that around 17 percent of the energy used in European industry is lost as waste heat. This represents a huge amount of energy and a vast potential resource to be tapped into. Recovering even just 10 percent of this waste heat would exceed the sum total of most current renewable energy sources (solar, wind, geothermal, and hydro energy), while there are abundant potential sources. In addition to waste heat, heat sources such as the sun or even our own bodies can be used to generate electricity.
This is where thermoelectric materials come into play. Thermoelectric materials can directly convert heat into electricity under safe, clean, and environmentally friendly operation by using temperature differences to produce an electrical current.
“When you have a temperature difference across a material it can be converted into a voltage, then used to generate electricity,” explains Dr Jorge García-Cañadas, Principal Investigator in the Thermal and Electrical Systems Laboratory at Universitat Jaume I. However, the adverse relationship between the Seebeck coefficient – a measurement of the voltage that can be induced in a material per each degree of temperature difference – and the electrical conductivity of thermoelectric materials limits their overall efficiency. “The Seebeck coefficient and the electrical conductivity are connected. If you increase one then the other decreases,” continues Dr García-Cañadas.
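This trade-off is usually quantified through the standard dimensionless thermoelectric figure of merit, zT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity and T the absolute temperature. The short sketch below uses illustrative values only, not measurements from the project, to show why the coupling matters: if raising σ lowers S, the figure of merit need not improve.

```python
def figure_of_merit(seebeck, conductivity, thermal_conductivity, temperature):
    """Dimensionless thermoelectric figure of merit zT = S^2 * sigma * T / kappa."""
    return seebeck**2 * conductivity * temperature / thermal_conductivity

# Illustrative values of the order often quoted for bismuth telluride
# near room temperature (assumptions, not project data):
S = 200e-6      # Seebeck coefficient, V/K
sigma = 1.0e5   # electrical conductivity, S/m
kappa = 1.5     # thermal conductivity, W/(m K)

zT = figure_of_merit(S, sigma, kappa, 300.0)
print(round(zT, 2))  # -> 0.8

# If doping doubles sigma but, as is typical in a correlated material,
# halves S, the figure of merit drops rather than improves:
zT_doped = figure_of_merit(S / 2, 2 * sigma, kappa, 300.0)
print(round(zT_doped, 2))  # -> 0.4
```

Breaking the correlation between S and σ, as UncorrelaTEd aims to do, would mean a material could keep a higher conductivity without paying the full penalty in the Seebeck coefficient.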
UncorrelaTEd project
This issue is central to Dr García-Cañadas’ work as the Coordinator of the UncorrelaTEd project, an initiative bringing together six partners from across Europe. The primary aim in the project is to essentially break this relationship between the Seebeck coefficient and electrical conductivity, which could open up wider possibilities in the application of thermoelectric materials, moving them beyond niche applications towards the mainstream. “The idea is to develop a system to increase one without affecting the other, which will allow us to increase the performance of thermoelectric materials,” says Dr García-Cañadas. UncorrelaTEd researchers are developing a novel solid-liquid hybrid device, where a porous thermoelectric material is impregnated by a liquid electrolyte (a salt dissolved in a solvent). They are investigating how electrolytes can be used to improve the properties of the solid material. “Our main role is in preparing the devices and optimising the electrolytes,” outlines Dr García-Cañadas. “Initially we focused on understanding earlier results, where we used electrolytes to increase both the Seebeck coefficient and electrical conductivity in a material.”
There is waste heat all around us, from our own bodies to geothermal energy, representing a vast source of potential energy. Improving the efficiency of thermoelectric materials is key to harnessing this potential.
A variety of electrochemical techniques and deposition methods have been used to probe the underlying factors behind these results, and the lessons learned will then be applied on three different families of materials. One of these families is bismuth telluride alloys, which are currently the most effective thermoelectric materials at room temperature, while Dr García-Cañadas is also working with oxides and conducting
polymers, looking to build on the earlier work. “The preliminary results were gained in a material with fairly limited thermoelectric properties. So even though we achieved significant improvements, we still didn’t reach very high overall efficiencies. The idea in the project is to use materials with good initial thermoelectric properties, and then to improve them further through interactions with the electrolytes,” he explains. “We
says they and polymers have certain other beneficial properties. “For example, oxides are usually more chemically stable and less sensitive to corrosion. They also tend to be cheaper and more abundant, while polymers are usually easy to process and are fairly flexible,” he says. Solid porous films of these different materials have been prepared by partners in the project, which was a technically challenging task. “The
think that the strategy we are developing in the project can be applied widely to different materials.”
This will now be tested on these three families of materials, which all have different properties of interest in terms of their potential application as thermoelectric materials. While oxides do not have the same thermoelectric performance as bismuth telluride alloys, Dr García-Cañadas
nanoparticles were prepared, then they were combined together to make a film,” explains Dr García-Cañadas. “This is essentially like a network of nanoparticles, and they need to be robust and mechanically stable.”
The solid materials have now been prepared, with a thickness of just a few microns, and Dr García-Cañadas is now starting to test the impact of different electrolytes on the Seebeck coefficient and electrical conductivity. The starting point is the electrolytes that worked effectively in the initial experiments, then researchers plan to explore various different options. “We have a couple of interesting families of electrolytes, and we will look to assess the effectiveness of different combinations,” says Dr García-Cañadas. The earlier results were gained with different salts dissolved in solvents, while researchers are also investigating certain ionic liquids and modifying the electrolytes in different ways. “We can try to understand the influence of the cations and the anions for example, and to modify the concentrations. We can play around with different parameters to see which solutions are the most effective,” continues Dr García-Cañadas.

The idea in the project is to use materials with good initial thermoelectric properties, and then to improve them further through interactions with the electrolytes.
Material efficiency
This work is currently ongoing, with researchers still working to improve the efficiency of thermoelectric materials, which is ultimately key to the prospect of wider adoption. While thermoelectric materials are currently used in some areas where reliability is more of a priority than cost, for example in radioisotope thermoelectric generators to provide power in space missions, they are not yet widely applied as energy harvesters. “They are not yet widely used in energy harvesting, as efficiencies are still low. We need to get better materials with improved properties,” outlines Dr García-Cañadas. This would help make thermoelectric materials a more viable option for certain applications; Dr García-Cañadas believes they could play an important role in the Internet of Things (IoT) for example. “With the rapid growth of the IoT a lot of sensors will be required to connect different ‘things’, and these sensors will need power,” he points out.
The project’s work is still at a very early stage however, with researchers focused more on exploring these ideas and working towards a proof-of-concept than looking towards potential applications. Rather than looking for highly compact materials, researchers in the project are preparing highly porous films, which Dr García-Cañadas says is not a common approach in the field. “This is one of the project’s main contributions in terms of materials development,” he says. The aim now is to build on the earlier findings and improve thermoelectric efficiency in the three families of materials. “We expect to achieve improvements in some of these materials. We will measure efficiency and then we can move towards a proof-of-concept,” continues Dr García-Cañadas. “If we do identify promising families of materials, or families of electrolytes, then they could be explored further in a continuation project.”
As mentioned earlier on, there is waste heat all around us, from our own bodies to geothermal energy, representing a vast source of potential energy. Improving the efficiency of thermoelectric materials is key to harnessing this potential, a topic that Dr García-Cañadas plans to explore further in future. “We need to do more experiments and to identify the optimal electrolytes. We are hopeful that there will be some form of continuation project beyond UncorrelaTEd,” he says. Some clear results, indicating the potential of certain electrolytes and materials to improve efficiency, will provide a solid foundation for further research. “Once we reach this point in the laboratory, we can then look to move forward,” says Dr García-Cañadas.
UncorrelaTEd
Solid-liquid thermoelectric (TE) systems with uncorrelated properties
Project Objectives
The UncorrelaTEd project aims to develop a new device concept to efficiently convert heat into electricity. The new concept is based on the combination of a thermoelectric solid material with a tactically designed electrolyte (a liquid with ions). This combination is expected to lead to a powerful technology with unprecedented efficiencies.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 863222.
Project Partners
• Universitat Jaume I
• IREC
• KTH Royal Institute of Technology
• University of Warwick
• Solvionic
• Specific Polymers
http://uncorrelated.uji.es/partners/
Contact Details
Project Coordinator, Dr. Jorge García-Cañadas
Edificio de Investigación 1
Universitat Jaume I
Av. Vicent Sos Baynat s/n
12006 Castelló de la Plana
Spain
T: (+34) 964 38 7417
E: garciaj@uji.es
W: http://uncorrelated.uji.es/
Jorge García-Cañadas leads the Thermal and Electrical Systems Laboratory at Universitat Jaume I, a role in which he conducts research into thermoelectric devices and electrochemistry. He has worked on a number of research projects, and has contributed to more than 50 articles in international journals.
The LHC delivers an enormous number of proton-proton collisions to the different experiments, but these collisions typically correspond to lower-energy physics processes (black line). It is not possible to record every collision, as this would produce an impossibly large amount of data for the experiments to process and store. Instead, the experiments preferentially select, or trigger, on rare occurrences. Typically, this means that the experiments record all of the high-energy collisions, and only a small number of low-energy collisions (blue area). In contrast, by focusing on the collisions which happen to have been recorded together with the triggering process, data is collected across the spectrum; this data is most relevant at low energy (red area). The exact point at which the blue and red lines cross is constantly evolving with the LHC collisions.
The amount of energy deposited in a given geometric region of the detector can then be summed, where the most common result is the production of two regions of significant energy deposition per collision, inferred to originate from two particles. Due to the selections applied as per the figure to the left, there is typically one collision that is very high energy in each recorded event, and the others are lower in energy. However, these lower energy collisions are numerous, and may be the key to discovering new physics.
Discovering New Science in the Noise from LHC Experiments
Deep underground, at the CERN facility beneath the French-Swiss border, is the renowned Large Hadron Collider (LHC), home to CMS and ATLAS, two immense detectors designed to detect new particles, behaviours and laws of physics that complement and stretch the Standard Model. When in action, they can create more data than is reasonable to analyse.
“The LHC collides protons at an astounding rate – there are more than 2 billion collisions occurring per second. Every one of those collisions may be producing some interesting new particle, but we can’t possibly store everything, it’s simply too much data!” explained Schramm.
Finding needles in the haystack
In 2012, researchers working with the LHC discovered the Higgs boson, a fundamental particle that gives mass to other fundamental particles like electrons and quarks. It was a breakthrough discovery at the time, made possible only by pushing technological and experimental limits.
Whilst the results were what was hoped for and expected, the team on the Data Interpretation Strategy for COmplete Vertex Event Reconstruction in High Energy Physics (DISCOVERHEP) project is revisiting the raw data from the experiments that was not previously scrutinised. Specifically, they will analyse the low-energy regime in the noise – the huge amount of data that has already been recorded but has been ‘ignored’ to date – which, they theorise, has the potential to harbour scientific discoveries that were not originally being searched for.
“We are colliding roughly 60 collisions every 25 ns at the LHC. Each of those sets of 60 collisions is referred to as an event, or one snapshot of the detector, just like taking a picture with a very complex camera. In this event, we further discard all of the collisions except one, which is deemed to be the one interesting high-energy collision. The other 59 collisions are ignored, even though they are part of that same picture. It’s like taking a group picture, only to look at the tallest person; it may be where your eyes go first, but there are lots of others around who have their own stories to tell.
“My project flips this choice around: why not discard the highest-energy collision and look at all the low-energy collisions? This provides a huge dataset, which can be used to look for the rare production of new particles.” This novel search strategy could power a new era of discoveries in physics.
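The selection flip described above is simple to state in code. The sketch below is a hypothetical illustration (the function name and flat list-of-energies event format are invented for clarity, not the ATLAS data model): given the reconstructed collisions in one recorded event, a conventional analysis studies the highest-energy one, while this strategy keeps everything else.

```python
def split_event(collision_energies):
    """Split one recorded event into the triggering high-energy collision
    and the remaining low-energy 'pile-up' collisions.

    collision_energies: list of per-collision energy sums (arbitrary units).
    Returns (trigger_energy, other_energies).
    """
    if not collision_energies:
        return None, []
    trigger = max(collision_energies)            # the collision a trigger selects
    idx = collision_energies.index(trigger)
    others = collision_energies[:idx] + collision_energies[idx + 1:]
    return trigger, others

# One toy event: several simultaneous collisions, one of them high energy
event = [5, 3, 8, 750, 12, 6, 4]
trigger, low_energy = split_event(event)
print(trigger)     # 750 -> what a conventional analysis studies
print(low_energy)  # [5, 3, 8, 12, 6, 4] -> what this strategy studies
```

With roughly 60 simultaneous collisions per event, discarding one and keeping the rest turns each recorded event into dozens of low-energy collisions, which is why the resulting dataset is so large.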
Matter matters
Whilst the Standard Model is the best proposed ‘model of matter’, describing what particles make things and how those particles interact, there remain open questions and missing pieces to the scientific puzzle.
“The Standard Model is essentially a list of the particles that we know exist, and a list of how those particles can interact with each other,” said Steven Schramm. “Together, this gives us the building blocks that make up the world around us. While it is a fantastic tool, and it describes the world around us in our daily lives, the Standard Model is unable to describe the vast majority of our universe. All of the particles that we know about represent only five per cent of the universe, with the rest being labelled as ‘dark matter’ and ‘dark energy’ because we don’t yet know exactly what they are. We know with certainty that there is something else out there, but the Standard Model does not provide any answers.”
Steven Schramm and his team of researchers on the EU-funded DISCOVERHEP project are conducting an analysis of previously ignored ‘noise’ data of experiments with the Large Hadron Collider (LHC), searching for evidence of new physics in low-energy interactions.
The cylindrical structure of the detector can be unwrapped in terms of its azimuthal coordinate and the pseudorapidity, where the latter is a transformed version of the polar coordinate.
A typical day of proton-proton collisions at the future High-Luminosity LHC, where there are 200 simultaneous proton-proton collisions delivered to the ATLAS and CMS Experiments, leading to a very complex picture of what has taken place. A single collision is highlighted in the inset, where cyan shows the charged particles directly originating from the collision, yellow lines indicate charged particles which come from the decays of particles from the same collision, and pink lines indicate charged particles that come from the other simultaneous collisions. The current LHC “only” produces roughly 60 simultaneous collisions, but this number increases regularly.
Searching for low energy collisions
The project team, made up of five researchers, grasped the opportunity to investigate low-energy collisions at high rates using data supplied by the LHC.
“It is entirely possible that new physics is hiding at lower energy, but there are never any guarantees when looking for the unknown. There are reasons to believe that dark matter may lie in this energy regime, and there are other theories of new physics that could be present at low energy. This is why we are following a more generic approach: rather than relying on a specific interpretation of what new physics may look like, we are conducting searches that are as generic as possible; if we see something, we will then have to figure out what it is.”
Looking further forward at what any discoveries may lead to, solving the biggest scientific mysteries normally translates to innovations that benefit us, whilst broadening our knowledge.
“The gap between discovering new physical laws and a subsequent application may be long, but human ingenuity is strong. Discovering electromagnetism has led to our modern world, the weak nuclear force plays a significant role in radiation therapy, and the strong nuclear force will hopefully lead to clean energy through nuclear fusion. It is difficult to predict what the discovery of a new force would imply. Similarly, dark matter is five times more abundant in the universe than normal matter: understanding more about what it is may have profound implications that we cannot begin to guess at right now.”
DISCOVERHEP
Turning noise into data: a discovery strategy for new weakly-interacting physics
Project Objectives
The project involves looking at the dataset delivered by the Large Hadron Collider, and recorded by ATLAS, in a new way. Rather than focusing on the primary collision that caused the event to be recorded, that collision is discarded; all of the other simultaneous collisions, typically discarded as noise, are studied instead. This provides an enormous dataset of hadronic interactions, which may be the key to discovering rare new physics processes.
Project Funding
This article is part of a project that has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 948254) and from the Swiss National Science Foundation (SNSF) under Eccellenza grant number PCEFP2_194658.
Project Participants
• Antti Pirttikoski
• Carlos Moreno Martinez
• Mário Alves Cardoso
• Vilius Čepaitis
Contact Details
Project Coordinator, Professor Steven Schramm
Department of Nuclear and Corpuscular Physics
University of Geneva
T: +41 22 379 63 68
E: steven.schramm@unige.ch
W: https://www.unige.ch/dpnc/en/members/actual-members/s/steven-schramm/
Prof. Schramm received his PhD from the University of Toronto in 2015. He subsequently joined the University of Geneva as a post-doctoral researcher, and after obtaining an ERC Starting Grant and SNSF Eccellenza Professorial Fellowship, he became a professor in 2021. His research involves searches for new physics in high-energy particle collisions at the Large Hadron Collider, with an emphasis on hadronic interactions, machine learning, and advanced data analysis.
DISCOVERHEP is embracing a completely new approach to looking at data, which itself presents practical challenges. The team use machine learning to improve their understanding of the data and speed up complex calculations. It can detect the unexpected, through techniques such as anomaly detection or complex classification algorithms, and it helps mitigate inherent biases about what new physics might look like. This highly exploratory approach does not presume or assume answers.
The team has processed the full dataset and demonstrated that it is suitable for the intended purpose: searching for low-energy, weakly-interacting hadronic physics. The exciting next step is to find out whether there is evidence of new physics.
“We know that there is more to the Universe, and the key may be hiding in the LHC data, in a direction we just haven’t thought of yet.”
It is entirely possible that new physics is hiding at lower energy, but there are never any guarantees when looking for the unknown. – Prof. Steven Schramm

ATLAS Experiment © 2019 CERN
Life course research on formerly incarcerated men
Worldwide, at least one third of released individuals return to prison within two years. We spoke to Dr Anke Ramakers about her work in analysing the life courses of a large group of released prisoners in the Netherlands. She aims to gain deeper insights into the role of employment during prisoner re-entry into society and its relationship with reoffending.
While released prisoners often express a strong desire to get a job and confidence in their ability to do so, the figures tell a different story. “Most released prisoners are not able to find employment. I’m trying to look at the underlying reasons behind this,” outlines Dr Anke Ramakers, an Assistant Professor in the Department of Criminology at Leiden University. This research involves analysing data gathered on a group of prisoners prior to their incarceration, during it, and up to five years following release. These individuals participated in the Prison Project (https://www.prisonproject.nl/), a unique nationwide effort to collect longitudinal interview and administrative data among male pretrial detainees.
Role of employment
Dr Ramakers is now analysing these data, looking to gain deeper insights into the reasons behind the low employment rate. “When you only distinguish between people who are working and who aren’t, you are not really understanding the problem,” she says. The aim is to probe the different reasons behind joblessness among ex-prisoners, for example whether they are not allowed to work, are unable to, or simply lack the desire to find employment, which could then inform policy.
While in some countries the stigma of a criminal record is a major barrier to finding a job, Dr Ramakers says this is less of an issue in the Netherlands. “Criminal history is not accessible to everyone; potential employers don’t automatically have access to this information and most do not request that employees hand over a certificate of conduct. A lot of the individuals concerned report that they are actually pretty open about their criminal past. Some individuals may be inclined to withhold such information in order to increase their chances of gaining employment, but this stigma management does not affect their employment prospects.
“When I compare people who are open about their criminal records to people who aren’t I don’t find significant differences in employment outcomes,” continues Dr Ramakers. “What seems more important is
whether you’re physically and mentally able to work and your level of experience.”
A significant proportion of released prisoners did not have a job before they were imprisoned and struggle to find one afterwards, yet they still of course need to make ends meet. Without a steady, reliable income from formal employment, many ex-prisoners tend to get by through ‘packaging’ their income from several different sources, a new perspective Dr Ramakers is exploring in the project. In many cases ex-prisoners make ends meet not through formal employment, but rather by combining income from different sources, such as public assistance, informal work and illegal activity.
Public assistance is a stable source of income for many formerly incarcerated men. A growing body of literature indicates that this ‘safety net’ improves outcomes both for individuals who have been involved in the criminal justice system and for society at large. Yet public assistance is typically not regarded as an aspect of integration, certainly not in the same way as employment; receiving benefits can be seen as a sign of failure and a work disincentive. Dr Ramakers posits a different view, especially for the individuals who are the focus of attention in this study, as many of them wish to remain out of sight of the government and are unlikely to end up in stable employment. “A willingness to turn to the social safety net could be viewed as a possible sign of re-entry success, and steady income can weaken the temptations of illegal income,” she says. Dr Ramakers therefore encourages re-entry policies that look beyond employment.
A willingness to turn to the social safety net could be viewed as a possible sign of re-entry success, and steady income can weaken the temptations of illegal income.
Reducing re-offending
The wider backdrop to the project is concern over re-offending rates and a desire to bring them down, which would bring significant social and economic benefits. To this end, Dr Ramakers examines whether labour market integration matters for reoffending. Most prior studies used somewhat crude measures of either employment or unemployment. This does not match the more complex reality; workers can end up in a wide range of jobs, and non-workers can spend time trying to find a job, but can also be disabled or in school. One takeaway from Dr Ramakers’ international research collaborations is that the type of job and the type of non-employment matter. For instance, stable employment is associated with lower offending rates, while informal work is associated with higher reoffending rates. The findings underscore the importance of a clear definition of labour market status in research and policy that aligns with the actual experiences of these individuals.
The project’s research could help policymakers target resources effectively. With her research Dr. Ramakers hopes to contribute to the development of knowledge-based, realistic policies. Here she mentions the
often disappointing effects of ambitious (employment-related) re-entry programs. Her research sheds light on alternative strategies that might be more effective. In a related avenue of research, Dr Ramakers intends to investigate the question of the areas of the job market in which formerly incarcerated men are most likely to find a job. “It doesn’t seem like a difficult question but we don’t really know the answer at this point,” she acknowledges.
The idea is to ensure that any educational or occupational courses a prisoner undertakes while behind bars are aligned with their own experience and potential job opportunities once they are released. “I want to look at how they use the new knowledge they have gained when they return to society,” says Dr Ramakers. Too often policies and interventions are developed but not properly evaluated over a longer period of time. Dr. Ramakers encourages such evaluations in combination with a broader study of the life course of individuals who have been involved in the criminal justice system. Currently, we do not make optimal use of the available information, but life course research can reveal more alternative pathways towards post-release success.
IN A JOB, OUT OF TROUBLE?
Causes of joblessness after imprisonment and the consequences of employment outcomes for post-release reoffending
Project Objectives
Common belief and criminological theories suggest that employment can lower reoffending, and policy efforts should be directed towards connecting ex-prisoners to jobs. Yet, there is hardly any research among formerly incarcerated individuals that tests whether employment is feasible for this serious offender group and whether and why employment can represent a turning point.
Project Funding
This work was financially supported by a grant from the Netherlands Organization for Scientific Research (451-17-020).
Contact Details
Project Coordinator, Anke Ramakers
Assistant Professor of Criminology
Department of Criminology, Leiden University
Kamerlingh Onnes Building
Steenschuur 25, 2311 ES Leiden
T: +31 71 527 7362
E: a.a.t.ramakers@law.leidenuniv.nl
W: https://www.universiteitleiden.nl/en/ staffmembers/anke-ramakers#tab-3
Apel R. & Ramakers A.A.T. (2018), Impact of incarceration on employment prospects. In: Huebner B.M. & Frost N.A. (red.), Handbook on the Consequences of Sentencing and Punishment Decisions. ASC Division on Corrections & Sentencing Handbook Series nr. 3. New York: Routledge. 85-104.
Ramakers A., Aaltonen M. & Martikainen P. (2020), A closer look at labour market status and crime among a general population sample of young men and women, Advances in life course research 43: 100322.
Nguyen H., Kamada T. & Ramakers A. (2020), On the Margins: Considering the Relationship between informal work and reoffending, Justice Quarterly 39(2): 427-454.
Ramakers A. (2020), Geen VOG, geen werk? Een studie naar VOG-aanvragen en werkkansen na vrijlating, Recht der Werkelijkheid 41(1): 8-24.
Ramakers A.A.T. (2022), Secrecy as best policy?: Stigma management and employment outcomes after release from prison, British Journal of Criminology 62(2): 501-518.
Dr. Ramakers studies the re-integration of criminal justice involved individuals, with a specific focus on income-related questions. She is Assistant Professor of Criminology at the Institute of Criminal Law and Criminology at Leiden University. Part of her research concerns the relationship between (non-) employment and crime. She also examines the effects of imprisonment and social policies on employment and crime.
New tools to widen the appeal of the opera
The process of co-creation is central to widening participation in the opera, giving citizens a role in the development and production of new works. The Traction project developed new technologies and used co-creation to involve citizens in everything from developing stories to creating costumes to performing on stage, which will help renew opera as an art form, as Mikel Zorrilla explains.
The opera is an integral part of European culture, a dramatic, passionate art form with the power to move and inspire, yet it needs to reflect different voices across society if it is to retain its relevance. This issue lies at the heart of the Traction project, an EU-funded initiative which aims to engage communities in producing and performing opera, and in the process to widen participation. “We believe that co-creation is a key word in terms of widening participation. We aim to involve different people and communities in opera,” explains Mikel Zorrilla of Vicomtech, a partner in the project. People in some communities may not previously have felt opera was for them, an issue Zorrilla and the whole Traction consortium are working to address. “We are using this co-creation approach across three different trials in the project,” he says. “We have created an environment where opera professionals and people from different communities – El Raval in Barcelona, young inmates at a prison in Leiria in Portugal and a range of communities in Ireland – work together.”
Co-creation process
These types of communities are all at risk of exclusion from the opera, which is commonly viewed as being an elitist, inaccessible art form, despite its popular roots. In the case of El Raval, the Liceu opera house sits right in the heart of the neighbourhood, yet many local people don’t really think of attending. “Many local residents don’t go there, they think operagoers aren’t from El Raval,” says Zorrilla. One aim in the project is to challenge this perception through co-creating three exploratory operas, which had several stages. “In the first phase of the process we established a dialogue with
people in the community. Then we co-created the operas, rehearsed and produced the operas, and then finally we performed them,” outlines Zorrilla. “This community dialogue and co-creation has been done differently across the three different trials. For example, in Barcelona the librettist Victoria Szpunberg conducted a lot of interviews with different people across the Raval, listening to them and hearing their stories.”
This provided inspiration for the libretto that Victoria Szpunberg composed for the Barcelona opera, La gata perduda (The Lost Cat), while a different approach was adopted in Leiria. Regular sessions were held with the inmates in accordance with the prison schedule, following which the libretto was developed; while it might be expected that opera professionals would take most of the responsibility here, Zorrilla says the inmates played a full part. “The inmates decided who the composer would be and what the story would be about,” he says. The professionals have far more experience in writing and performing operas, yet they were being asked to cede a degree of control to people in these communities, so it was very important to set clear expectations. “People in these communities may not have been aware of what they were involving themselves in when they agreed to participate in the co-creation process,” says Zorrilla. “So their expectations about what they were going to be asked to do, and what the outcome was going to be, were very important.”

We have created an environment where opera professionals and people from different communities – El Raval in Barcelona, young inmates at a prison in Leiria in Portugal and a range of communities in Ireland – work together.

The same holds true for opera professionals accustomed to high standards and commitment from everybody involved in a performance. In Leiria, the professionals would have liked to rehearse a lot more for the performance of O Tempo (Somos Nós) (Time (As We Are)) than proved possible, while sometimes inmates couldn’t attend due to disciplinary issues. “The nature of working in a prison meant it was essential to have a plan B,” acknowledges Zorrilla. The Co-creation Stage, one of the digital tools developed in the project, played an important role in this respect, helping connect people in different locations. “The co-creation stage allows people to perform together from different locations at the same time,” explains Zorrilla. “For example, we had a main stage in Lisbon for the performance. Some of the inmates were allowed to travel there with some guards, relatives and professionals. We also had a secondary stage in the prison, for those that were not allowed to travel, but this allowed them to be part of the performance.”
Two new digital tools
A second digital tool developed in the project was the Co-creation Space, which was
designed to promote the co-creation process via social media. This tool played an important role in the third trial within Traction, which brought together several communities from different areas of Ireland.
“During the Covid lockdowns the co-creation space was used a lot for workshops to work on the composition for the opera, called As an nGnách (Out of the Ordinary).
The composer Finola Merivale was based in New York, while communities at different locations in Ireland were working together on the composition,” outlines Zorrilla. There is wider interest in these digital tools, so Zorrilla says the impact of the project’s work will be felt beyond the end of the funding term. “We have explored other scenarios beyond opera, for example participatory theatre for remote audiences with the co-creation stage,” he says.
The VR Opera, led by Irish National Opera (INO), will be watched by audiences through virtual reality headsets. While this will be a highly immersive experience, bringing audiences closer to the characters and the music, people may also want to socialise and share their opinions of the performance, an issue which Zorrilla says has
been taken into account. “We have tested some exploratory formats with users, so that watching a VR opera is not an isolating experience. There is a kind of shared lobby like you have in an opera house where you can meet with friends, share opinions and learn more about the opera together,” he says. This is all part of demonstrating that opera does matter and showing how co-creation can be used to create inspiring works that resonate with audiences; some key guiding principles have been identified in this respect. “The principles of awareness, equality, ambition, responsiveness, passion and hopefulness are at the heart of co-creation,” says Zorrilla.
These principles underpin the book and policy recommendations that have been produced in the project, while the partners are exploring ideas for new operas based on co-creation, following on from La Gata Perduda’s success in winning best musical at the Max Awards. This will help establish a lasting legacy from the project’s work. “At Vicomtech we are working to improve the co-creation stage and the co-creation space, and each of the consortium members is thinking about how to continue with our research and our work,” says Zorrilla.
TRACTION
Opera co-creation for a social transformation
Project Objectives
Traction aimed to contribute to opera’s renewal as a territory of cultural and social inclusion by moving from the limited policy of cultural democratisation towards the more demanding idea of cultural democracy. This involves finding new ways for people at risk of social exclusion to co-create opera performances with professional artists, telling stories that are important to them, and reconnecting the form with its socially progressive potential.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 870610.
Project Partners
https://co-art.eu/about-us/who-we-are
https://www.traction-project.eu/consortium-2/
Vicomtech has been the coordinator of the Traction project.
Contact Details
Mikel Zorrilla
Project Coordinator of Traction
Head of Digital Media at Vicomtech
Mikeletegi Pasealekua 57, Donostia-San Sebastian (Spain)
T: +34 943 30 92 30
E: mzorrilla@vicomtech.org
W: www.vicomtech.org
W: www.traction-project.eu
W: www.co-art.eu
W: https://co-art.eu/over-to-you/resources
Mikel Zorrilla is the director of the Digital Media & Communications department at Vicomtech and has led and participated in several national and international research and innovation projects. He previously worked as a researcher at Ikerlan S. Coop., and has been at Vicomtech since 2007. Dr. Zorrilla has also been an associate professor at Deusto University and the Universitat Oberta de Catalunya.
Reconsidering Western European Drama in the Light of Puppet and Marionette Plays
Didier Plassard and Carole Guidicelli of the PuppetPlays project are painstakingly piecing together the fragmented evidence, manuscripts and source material for puppet and marionette plays from across Western Europe, from 1600 to today, to create an open-access online platform.
Puppet shows are widespread in Europe and have been for centuries. Despite this, very little research has been carried out on this unique art form, meaning that puppet stories have become the most mysterious, misunderstood and ignored of play formats.
Didier Plassard explained: “Puppet and marionette plays have been invisible because many of the text materials are unpublished or, when they were published, very few copies were made. Puppetry itself was for a long time despised as an art form, considered as entertainment for low classes or for children. Even if there is now more academic research in this field, it is mainly focused on the visual dimension of puppet theatre and neglects its dramaturgy.”
To complicate the challenge of piecing together a collection, many of the creators of puppet plays are not authors in the traditional sense and plays can develop in different ways, through practice and passing on the story as an oral tradition. Furthermore, the manuscripts that do exist for many classic puppet shows have been scattered throughout Europe.
“Many manuscripts are available in museums, libraries and private collections, but with great differences between the European countries. For instance, you can easily find thousands of manuscripts from the 19th and 20th centuries in big and small Italian cities. That is because the famiglie d’arte (families of artists) collected them and passed them down from one generation of puppeteers to the next. A great deal of resources can also be found in France and Germany, where some public institutions keep large collections of printed and unpublished texts.
“But there is a gap between the quantity of resources available in these countries compared to Great Britain, Spain, or Portugal where most of the ancient puppet plays have been destroyed or forgotten,” elaborated Plassard.
Often, original plays are written on poor materials that easily degrade and are too delicate to access repeatedly, such as those kept in the Museo Internazionale delle Marionette Antonio Pasqualino in Palermo.
These manuscripts, dating from the early 20th Century, were written in school notebooks with poor quality paper and had become incredibly fragile to the touch.
It has not been a simple undertaking. Plassard and his research team discovered that countries had varying degrees of information available on puppet plays and finding these plays around Western Europe and in history could prove challenging.
PuppetPlays’ first mission has been to locate manuscripts, storyboards, scripts, and rare books and pull them together as digital assets.
Then, together with his team, Plassard has described them in an accessible online database, searchable with keywords or key references. Each puppet play has a detailed description: the title of the play and the name of the author, as well as date,
editions or archiving location, the context in which it was written and performed (as far as can be ascertained), genre, plot summary, characters, puppet manipulation techniques, dramatic devices and stage techniques, thematic keywords. Queries in the PuppetPlays database can be made with filters, a timeline, or an interactive map of Europe.
With perseverance and persistence, the collection now consists of around 500 pieces, with an aim to collect and describe 1,000. From studying these art forms there are many insights that reveal themselves about the nature, value and craft of this unique method of storytelling.
More Than Words
Puppet plays encompass a spectrum of techniques and styles. There are many types of puppets: glove puppets, string marionettes, rod marionettes, shadow puppets, combined puppet-and-actor, and more.
The nature of puppet plays as a highly visual expression means that some plays rely on little or no scripting. As the project collects the plays in all their formats and guises, some have to be represented in ways other than script.
The PuppetPlays project emphasises dramaturgy (pointing out dramatic devices, stage effects, running gags…) and the story cues that have evolved and changed through different interpretations and productions. Collecting the stories can mean, sometimes, gathering audio-visual elements, storyboards and elements of the production – not just manuscripts – to understand and document the play.
“Some plays are very difficult to describe and we have to invent a methodology and a new terminology to convey this visual connection,” said Guidicelli.
The Value of Performances
A benefit of cultivating a central hub of plays, as an online searchable portal, is giving the puppet plays prominence and making them accessible both to experts and to the curious. It highlights to society the value they have as an art.
Puppet plays can have significance for expressing cultural identity, historical sentiments, rebellious societal tendencies, or religious narratives. Far from being childish performances, puppet plays are engrained with historical context and are highly visual expressions of the feelings of a time.
The use of puppets to tell stories can steer the narrative and the characters in very different ways from traditional theatre productions with actors, which makes them a unique and intriguing medium.
“Puppets can be used with very different repertoires and social views but, in whichever form they come in, they are magical instruments which increase the power of theatrical performances,” enthused Plassard.
Opera dei pupi, a Sicilian and South-Italian tradition of puppet shows using rod marionettes, drew its inspiration from the adventures of Charlemagne and his paladins. These stories were told over several months, in hundreds of daily episodes where the audience identified with the epic heroes Orlando, Rinaldo, Angelica, and many others.
A number of highly comic traditional figures give free rein to their violent instincts: the Neapolitan Pulcinella beats his opponents with a stick and tricks the executioner into hanging himself instead. (Photo by Alessandroga80, taken in Torino, Italy, 2012, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=23329127)
Many important playwrights in history have created works for puppet plays, unbeknown to most. For example, Portugal’s famous playwright of the 18th Century, António José da Silva – a Jew burned to death by the Inquisition – wrote all his comedies for marionettes.
Puppet plays have distinct styles of entertaining and telling stories, according to the animation techniques, the kind of audience and the performing spaces.
“In the 18th Century, glove-puppets were performed in streets and marketplaces, and rod and string marionettes in theatres,” said Plassard. “In the 19th Century, this began to change when local characters were invented.”
In London, a strong cultural bond developed between Punch and Judy performances and the audience. In France, Germany, Italy, Belgium, and Spain, such a bond was strengthened by the characters’ use of dialect or regional language (e.g. Guignol, Kasperl, Fagiolino, Tchantchès, Tia Norica).
The versatility of puppet formats makes them accessible and robust as a performance. They may have evolved over time but, at their core, they remain visually engaging, efficient and effective as a means of theatre.
The regional identity associated with the play was not the only connection with the
spectators, as the plays were designed to appeal to certain target audiences. In many popular traditions, themes of the plays would be revenge against the rich and powerful, the elite and controlling elements of society, such as judges, policemen and institutions.
“Puppet characters became heroes struggling against all these forces, so it was very easy for ordinary people to identify with them,” said Plassard. There is another aspect of puppets that plays a significant part in their success for storytelling: they can literally represent anything.
“I think that one of the strengths of puppetry lies in the possibility to mingle human and non-human characters: you can have animals, objects, and also abstractions, acting together and related together. Taking into account all ‘modes of existence’, as the philosopher Bruno Latour would say, is certainly one of the reasons why puppet theatre, nowadays, is getting closer to actors’ theatre, to the point that many contemporary plays are composed for living performers and for puppets,” underlined Plassard.
“Nowadays, more and more puppeteers are interested in writing or staging plays that explore the relationship between humans and animals or nature, tackling themes linked to the environment. Moreover, puppeteers, who are now very often visible on stage alongside the puppets and materials they bring to life, are raising new anthropological and ethical questions about the way in which humans inhabit the world,” added Guidicelli.
Bringing together the puppet play materials into one place makes it easier to appreciate the arc of the development of themes and characters over time.
The Darkness of Puppet Themes
“A number of highly comic traditional figures give free rein to their violent instincts: the Neapolitan Pulcinella beats his opponents with a stick and tricks the executioner into hanging himself instead, and even Death and the Devil cannot get rid of him; when the baby cries too much, the traditional English Punch throws him out of the window and hits Judy, because she blames him; the French Polichinelle in Louis Edmond Duranty’s plays fills his stomach, cheats, beats and kills... and often ends up being carried off by the Devil. Rather than characters, they are impulsive forces devoid of morality,” said Guidicelli.
Devils, crimes, cruelty, birthing and unrelenting beatings can be performed even for child audiences, with little concern about censorship. Puppet plays can allow darker, more disturbing themes to be communicated to wider audiences due to the ‘unreality’ and lack of realism of the characters.
“Because puppets have the ability to move easily from the inanimate to the animate and vice versa, they are also a remarkable instrument for depicting the limits of human experience, death or even ghosts. Puppets are incomparable at arousing what Freud called ‘the Uncanny’,” added Guidicelli.
In this respect, puppet plays are a highly unusual and unique way of storytelling.
Sharing Expertise and Insight
As well as compiling a centralised online hub of plays, the project organised two international conferences to bring the wider, fragmented community of puppeteers, historians and authors together with researchers to share their knowledge and art.
The first PuppetPlays international conference, Literary writing for puppets and marionettes in Western Europe (17th-21st centuries), took place in October 2021 at the University Paul-Valéry Montpellier 3. The second, at the same venue, was held in May 2023 and was titled Portrait of the puppeteer as author (17th-21st centuries).
“The first one was devoted to literary authors, playwrights for puppets, and examined their dreams – whether actually performed or not – of a specific dramaturgy for these instruments,” said Plassard. “The second had the opposite focus, being devoted to the plays composed by performers and artists themselves. This encompassed
traditional plays from oral tradition as well as contemporary creations or group collective writing and storyboards. Some artists were also invited to present their creative process and to discuss it with us.”
By drawing together materials and experts the project illustrates and celebrates the sheer diversity of puppet performances from around Europe through the ages.
Many puppet plays are linked to very old families and steeped in tradition, whilst modern and contemporary texts have often broken away from tradition and reinvented the art. There is growing interest among writers and performers, who understand that puppetry can add a new dimension to a story on the stage.
The fact that puppetry and plays using puppets have survived the test of time is a testament to their power. Today, puppets can be used as key components of stories in mainstream theatre, in best-selling productions like War Horse, for example. The future of this storytelling will see some exciting developments.
“Some writers are very interested in puppets and have a strong connection with puppeteers. Some others state that their play is meant for puppet theatre, but they don’t have any precise idea about what kind of puppets to use: they want to express something about frailty or fantasy and they think that puppets might be the appropriate instrument for this,” said Guidicelli.
There will no doubt be many more plays created for puppets to add to the growing database and there will be new methods, new technologies and new formats of puppet theatre.
“We already see robots being used in plays,” quipped Guidicelli.
Plassard summarised: “In past centuries, the puppet and marionette theatres were often the only kind of performing art that some audiences could attend: lower classes, countrymen, children, etc. But progressively, puppetry came to be considered a specific instrument with its own expressive potential, due to its visual qualities. Nowadays, it is very much a living performance, where every little choice of material is important, as is the way the performers manipulate the puppets, interact with them, etc. With puppets you can do anything and everything, you can make them fly, you can make them kill themselves, you can tear them to pieces... The renewal of puppetry results from the mixture of actors and puppetry together on stage. Puppetry has become a means of expression in the same vein as dance or the circus; it is a highly visual way of communicating through a performance.”
Ingeniously, PuppetPlays will harness the collective power of the crowd by bringing together communities of users to help in the collaborative transcription of uploaded handwritten documents in order to complement the database with a multilingual anthology.
The open-source online platform will be bilingual (French and English). It will consist of the database, the anthology, an A-to-Z directory of authors, a presentation of animation techniques, and the academic productions of the project.
PuppetPlays
Reappraising Western European Repertoires for Puppet and Marionette Theatres
Project Objectives
The PuppetPlays project objective is to analyse Western European plays for puppets and marionettes, from 1600 to 2000, in order to examine how playwrights and puppeteers progressively developed a specific dramaturgy for this medium.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 835193.
Project Partners
https://puppetplays.www.univ-montp3.fr/en/team
Contact Details
Principal investigator: Didier Plassard, didier.plassard@univ-montp3.fr
Project officer: Carole Guidicelli, carole.guidicelli@univ-montp3.fr
T: +33 (0)4 11 75 71 84
E: claire-marine.parodi@univ-montp3.fr
W: https://puppetplays.www.univ-montp3.fr/en
Didier PLASSARD is full professor in Theatre Studies at the Université Paul-Valéry Montpellier 3. His research fields include avant-garde theatre, stage direction, dramaturgy, multimedia, and puppetry. He has been the Principal Investigator of PuppetPlays since October 2019.
Main distinctions: Prix Georges Jamati d’esthétique théâtrale (1990), Sirena d’oro (Arrivano dal mare!, 2012), Chevalier des Arts et des Lettres (2015).
Carole GUIDICELLI is a Doctor in Theatre Studies (University Paris 3 Sorbonne Nouvelle). She is a teacher, a researcher, and a dramaturge. She has been working as a Research Engineer for the PuppetPlays project since October 2019.
Publication: Surmarionnettes et mannequins / Übermarionettes and mannequins (Institut International de la Marionnette / L’Entretemps, 2013).
Analysing the impact of short-term rentals
A certain number of houses and apartments in European cities have always been available as short-term tourist rentals, but the advent of digital platforms like Airbnb has represented a tidal wave of expansion for this accommodation modality. The impacts on local communities are highly controversial. “Cities are becoming increasingly attractive for temporary populations, which is producing a structural crisis in the infrastructure and services that are supposed to meet the needs of resident, stable populations,” outlines Dr Antonio Paolo Russo, Professor of Urban Geography at Rovira i Virgili University, and Principal Investigator in SMARTDEST. This increase in the supply of short-term rentals could be leading to the exclusion of some local people from the housing market. “Part of the housing stock is basically no longer available for longer-term rental. Indirectly, this affects the price of rentals, and also property purchase prices,” explains Dr Russo. “This leads to the displacement of sectors of the resident population outside the neighbourhoods where houses are unaffordable.”
SMARTDEST project
This effect has been observed in several cities, particularly those that attract a lot of tourists, such as Venice, Barcelona, Lisbon or Amsterdam, among those studied in this project, but it is a general trend in urban areas across the continent (Valente et al., 2022). “The idea is to study population and commercial change in these areas. There are other effects beyond the housing market,
related to the occupation of public space, public transport and the general conviviality or atmosphere of these areas. These trends are affecting the ability of local populations to move, work and live in these cities,” he says. Researchers are probing deeper into these trends in the project; part of this work involves analysing data on Airbnb rentals in Barcelona and correlating it with other indicators. “We ran a set of regression models to explore the relationship between the presence of short-term rentals and what we call residential or housing stability in Barcelona, which refers to the proportion of residents that have been living in a location for at least five years,” outlines Dr Riccardo Valente, a researcher working on the project. Results have shown a clear relationship between the number of short-term rentals on the one hand and residential stability on the other, which is mediated by a number of indicators
related to housing costs such as mean income and income variability (see Figure 1).
Remarkably, workers in the tourist sector, the most precarious across the urban economy, are those at the highest risk of having to move out of the city where they work, as pointed out through another piece of statistical analysis (Valente et al., 2023). This work uses Barcelona as a case study, but its insights can easily be extrapolated to many other urban areas subject to comparable trends of tourist pressure and labour regimes, especially in southern Europe. In a nutshell, the tourist economy of European cities was growing before the pandemic and is likely to grow further with the current recovery, but it does so at the expense of the very workforce that supports this success.
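The kind of analysis described above can be illustrated with a simple regression sketch in Python. The data, variable names and coefficients below are entirely hypothetical, invented for illustration; the project's actual estimations use more sophisticated structural equation models on real Barcelona data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400  # hypothetical census tracts

# Hypothetical predictors: short-term rentals per 100 dwellings,
# and mean household income (thousands of euros).
str_density = rng.uniform(0, 15, n)
mean_income = rng.normal(30, 8, n)

# Synthetic outcome: share of residents living 5+ years at the same address.
# The negative coefficient on rental density is an assumption mirroring
# the direction of the relationship reported by the project.
stability = 70 - 1.2 * str_density + 0.3 * mean_income + rng.normal(0, 5, n)

# Ordinary least squares via a least-squares solve: a minimal stand-in
# for the richer modelling used in the published analysis.
X = np.column_stack([np.ones(n), str_density, mean_income])
beta, *_ = np.linalg.lstsq(X, stability, rcond=None)

print(f"intercept={beta[0]:.2f}, str_density={beta[1]:.2f}, income={beta[2]:.2f}")
```

On this synthetic data the estimated coefficient on short-term rental density comes out negative, i.e. tracts with more short-term rentals show lower residential stability, which is the pattern the researchers report for Barcelona.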
Growth in the number of properties available for short-term rent in European cities is having an impact on local residents. We spoke to Dr Antonio Paolo Russo and Dr Riccardo Valente about the SMARTDEST project’s work in analysing the social impact of short-term rentals, which could then inform the integration of tourism into urban policy.
Figure 1: Conceptual model of residential displacement under the pressure of short-term rentals, used in SMARTDEST estimations. Authors’ own source.
Alongside analysing social exclusion issues, the project’s agenda also includes bringing different parties together in CityLabs to share ideas and develop potential solutions. “In our Barcelona CityLab we talked with stakeholders and experts, including representatives from the real estate industry, housing associations and charities that are dealing with social exclusion. We designed together a model for analysing the risk of residential displacement,” says Dr Russo. The SMARTDEST CityLabs also provided a good opportunity to test the impact of policy interventions designed to reduce the risk of social exclusion. “We simulated how a given policy intervention – for instance rent controls – might affect our dependent variable, which is residential stability,” explains Dr Valente. “We were able to gather more evidence from this co-creative, participatory process.”
This research holds important implications for local authorities responsible for housing policy and urban planning. The social impact of tourism is a prominent issue not just in Barcelona, but also in several other cities, and Dr Russo hopes the project’s findings will hold wider relevance. “We hope that the case of Barcelona, and the solutions that were presented in this case, could then be applied to other contexts,” he says. This is not necessarily about trying to reduce the number of tourists, but rather about protecting the rights of local residents to enjoy their own cities. “We are looking at how cities can cope with being a tourist destination,” continues Dr Russo. “There need to be mechanisms in place, and policies that protect what we broadly call the right to the city – the right to stay there, to stay in the place where you want, to live close to your job, and to enjoy a good quality of life.”
SMARTDEST
Cities as mobility hubs: tackling social exclusion through ‘smart’ citizen engagement
Project Objectives
SMARTDEST is an EU-funded H2020 research project, bringing together 11 universities and 1 innovation centre from seven European and Mediterranean countries. It aims to develop innovative solutions to the conflicts and externalities produced by tourism-related mobilities in cities, by informing the design of alternative policy options for more socially inclusive places in the age of mobilities.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant agreement ID: 870753.
Project Partners https://smartdest.eu/partners/
Contact Details
Antonio Paolo Russo
University Rovira i Virgili
Coordinator of the H2020 SMARTDEST project
Department of Geography / Faculty of Tourism and Geography
C./ J. Martorell 15, 43480 Vila-seca, Spain
T: +34 977 558 384
E: antonio.russo@urv.cat
W: https://smartdest.eu
VALENTE, R., RUSSO, A. P., VERMEULEN, S., & MILONE, F. L. (2022). Tourism pressure as a driver of social inequalities: a BSEM estimation of housing instability in European urban areas. European Urban and Regional Studies, 29(3), 332–349. https://journals.sagepub.com/doi/abs/10.1177/09697764221078729
VALENTE, R., ZARAGOZÍ ZARAGOZÍ, B., & RUSSO, A. P. (2023). Labour precarity in the visitor economy and decisions to move out. Tourism Geographies. https://www.tandfonline.com/doi/full/10.1080/14616688.2023.2172603
Antonio Paolo Russo Riccardo Valente
Antonio Paolo Russo is Professor of Urban Geography at the Rovira i Virgili University, Tarragona. His recent research centres on urban transformations and global mobilities. Riccardo Valente is a Postdoctoral Fellow and Professor of Urban Geography at the Rovira i Virgili University, Tarragona. His research focuses on the analysis of processes of social disorganization, and their implications for public health and residential choices.
“Cities are becoming increasingly attractive for temporary populations, which is producing a structural crisis in the infrastructure and services that are supposed to meet the needs of resident, stable populations.”
Figure 2: Co-analysis session at the Barcelona CityLab. Authors’ own source.
Fresh insights into urban youth inequalities
Many young people face great challenges in finding employment and affordable accommodation in certain European cities, while in some locations access to proper education is limited, widening existing inequalities. We spoke to Éva Gerőházi, from the Metropolitan Research Institute, Budapest, about the UPLIFT project’s work investigating socio-economic inequalities in 16 urban areas across Europe.
The 2008 financial crisis had a significant impact on the housing and employment market, which in turn affected other sectors of the economy. While nobody was immune from the effects of the crisis, young people starting out on their careers were among the most severely affected. “Young people are among the most vulnerable in the employment market during a crisis,” acknowledges Éva Gerőházi, an economist at the Metropolitan Research Institute in Budapest. The crisis widened existing inequalities and limited the opportunities available to young people in terms of both jobs and accommodation, an issue at the heart of Gerőházi’s work in the EU-backed UPLIFT project. “The project is focused on urban social inequalities,” she outlines. “The project brings together 15 partners, including housing experts and specialists in several other social policy fields.”
Urban areas
This research is centered on 16 urban areas across Europe, with Gerőházi and her colleagues looking to build a deeper picture of inequality in different regions, with the aim of ensuring that the voices of young people are heard in the development of education, housing and employment policy. Reports have been written on the situation in each of these urban areas, while eight of them have been selected for more in-depth analysis.
“We conducted 40 interviews with vulnerable young people in each of these eight locations. 20 of them are between the ages of 15 and 29 now, while the other 20 were young during the financial crisis,” says Gerőházi. “Through these interviews we tried to capture the position of the people who were young at the time of the financial crisis and look at whether they were able to catch up or not.”
These 16 areas have very different economic strengths and social housing policies, with researchers looking to dig deeper into the specific challenges they face and their impact on young people. One of the cities researchers
are looking at in the project is Amsterdam, which is experiencing a serious housing affordability crisis. “There is a high proportion of social housing in Amsterdam but there is still a lot of pressure on the housing market. This is because Amsterdam is a growing city, it has both internal and external migration,” says Gerőházi. “The demand for housing is much higher than the local rental system can provide, so a high proportion of young people are on the threshold of being vulnerable in the housing market, even if they are well educated and have a decent salary. They may have to spend a high proportion of their income on rent.”
The situation is very different in certain parts of central and southern Europe, where more young people tend to live with their families as they enter adulthood. This is partly a necessity, as they often can’t afford their own accommodation, but it’s also more culturally normal for a 25-year-old to live with their parents. “The affordability problems facing young people in the housing market are much less problematic in these central and southern European regions, despite the fact that Northern and Western Europe are generally thought of as being more prosperous,” says Gerőházi. There are also significant variations in the employment market. “Young people are the most vulnerable in the employment market. But there are very big differences between Southern Europe and the West and North,” continues Gerőházi.
This vulnerability in the employment market seems to be fairly temporary for most young people in Northern and Western Europe. As they get older, gain qualifications and experience, they are able to establish themselves. “They are often able to gain a more stable position in the jobs market,” outlines Gerőházi. However, Gerőházi says the south was particularly badly affected by the financial crisis, and young people have remained vulnerable for longer. “The unemployment rates among young people in southern Europe remain very high. While the economic recovery following the financial crisis generally reduced problems in the employment market, southern Europe is an exception,” she outlines.
UPLIFT
Urban PoLicy Innovation to address inequality with and for Future generaTions
Project Objectives
The UPLIFT project has two main aims: first, to accumulate knowledge on youth vulnerabilities in diverse urban locations across Europe in the domains of education, employment and housing; and second, to create methodologies of reflexive policy-making, piloted in four cities (Amsterdam, Barakaldo, Sfântu Gheorghe and Tallinn).
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 870898.
Project Partners
https://uplift-youth.eu/about-us/ourpartners/
Contact Details
Project Coordinator, Éva Gerőházi, senior researcher
Metropolitan Research Institute
1093 Budapest, Lónyay u. 34
E: gerohazi@mri.hu
W: www.uplift-youth.eu
Co-creation
The project’s agenda includes not just conducting research into urban social inequalities, but also working with young people in these areas and giving them a stronger voice in policy development. In four of the eight cities where interviews have been conducted – Amsterdam, Barakaldo, Tallinn and Sfântu Gheorghe – a co-creation process has been implemented. “A Youth Board was set up, while in parallel a group of institutional stakeholders was formed,” explains Gerőházi. Eventually the youth boards and institutional stakeholders started working together to address key topics, which differed in each of the locations. “In Amsterdam and Barakaldo housing was the focus, in Sfântu Gheorghe it was education, and in Tallinn it was services for NEETs (young people Not in Education, Employment or Training),” says Gerőházi. A great deal of care has been taken to ensure that young people have the confidence to contribute to the co-creation process and that any power imbalances are fully considered. In the case of Sfântu Gheorghe, many members of the Youth Board came from less prestigious vocational schools, so much of the preparatory process was about empowering them and building their self-esteem, while it was also important to ensure they were comfortable when meeting the institutional stakeholders. “At the first meeting, the institutional stakeholders were not allowed to react to the young people’s opinions on the educational system. They had to listen first,” says Gerőházi. In Amsterdam, the youth cohort was very different. “It was comprised mainly of university students and people in the early stages of their careers, mainly educated young people, but still very vulnerable in the housing market,” outlines Gerőházi.
This group was able to interact with decision-makers much more quickly than their Romanian counterparts, and they moved on to analysing the problem, co-creating solutions through a reflexive policy-making process, then testing, implementing and monitoring them. While the circumstances in each of these urban areas may be very different, Gerőházi says young people tend to face some similar problems across all of them. “There seem to be common problems, such as lack of information and access to services, as well as mistrust of service providers,” she says. Eight case study reports were produced on the basis of the project’s research, which provide guidance on how these issues can be addressed. “We look at how a reflexive policy-making process can be used to improve the effectiveness of local policies,” says Gerőházi.
The project’s research shows how this process of co-creation can be used to address social inequalities around education, employment and housing, taking into account the fact that local policies are often nationally driven. While local reflexive policy-making may not be a means to change larger structures, Gerőházi and her colleagues are trying to highlight changes that might be implemented under the current legal and institutional settings. “It’s more about the way that institutions behave and how they are regulated,” she says. A reflexive policy-making process will also encourage a general shift in attitude, which Gerőházi hopes will last beyond the end of the project. “This reflexive policy-making process creates a new energy and legitimacy behind changes to local policy, which we hope will continue on beyond the end of UPLIFT,” she continues.
https://www.frontiersin.org/articles/10.3389/frsc.2023.1130163/full
Éva Gerőházi (MSc in Economics), the coordinator of the UPLIFT project, has worked as a senior researcher at the Metropolitan Research Institute, Budapest, since 1997. Over her career she has gained rich experience in diverse fields of urban development, such as energy poverty, the rehabilitation of marginalised areas, and welfare systems in different urban contexts.
Who we call vulnerable may change from city to city depending on local housing and employment conditions, but all young people can be empowered to be agents of their own decisions.
Looking at Europe’s social inequalities
The number of people at risk of poverty has increased in Europe over the last decade or so, undermining efforts to promote social cohesion across the continent. We spoke to Professor Rune Halvorsen about the work of the EUROSHIP project in building a deeper picture of social inequalities between countries and regions in Europe.
The European Union is committed to promoting social cohesion across its Member States, yet there remain significant disparities in income and opportunity at both the regional and national level, with people in some parts of the continent at higher risk of experiencing poverty and social exclusion. As the Scientific coordinator of the EUROSHIP project, Professor Rune Halvorsen is examining the welfare policies of different countries, looking to gain a wider perspective on social citizenship in Europe. “Are existing policies sufficient to protect people from the risk of poverty? How do existing policies frame people’s experiences?” he outlines. The project consortium brings together ten partners from across Europe to address these types of questions, using various data sources. “We are analysing policy documents and existing statistics, while we have also interviewed about 210 people who have experienced poverty in seven European countries,” continues Professor Halvorsen.
EUROSHIP project
This includes people who have been in and out of poverty throughout their lives, as well as others who may have fallen into poverty due to an accident or illness. Some of these people have relatively limited education, which Professor Halvorsen says leaves them at greater risk of falling into poverty, even if they are in work. “They find it more difficult to gain full-time employment, and to get a job with decent pay. So they are more often in temporary positions, on zero-hours contracts, and with unpredictable working hours and wages,” he explains. There are variations across Europe in the way people in this group are supported, with countries striking a different balance between carrot and stick. “Several countries have been eager to introduce so-called conditionalities for young people in return for the receipt of social assistance or minimum income benefits. For example, showing that you have applied for a certain number of jobs,” says Professor Halvorsen.
The question is whether these conditionalities help young people find more stable employment and reduce poverty, or whether there should be a greater focus on education and training. There is also an ongoing debate across the EU about the extent to which governments should invest in active labour market policy measures and social services designed to help people find work. “These might represent a cost in the short term, but they may pay long-term dividends,” points out Professor Halvorsen. Finding a job is particularly challenging for those with care obligations, another topic Professor Halvorsen and his colleagues are investigating. “It can be difficult for people with unpredictable working hours to manage any care obligations,” he says. “Single parents are in a particularly difficult or precarious situation in that regard. Here we have found wide variations between countries with respect to the availability of nurseries and support for single parents.”
Are existing policies sufficient to protect people from the risk of poverty? We are analysing policy documents and existing statistics, while we have also interviewed about 210 people who have experienced poverty.
A further topic of interest in the project is the idea of a minimum income, which is the subject of intense debate in several European countries. The EU has considered adopting a legal framework directive on minimum income, to ensure a more even spread of income across Europe, but there is a certain reluctance among some Member States, related to the common perception that people in poverty are at least partly the authors of their own misfortune. “For a long time there has been a profound ambivalence towards the poor, and a hesitation about the idea of introducing more generous minimum income protection,” outlines Professor Halvorsen. The current cost of living crisis challenges this perception, with disposable incomes being sharply squeezed, and to some extent this strengthens the case for concerted action. “The number of people at risk of poverty has increased in Europe over the last decade or so,” says Professor Halvorsen.
Social cohesion
The wider context here is the EU’s desire to foster and promote social cohesion in Europe. The project’s work represents an important contribution in this respect, providing a deeper picture of how social inequalities between the countries and regions in Europe are developing, and researchers have reached some clear findings. “While some continental European countries such as Poland and Hungary are coming closer to the standard of living in Northern and Western Europe, some southern European countries give more cause for concern. They are actually diverging more and more from Northern Europe,” acknowledges Professor Halvorsen. There is some degree of support amongst the general public for a guaranteed economic safety net, one of the recommendations to come out of the project’s work. This recommendation has been communicated to stakeholders at both the EU and national level throughout the project. “We have held several meetings to disseminate our findings,” continues Professor Halvorsen. Alongside these
meetings, research has been ongoing into the underlying causes of poverty in Europe, which Professor Halvorsen says is set to continue beyond the project’s term. “There’s a clear need to better understand the driving mechanisms behind poverty, and the regional differences in Europe. There’s also a need to follow up and examine progress on implementing different aspects of the European Pillar of Social Rights (EPSR),” he says. “We are also interested in investigating the extent to which people at risk of poverty are involved in decision-making processes that affect them.”
The realistic goal for policy-makers dealing with questions around social citizenship is to reduce the risk of people falling into poverty, rather than to eliminate it altogether. New monitoring measures are being developed in the project, which will play an important role in terms of assessing the impact of different policies on poverty levels. “These tools can also be useful for Commission services in assessing progress towards the implementation of the EPSR, which was adopted by the EC in 2017,” says Professor Halvorsen.
EUROSHIP contributes to Europe’s commitment to the Sustainable Development Goals in general, and in particular to the goals:
• End poverty in all its forms everywhere
• Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all
• Reduce inequality within and among countries
EUROSHIP
Closing gaps in European social citizenship
Project Objectives
EUROSHIP has produced original, gender-sensitive and comparative knowledge about the effectiveness of social protection policies targeted at reducing poverty and social exclusion in Europe. Focal points have been the roles of minimum income schemes, the digitalisation of work and welfare, and the social citizenship of women and men with low incomes and low levels of education.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 870698.
Project Partners
• OsloMet - Oslo Metropolitan University (coordinator), Norway
• University of Milan, Italy
• University of Tallinn, Estonia
• University of Hamburg, Germany
• TÁRKI Social Research Institute, Hungary
• University of Florence, Italy
• Autonomous University of Barcelona, Spain
• Swiss Paraplegic Research, Switzerland
• University of Sussex, United Kingdom
• Social Platform, Belgium
https://euroship-research.eu/partners/
Contact Details
Professor Rune Halvorsen
Scientific coordinator of EUROSHIP Department of Social Work, Child Welfare and Social Policy
OsloMet - Oslo Metropolitan University
T: +47 67 23 80 78
E: runeh@oslomet.no
Twitter: @EUROSHIP_EU
W: euroship-research.eu
Rune Halvorsen, PhD in sociology, is professor of social policy at OsloMet - Oslo Metropolitan University. His main interests are European and comparative welfare policy, social citizenship, EU social policy, and citizenship movements. Halvorsen’s central concern is to contribute new knowledge for the future of social Europe.
How can violent extremism be prevented?
The events of September 11 came as a huge shock, and led to an intense focus on the manifestations of violent extremism, with researchers looking to understand why it occurs. We spoke to Dr Morten Bøås about the PREVEX project’s work in investigating the factors behind outbreaks of violent extremism, with the ultimate goal of helping prevent it.
Photograph of the Menaka region in Mali. Credit: UN Photo/Harandane Dicko
There is no universally agreed definition of violent extremism, yet it is commonly associated with some specific features. It often involves non-state armed groups, for example, who may employ asymmetric tactics to attack civilian targets. “That could mean tactics like bombs or suicide attacks,” outlines Dr Morten Bøås, Research Professor at the Norwegian Institute of International Affairs (NUPI). As the Principal Investigator of the PREVEX project, Dr Bøås is investigating the factors behind outbreaks of violent extremism through case studies on countries in three different regions: the Balkans, the Middle East and North Africa (MENA), and the Sahel. “These are areas where most of the features conducive to the emergence of violent extremism are present in one way or another,” he explains. “There are, of course, enormous differences between these regions, but they are all struggling, to varying degrees, with the fact that part of the population feels alienated by the state that they’re supposed to belong to.”
PREVEX project
This issue lies at the heart of the PREVEX project, in which Dr Bøås and his partners are looking at circumstances on the ground in countries across these three regions, with the ultimate goal of preventing violent extremism. While violent extremism is sometimes justified by the extremists on religious or political grounds, the path towards participating in it may start with some pretty basic grievances. “Data from the project suggests that the journey into extremism is often spurred by grievances concerning lack of employment or economic opportunities,” says Dr Bøås. The state may be perceived as having played a role in creating these grievances, while in some cases, it is also too weak to control local conflicts, leaving a space that violent extremists can then exploit. “This creates a local environment in which violent groups, inspired by extremist ideologies, can find ways to integrate into local communities. For example, by offering protection and support in local struggles in areas like land rights and access to water,” continues Dr Bøås.
For example, the state is relatively weak in some countries around the Sahel, and local militias exert quite a high degree of control, allowing them to present alternative ideas about how life could be, often based on their interpretation of religious texts. However, even in environments conducive to violent extremism, most people do not become radicalised, which Dr Bøås says is an essential aspect of the project’s research.
“If you are to combat violent extremism, it’s important to understand the local sources of resilience against it,” he stresses. The aim is to identify the factors that protect against violent extremism, even in areas facing severe social and economic challenges. “If a village, region or even a country has a relatively long and unbroken record of political, social and religious tolerance and moderation, that’s an important barrier against violent extremism. But it’s not enough,” says Dr Bøås. “This tradition must be supported by figures of authority that are seen as transparent, legitimate and relatively non-corrupt.”
These figures of authority should also ideally be involved in producing a collective good viewed as important by the local community, generating a level of social cohesion which provides an effective barrier against violent extremist ideas. The Kingdom of Jordan, for example, has been able to maintain social cohesion and has experienced only limited outbreaks of violent extremism in recent years, despite sharing borders with Iraq and Syria. “In Jordan, there is a tradition of political moderation created by the custodians of the state. The Kingdom has a combination of religious, political, and social legitimacy,” outlines Dr Bøås. It’s essential to understand the sources of this existing local resilience if violent extremism is to be prevented, believes Dr Bøås. “If we become too obsessed with the manifestations of violent extremism, we lose sight of the fact that most people, even in the most enabling environments, are not interested,” he stresses. “Some people take great risks to resist violent extremism, while others resist more subtly.”
This might mean negotiating a greater degree of economic freedom for local people, for example, which helps build the foundations of community resilience. This type of subtle manoeuvring and horse-trading takes place all the time, and Dr Bøås says it is an important consideration in terms of the project’s overall agenda. “This is what we need to understand better and build up in these areas,” he stresses. This also suggests that focusing on more basic, everyday grievances could be an effective approach to prevention. “We can hold de-radicalisation workshops and talk about moderation and religious tolerance, but if the initial journey into these violent extremist groups starts
with other forms of grievances, then maybe it’s better to focus on that,” says Dr Bøås. “This means that we need to take a long-term perspective and look at what we can do to make the modern state more attractive in these areas than it is right now. That’s a long path to walk, but there are few alternatives.”
Preventing violent extremism
An early step is to support figures of authority and help them enhance resilience. However, it’s essential to do this unobtrusively to avoid the perception that they are agents of external intervention. The project’s findings will be shared with partners, including the UN Development Programme (UNDP), as well as organisations in the three regions which were the focus of PREVEX. “We put quite a lot of emphasis on presenting our findings in the countries where we have worked. We have held several dissemination events, from the Western Balkans, to Iraq, to Mali. We present our findings through various social media platforms but also – where possible – with presentations for key policy stakeholders,” outlines Dr Bøås. The aim is to make it more difficult for the agents of violent extremism to establish themselves. Dr Bøås has also looked at potential solutions, including the possibility of getting around the table with these groups. “These groups may be more open to negotiation than we may have originally believed,” he says.
The first step is to know whom to negotiate with, and while many smaller militias are operating in the Sahel, Dr Bøås says some leaders command respect both among other commanders and the rank and file. Groups in the other two regions may also be ready to negotiate, which Dr Bøås says is one potential avenue towards preventing violent extremism. “We will do whatever we can to help prevent violent extremism,” he stresses.
PREVEX
Preventing Violent Extremism in the Balkans and the MENA
Project Objectives
The overarching objective of PREVEX is to put forward more fine-tuned and effective approaches to preventing violent extremism. Focusing on the broader MENA region and the Balkans, context-sensitive, in-depth case studies of the occurrence and non-occurrence of violent extremism will be carried out and then brought together in a regional comparison. In doing so, PREVEX will seek to improve the understanding of how different drivers of violent extremism operate. Particular emphasis will be placed on how to strengthen resilience through investigating the non-occurrence of violent extremism in ‘enabling environments’.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no 870724.
Project Partners
https://www.prevex-balkan-mena.eu/prevexba-category-about-us/
Contact Details
Project Coordinator, Dr Morten Bøås, Ph.D
Norwegian Institute of International Affairs (NUPI), C.J. Hambros Plass 2D, P.O. Box 7024
St Olav’s Plass, 0130 Oslo, Norway
E: mbo@nupi.no
W: https://www.prevex-balkan-mena.eu
Dr Morten Bøås is Research Professor at the Norwegian Institute of International Affairs (NUPI) and the PI of the EU Horizon 2020-funded project ‘Preventing Violent Extremism in the Balkans and the MENA: Strengthening Resilience in Enabling Environments (PREVEX)’. He works predominantly in Africa and the Middle East on insurgencies, civil war and its consequences for human security.
If we are to combat violent extremism, it’s important to understand the local sources of resilience against it.
Decision support for personalised treatment
Artificial intelligence (AI) tools could help the transition towards more personalised treatment of disease. Researchers in the REVERT project are developing a decision support system designed to assist clinicians in identifying the most suitable treatment for individual patients diagnosed with metastatic colorectal cancer (mCRC), as Professor Fiorella Guadagni explains.
Metastasis, the spread of colorectal cancer beyond the site of the primary tumour, poses a significant threat and can be fatal in severe cases. The location and number of metastatic lesions play a crucial role in determining the most effective treatment approach for the individual patient. Professor Fiorella Guadagni, the Director of the Biobank BioBIM™ and the Biomarker Discovery and Advanced Technologies (BioDAT) Laboratory at the IRCCS San Raffaele in Rome and the Coordinator of the EU-funded REVERT project, emphasises the value of this information in predicting optimal treatment outcomes. The REVERT project aims to consolidate colorectal cancer data from different sources to improve disease management. “We want to use existing biobanks that store comprehensive colorectal cancer datasets and assess their potential in identifying the key factors contributing to effective responses against mCRC,” she explains.
REVERT project
The REVERT project has access to data on metastatic patients from biobank samples across Europe, including information on treatment. Significant efforts are being made within REVERT to harmonise the data and ensure compliance with modern data protection standards. The next crucial step is to consolidate this data into the REVERT database for detailed analysis. Professor Guadagni explains, “We already have retrospective information from various studies that allow us to determine the response of patients to certain front-line therapies. The project’s IT specialists have analysed this data to predict treatment response.”
This valuable information will drive the development of a decision support system to determine the optimal treatment option for individual patients, which will then be tested in a clinical trial involving patients with mCRC [see box-out text]. Professor Guadagni continues, “The project will develop different algorithms to identify the most appropriate treatment course for patients with specific characteristics.”
In parallel to the clinical trial, an in vitro study is being conducted with organoids, miniature organ-like structures. The aim of this study is to evaluate the algorithm’s ability to accurately predict the response of these cells to initial treatment. Professor Guadagni emphasises that the in vitro study provides additional opportunities, stating, “We can compare the effectiveness of the two most commonly administered first-line treatment options and explore the efficacy of other drugs or new combinatorial therapies.”
Identifying potential new combinatorial therapies is a major focus of the project. Professor Guadagni adds, “We wanted to ensure that there was enough time for clinical validation of the algorithm. It is trained with data whose treatment outcomes are already known, and then validated with a separate dataset to ensure accurate discrimination.”
While the system is designed to support clinicians, the ultimate responsibility for determining the most appropriate treatment for individual patients rests with the medical professionals. Recognising the expertise of clinicians, Professor Guadagni highlights the importance of a clear basis for the system’s recommendations. She explains, “No one will use a decision support system without understanding the rationale behind its suggestions. This is called explainable AI.” Even if the system suggests a particular combinatorial treatment based on patient characteristics, clinicians have the option to consider alternative actions. Professor Guadagni explains, “The system informs clinicians about patients who will respond effectively to first-line treatment and provides the best combination of treatments under certain circumstances. The system empowers clinicians to modify or adjust the treatment while considering the profile of the individual patient.”
There are several biobanks storing data on colorectal cancer cases. We want to use them, evaluate them, and see if they can help us find out which factors lead to an effective response against metastatic colorectal cancer.
Graphical representation of the REVERT project design: retrospective studies, artificial intelligence, real-world data and biobanking, next-generation sequencing, tumour organoids, molecularly imprinted polymers.
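The train-then-validate workflow described here, fitting a model on a retrospective cohort whose responder/non-responder outcomes are known and then checking it on a separate, held-out dataset, can be illustrated with a small sketch. Everything below (the feature names, the synthetic data, the choice of logistic regression) is an illustrative assumption for intuition only, not the REVERT project’s actual variables, algorithm or results.

```python
# Toy sketch: train a response classifier on a labelled retrospective cohort,
# then validate it on held-out data, as the article describes.
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Hypothetical patient features: age, metastatic lesion count, a biomarker level.
age = rng.normal(65, 10, n)
lesions = rng.integers(1, 6, n).astype(float)
biomarker = rng.normal(0.0, 1.0, n)
X = np.column_stack([age, lesions, biomarker])

# Synthetic "responder" (1) vs "non-responder" (0) label, loosely driven by
# lesion count and biomarker; purely illustrative.
true_logit = 2.0 - 0.8 * lesions + 1.5 * biomarker
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Standardise features, add a bias column, split 75% train / 25% validation.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
Xb = np.column_stack([Xs, np.ones(n)])
split = int(0.75 * n)
Xtr, ytr = Xb[:split], y[:split]
Xva, yva = Xb[split:], y[split:]

# Plain logistic regression fitted by gradient descent on the training cohort.
w = np.zeros(Xb.shape[1])
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-Xtr @ w))
    w -= 0.5 * Xtr.T @ (p - ytr) / len(ytr)

# Validation on patients the model has never seen.
val_pred = (1.0 / (1.0 + np.exp(-Xva @ w)) > 0.5).astype(float)
val_acc = float((val_pred == yva).mean())

# A minimal form of "explainable AI": the learned weights show how each
# (standardised) feature pushes the responder prediction up or down.
for name, coef in zip(["age", "lesions", "biomarker"], w[:3]):
    print(f"{name}: {coef:+.2f}")
print(f"validation accuracy: {val_acc:.2f}")
```

The printed weights hint at what “explainable AI” means in its simplest form: a clinician can see which feature pushes a prediction towards “responder” and by how much, rather than receiving an unexplained verdict.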
Wider relevance
The project’s research focuses primarily on colorectal cancer due to the availability of a large dataset and its significant impact on mortality rates. However, Professor Guadagni highlights that the results of the project are also relevant for other medical conditions, particularly other forms of cancer. She explains, “These algorithms have the ability to predict an individual’s response to treatment based on the available dataset. The algorithms can be applied to datasets associated with different diseases, they are not limited to colorectal cancer.”
Close collaboration between IT specialists and physicians is essential in achieving this and harnessing technology’s potential to improve medical treatment. Professor Guadagni emphasises the multi-disciplinary nature of the project, stating, “We have been working with computer scientists for more than 20 years, fostering mutual understanding and jointly identifying the necessary steps to achieve our common goals.”
Ongoing research also looks into understanding why certain patients respond favourably to therapies while others do not. Any new insights gained can be incorporated into the decision support system. Extensive sample analyses are being carried out,
using metabolomics technologies and next generation sequencing (NGS) to gather comprehensive information. Professor Guadagni stresses the effectiveness of NGS in identifying differences between patients, which contributes to the ongoing development of tools and systems that assist clinicians in treating colorectal cancer more effectively. However, while artificial intelligence can play a major role in improving medical treatment, Professor Guadagni emphasises the importance of clear boundaries. She stresses that it must be ethical, non-discriminatory, fully controlled by clinical doctors, and explainable.
Considerable care has been taken to ensure that the research is conducted in compliance with current regulations while enabling efficient data-sharing among partners and a wider network of interested parties. This is key to the continuous development of AI-based healthcare tools, a complex endeavour with numerous factors to consider. Professor Guadagni acknowledges the importance of a dedicated support group consisting of specialised individuals, including lawyers and IT experts. She underlines their commitment to adhering to the existing rules, despite the complexities involved.
Validation Clinical Trial
A clinical study has been launched which focuses on testing and validating the predictive model developed in the REVERT project in an oncology setting. The aim is to improve clinical practice and management of healthcare services for cancer patients.
This study is focused on colorectal cancer, which is the third-most common form of cancer in men and second amongst women, and is a correspondingly high priority in research. It is estimated that colorectal cancer accounted for 12.7% of all new cancer diagnoses across the EU in 2020.
The aim is to help move towards a more personalized mode of treating patients with unresectable metastatic colorectal cancer by identifying the most effective intervention for each individual, with researchers working to validate the clinical decision support system developed in REVERT.
“The clinical study will test the predictive efficacy of Artificial Intelligence on a case-by-case basis from a ‘personalized’ perspective,” explains Prof. Mario Roselli, Director of the Medical Oncology Unit at the University Hospital Tor Vergata, where he also serves as a Full Professor.
“The predictive algorithm has been previously ‘trained’ through retrospective evaluation of the clinical profiles of patients already treated at the Oncology Units participating in the project and who, based on their response to treatment, have been defined as ‘responder’ or ‘non-responder’.
“This algorithm, applied to new patients enrolled in the clinical study, will allow investigators to be supported in choosing the best therapeutic option.”
REVERT
taRgeted thErapy for adVanced colorEctal canceR paTients
Project Objectives
To build an advanced AI-based decision support system for an innovative combinatorial therapy model based on a personalized medicine approach, to identify the most effective therapeutic intervention for each individual colorectal cancer patient.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 848098.
Project Partners
• IRCCS San Raffaele Roma (Fiorella Guadagni)
• Local Health UNIT 4 - ProMIS (Lisa Leonardini)
• Malmo Universitet (Anette Gjörloff Wingren)
• Genxpro Gmbh (Björn Rotter)
• Bundesanstalt Fuer Materialforschung Und Pruefung (Knut Rurack)
• Umea Universitet (Jenny Persson)
• Biovariance Gmbh (Tanja Lucas)
• Fundacion Universitaria San Antonio (Horacio Pérez-Sánchez)
• Institutul Regional de Oncologie Iasi (Gafton Bogdan)
• Servicio Murciano de Salud (Pablo Conesa-Zamora)
• Luxembourg Institute of Health (Monica Marchese)
• Clusterul Regional Inovativ de Imagistica Moleculara si Structurala Nord-Est (Cipriana Stefanescu)
• Olomedia Srl
• University of Rome “Tor Vergata” (Mario Roselli, Fabio Massimo Zanzotto)
Contact Details
Project Coordinator, Fiorella Guadagni, M.D., Ph.D., MDA
Professor, San Raffaele Rome University
Scientific Coordinator, Interinstitutional Multidisciplinary Biobank (BioBIM)
SR Research Center - IRCCS San Raffaele
Via di Val Cannuta, 247
00166 Rome - Italy
E: fiorella.guadagni@sanraffaele.it
Fiorella Guadagni, M.D., Ph.D., MDA, is full professor in Clinical Biochemistry and Molecular Biology at the San Raffaele Roma Open University and Scientific Coordinator of the BioBIM of the IRCCS San Raffaele Roma. Her major scientific interest is translational research, which has led to the publication of roughly 260 papers in international peer-reviewed journals.
Transforming Public Services in Europe with eID
Ms Alicia Jiménez González, Project Coordinator, describes how the IMPULSE project is proposing the latest technologies, including blockchain and Artificial Intelligence (AI), to create an electronic identification (eID) for accessing public services in Europe in a more secure, accessible and privacy-preserving way.
The processes for accessing services in Public Sector institutions in Europe can benefit from utilising the latest technologies to verify citizens’ identities. The European project IMPULSE (Identity Management in PUbLic SErvices) is an initiative involving sixteen entities spread across nine European countries, which is developing and assessing a new, innovative, universal eID format.
The coordination is conducted by a research centre, Gradiant, an organisation that specialises in technology transfer from research to industry, turning innovations into commercially feasible initiatives, focusing on functionalities and the needs of the end users.
The project has been working on the requirements, levels of acceptance and the impact of the eID method, whilst taking into account the regulatory, technical and operational needs associated with it.
“In IMPULSE, our main aim is to contribute to the digitalisation of the public administrations and for that, we propose this electronic identity, which we have developed following the Self-Sovereign Identity (SSI) approach,” said Ms Jiménez González, working on IMPULSE at Gradiant.
A unique approach to eID is proposed where the traditional third-party ownership and management of identification data, for example, biometric data archived with a government department, is replaced by citizen ownership, so people can own and control their own personal data. This approach can be made possible with blockchain technology. Blockchain technology presents an elegant solution with peer-validated data.
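The tamper-evidence that makes blockchain attractive for identity data can be sketched with a toy hash chain. This is a generic illustration, not IMPULSE’s actual ledger; the block structure, field names and example payloads are all invented for this sketch.

```python
import hashlib
import json

def block_hash(prev, payload):
    """SHA-256 over a canonical JSON encoding of the block contents."""
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def add_block(chain, payload):
    """Append a block that commits to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload,
                  "hash": block_hash(prev, payload)})

def verify_chain(chain):
    """Recompute every hash and check the links between blocks."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block["hash"] != block_hash(block["prev"], block["payload"]):
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, {"event": "credential issued", "subject": "citizen-123"})
add_block(chain, {"event": "consent granted", "service": "city-portal"})
print(verify_chain(chain))                         # True
chain[0]["payload"]["subject"] = "someone-else"    # tamper with the record
print(verify_chain(chain))                         # False
```

Because every block commits to the hash of its predecessor, silently altering a past record invalidates the whole chain, which is what allows peers to validate the data without trusting a central authority.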
Typically, for a citizen to access public services online, a gateway to check identification is via a username and password, PINs or electronic signatures. However, in these systems of identification, users’ data belongs to a third party, which is tasked to validate identity. By using blockchain technology as a future solution, the aim is that the person is the owner of the data of their identity, which means the data can never be misused, abused, sold on, or compromised.
During the onboarding process for generating the eID, facial recognition and Optical Character Recognition (OCR) technologies are used. If recognition does not match between the selfie taken for verification and the image of the person on the ID document, then the service provider will need to further validate the ID. Once the eID is generated, taking a simple selfie gives a unique image for verification, and the analysis of the facial biometric is empowered with an AI algorithm that can detect liveness or if something is not authentic with the image.
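The matching step at the heart of such onboarding can be illustrated with a toy sketch. Real systems derive face embeddings from deep neural networks; here the tiny embeddings, the threshold and the `verify` helper are all invented for illustration, and only the final comparison logic is shown.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(selfie_embedding, document_embedding, threshold=0.8):
    """Accept the match if the embeddings are close enough; otherwise
    the service provider must validate the ID by other means."""
    return cosine_similarity(selfie_embedding, document_embedding) >= threshold

# Toy 4-dimensional "embeddings" (real ones have hundreds of dimensions).
selfie = np.array([0.9, 0.1, 0.3, 0.2])
document = np.array([0.8, 0.2, 0.4, 0.1])
print(verify(selfie, document))   # similar vectors -> True
```

A liveness check, as described above, would run alongside this comparison to make sure the selfie comes from a real, present person rather than a photograph or replay.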
“What we aim to do is to benefit the public services by using this kind of electronic identity, in order to allow citizens to have more secure and easier access to public services.”
Reducing workload, accelerating process
The Covid-19 pandemic revealed that public administration relies too heavily on paper identification and in-person checks to authenticate someone to receive a service, whether by emailing copies of identification or images of documents back and forth, or having to see someone at a location, to identify them.
Replacing these slow, outmoded verification methods with a faster, more accurate online process will accelerate processing and remove administrative blockages to services.
Ms Alicia Jiménez González adds: “This will also contribute to the reduction of the workload of civil servants and administrations.”
AI-based facial recognition and document scanning are technologies already largely accepted by the population, although surveys by the project indicated that blockchain can carry inherently negative perceptions, born of an association with, and distrust of, cryptocurrency. Despite this, when blockchain is used to retain privacy and control over personal identification, significant advantages become apparent, and these benefits can be communicated.
“In IMPULSE, with the use of blockchain, you are the owner of your data. Furthermore,
we included smart contracts in the project that allow people to manage their consent on how they use the different data of their identity. This also meets compliance with GDPR requirements, as this is personal data.”
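As an illustration only, granular, revocable consent of this kind can be sketched in a few lines. In IMPULSE this logic lives in smart contracts on a blockchain; the `ConsentLedger` class below is an invented stand-in using a plain dictionary, not the project’s actual contract code.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentLedger:
    """Illustrative consent registry: maps (attribute, service) pairs to
    the time consent was granted. In IMPULSE this role is played by
    smart contracts on a blockchain; here a plain dict stands in."""
    grants: dict = field(default_factory=dict)

    def grant(self, attribute, service):
        self.grants[(attribute, service)] = datetime.now(timezone.utc)

    def revoke(self, attribute, service):
        self.grants.pop((attribute, service), None)

    def is_allowed(self, attribute, service):
        return (attribute, service) in self.grants

ledger = ConsentLedger()
ledger.grant("date_of_birth", "city_services")
print(ledger.is_allowed("date_of_birth", "city_services"))   # True
ledger.revoke("date_of_birth", "city_services")              # citizen withdraws
print(ledger.is_allowed("date_of_birth", "city_services"))   # False
```

The key GDPR-relevant property is that consent is recorded per attribute and per service, and can be withdrawn at any time by its owner.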
For such a transformative solution to become accepted, the project must understand how effective it is in practice and how users will adapt to the technology.
To this end, the implementation of six pilot studies was carried out to analyse the use of the eID in the context of accessing different public services.
Six pilot case studies
IMPULSE is a proposal for a universal solution for accessing different public services. A single method of electronic identification must be compatible with all the case studies, no matter which public service requires verification.
In Spain, for example, there are two different pilots to trial its effectiveness. One is for a citizen card, so citizens can access a multitude of services in a city in a modern, efficient and secure way. Another case study was undertaken by a law enforcement agency in the Basque Country, where submissions of low-level complaints of criminal incidents filed to the police were made more efficient through secure online identification.
“For that low-level kind of crime report, they would like to automate the process to achieve greater efficiency and effectiveness in citizen service delivery,” said Mr Iñaki Gangoiti, leader of the Ertzaintza case study.
A knock-on effect sought by the new eID is to lessen the burden on limited public sector resources, especially to reduce the need for human interaction, so employees of public services can prioritise and direct the majority of their stretched resources toward the most serious incidents.
IMPULSE
Identity Management in PUbLic SErvices
Project Objectives
The aim of IMPULSE is to provide an identity management model powered by blockchain, shifting the model of personal data ownership from governments to the owners (SSI: Self-Sovereign Identity, either individuals or corporations), so that users of public administration services feel secure and confident when sharing personal data and trust the Digital Single Market (DSM). This model implies moving from government departments holding separate versions of a person’s data to a user-managed identity (aligned with GDPR). For that, IMPULSE combines a bottom-up co-creation approach with the need for a universal vision of digital identity ethics in providing public services. The focus of the IMPULSE research is on evaluating the benefits, but also the risks, costs and limitations, considering socio-economic, legal, ethical and operational impacts, together with the framework conditions needed to introduce this new eID model in the public sector.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101004459.
Project Partners
https://www.impulse-h2020.eu/consortium/
Contact Details
Project Coordinator, Alicia Jiménez González
Head of European Programmes (Responsable de Programas Europeos)
T: (+34) 986 120 430 Ext. 229
E: ajimenez@gradiant.org
W: https://www.impulse-h2020.eu
https://twitter.com/Impulse_EU
https://www.linkedin.com/company/impulse-project-h2020/
https://www.youtube.com/@impulse_EU
The use case in Denmark focuses on access to lockers deployed in the city, to improve access for vulnerable citizens to public self-services.
“To open and access the lockers they require a secure system to ensure you are the person allowed to collect, for example, a passport stored in them,” said Mr Jakob Asmussen, leader of the Danish case study.
Another case study in Bulgaria for public administration is around requesting online certificates, for example, to verify a current address or to create a local registry for which the eID can be used.
“Whilst in Italy,” explained Mr Marco Vianello, leader of the Italian case study, “we worked with the public administration managing companies’ information. This involved the personal identification of a representative of a company, so that an individual can request services or take actions representing the company they work for.”
Finally, in Reykjavik there is a case study aimed at supporting digital innovation and networking in online public services, designed alongside a portal on the Better Reykjavik civic participation platform, to discuss online accessibility issues experienced by people with certain disabilities.
Whilst outcomes are being assessed, one piece of valuable feedback has shown that although the initial setup for the eID can take longer, once it is established, processing public services is noticeably faster every time after the setup.
The roadmap for Europe
As part of the project, IMPULSE will develop national and EU roadmaps, to plan the integration of the eID and at the same time demonstrate how it will help the public sector and providers.
“We detail the application of the IMPULSE eID solution to the specific case studies, whilst analysing the risks and the gaps. IMPULSE is aligned with the European Identity initiative, so we are working to establish the basis, to be prepared for their future implementation, also being compliant with GDPR and policy regulations,” said Ms Bertille Auvray, leader of the roadmap preparations at case study and European levels.
Alicia Jiménez González is a telecommunications engineer with a master’s in international R&D&i project management. She has worked for the last 12 years as a European project manager and currently leads the EU department at Gradiant. She has experience in digital transformation projects in different sectors, such as the public domain, industry, health and the primary sector.
In parallel with those roadmaps, there will be a socio-economic impact analysis.
The socio-economic survey will likely demonstrate the anticipated benefits in economic costs, for example, the number of hours a civil servant must spend processing a citizen’s request in person and how many hours can be saved, allowing other duties to be prioritised.
“We are seeing the benefit is not just for the citizen but also for the civil servant because it reduces the paperwork they have to do and the cost from the public administrations,” remarked Mr Nicholas Martin.
In the latest development, in June 2023, the IMPULSE app was launched for ID verification, reducing the manual entry steps needed in the checking process. It can detect forgery in the scanned documents used for the onboarding process (i.e. national ID cards and passports).
The use of eID has great potential to drive enormous efficiency through the European economies. This project can bring much-needed technologies to digitise public sector services, making it easier and faster for citizens and civil servants to fulfil their needs.
Working Towards a Revolutionary Computer
Professor Erik Folven is working on the SpinENGINE project, full project name: Harnessing the Emergent Properties of Nanomagnet Ensembles for Massively Parallel Data Analysis.
In all kinds of scientific fields, the ability to harvest huge amounts of raw data is improving exponentially. This has put demands on the processing and analysis of data that are often beyond the reasonable capabilities of conventional computing methods. Whilst neural networks have been a game-changer for accomplishments in, for example, Artificial Intelligence, the energy required for training advanced models is high, indeed too high to be sustainable. The energy cost of AI worldwide, for example, is said to be doubling every 3.5 months. Thus, new kinds of energy-efficient hardware need to be explored.
A magnetic solution
Neuromorphic hardware, the kind that the SpinENGINE project is attempting, works on the premise of parallel data processing of very large amounts of data at once, instead of in sequence, one step at a time. This could be the key to making headway in the next generation of low energy computers. The computational demands for data analysis required with many tasks today
would benefit from this envisaged new kind of computer hardware. The interdisciplinary team of researchers working on the SpinENGINE project is reimagining how a computer can be built, with nanomagnets, to be energy-efficient and much faster than conventional computers.
Nanomagnets can be in different states depending on their magnetisation direction. By applying a magnetic field to a network of nanomagnets, they will change state depending on the properties of the input field and the states of the other magnets nearby. Nanomagnets respond non-linearly to most stimuli, such as magnetic fields, electric currents, temperature and laser pulses, and could be more than 100,000 times more energy-efficient than conventional computers. They are also capable of storing information without a power source, scalable in their architecture and relatively easy to fabricate.
Professor Erik Folven and a multidisciplinary team of scientists across Europe working on the SpinENGINE project, are experimenting with nanomagnet ensembles
to understand their viability and their far-reaching potential. The project aims to demonstrate emergent behaviour from these nanomagnet ensembles in a non-linear way based on ‘reservoir computing’ – which is inspired by the workings of biological systems like neural networks in brains.
Reservoir computing has its roots in machine learning, where it has outperformed state-of-the-art methods for a range of computational tasks. It relies on neural networks and hinges on the concept of a ‘reservoir’ of ‘neurons’ that does not require training and can process large amounts of data at the same time. It becomes a computational substrate that transforms input data into high-dimensional representations – specifically making it excellent at capturing or recognising patterns. Simple linear processing techniques can then be trained to extract an output. Reservoir computing is particularly suitable for machine learning of time-varying phenomena and is being considered for sensing and prediction applications, which are useful from a smart sensing perspective.
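A minimal ‘echo state network’, one common software form of reservoir computing, can be sketched in a few lines of Python. The reservoir weights are random and fixed, and only the linear readout is trained, here on one-step-ahead prediction of a sine wave. All sizes and scalings below are illustrative choices, not the project’s.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                   # reservoir size (fixed, untrained)
W_in = rng.uniform(-0.5, 0.5, N)          # input weights
W = rng.uniform(-0.5, 0.5, (N, N))        # recurrent reservoir weights
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # keep spectral radius < 1

def run_reservoir(inputs):
    """Drive the fixed reservoir and collect its high-dimensional states."""
    x = np.zeros(N)
    states = []
    for u in inputs:
        x = np.tanh(W_in * u + W @ x)
        states.append(x)
    return np.array(states)

# Task: one-step-ahead prediction of a sine wave.
signal = np.sin(0.2 * np.arange(500))
X = run_reservoir(signal[:-1])            # reservoir states
y = signal[1:]                            # next-step targets

# Only the linear readout is trained (here by ridge regression).
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
prediction = X @ W_out
print("mean squared error:", np.mean((prediction - y) ** 2))
```

The point of SpinENGINE is that the expensive part of this scheme, the reservoir itself, could be replaced by the physical dynamics of a nanomagnet ensemble, leaving only the cheap linear readout to be trained.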
Harnessing the Emergent Properties of Nanomagnet Ensembles for Massively Parallel Data Analysis
Specialized computer chips may in the future be based on magnetism and have an architecture inspired by the brain.
The SpinENGINE team.
Mimicking the brain
It is argued that many nanomagnets working in parallel could be used to analyse large, complex datasets far more energy-efficiently than conventional computing methods, such as complementary metal–oxide–semiconductor (CMOS) hardware, which suffers from the von Neumann bottleneck.
Nanomagnetic computer devices could retain their state and information even when they are switched off, as they rely on magnetisation for data storage, and so their behaviour allows for efficient energy use compared to traditional computer hardware. For a new computer, it’s a very attractive design concept.
Folven explains an analogy to demonstrate reservoir computing in his mind’s eye: “It makes me think of murmuration, when birds flock together in vast numbers, moving together, flowing with purpose together, influencing each other and giving rise to an emergent behaviour. We are trying to mimic a way of computing that behaves a bit like a brain, only with magnets and magnetic pathways instead of biological, connected neurons. Imagine when we were children and we played with magnets and they attract or repel, and then imagine thousands of these all connecting this way, a bit like the neurons of a brain, all working both together and in parallel.”
It is a radical approach to computer hardware and computing, and the project represents the foundational steps in the journey to a new
computing architecture designed for a new age of computational challenges.
“There is a philosophical discussion about ‘what is computing?’ at the heart of it,” muses Folven. “We are asking how we can achieve tasks in other ways than using standard computer technology. The physicists and the material scientists, like myself, try to understand magnetism on the nanoscale – we try to understand how to tailor and tune magnetic materials by design.”
In envisaging this new computing paradigm, the dream is that it would excel at some demanding computational tasks, such as understanding shapes in real-time or real-time smart sensing applications. It is clear that CMOS will remain the preferred choice for crunching numbers in traditional computing tasks. In this sense, the nanomagnets are not seen as a replacement technology, but more of an extra tool in the toolkit for some specific computational challenges.
“I do not see what we are looking at as being a general-purpose computer. I don’t see this as a computer to do everything. CMOS is here to stay, but our approach could potentially do specific tasks in a much more energy-efficient manner. The point is that today, we are pushing current technology almost beyond its limits, so I think there is room for some quite different technologies, I’m thinking especially of hardware accelerators, edge computing, smart sensors and those kinds of things.”
Scaling up with simulators
In dedicated clean rooms, innovative designs based on nanomagnet technology are taking shape but exploring a vast design space experimentally takes time. That is why an approach of creating simulators is paying dividends in the research.
“We have made simulators that are able to simulate much larger systems with more magnets than existing simulation tools. It’s important to be able to test the computational behaviour of the nanomagnetic systems, and there exist software packages that can simulate how one such magnet rotates its magnetisation, or perhaps a hundred of them, but not a million of them in concert. Therefore, we needed to make our own computational software, accelerated by graphics processing units (GPUs), which are very well suited for doing these kinds of calculations.”
A nanomagnetic ensemble under a magneto-optic Kerr microscope. Photograph by Aleksandr Kurenkov (ETHZ/PSI).
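The kind of large, synchronous ensemble update such simulators perform can be caricatured with a toy model: a grid of two-state ‘nanomagnets’ that each align with the field of their neighbours plus an external field. This is a drastic simplification for illustration only (the project’s GPU-accelerated simulators model real magnetisation dynamics), and all parameters here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 64                                    # 64 x 64 grid of toy nanomagnets
spins = rng.choice([-1, 1], size=(L, L))  # +1 / -1: two magnetisation states

def step(spins, h_ext, coupling=1.0):
    """One synchronous update of the whole ensemble: each magnet aligns
    with the field of its four neighbours plus the external field."""
    neighbours = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                  np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
    local_field = coupling * neighbours + h_ext
    # Where the local field is exactly zero, the magnet keeps its state.
    return np.where(local_field != 0, np.sign(local_field), spins)

# Apply an external field and let the ensemble's collective state evolve.
for _ in range(20):
    spins = step(spins, h_ext=2.0)
print("net magnetisation:", spins.mean())
```

Even in this crude form, the whole-grid update is a single vectorised operation, which is exactly the pattern that maps well onto GPUs when the ensemble grows to millions of magnets.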
Hardware development
The novel hardware requires a new understanding of what computers are and what computers can be. The biggest challenge is ensuring what is built is compatible with the kind of computational tasks required of it.
“The computer scientists say: ‘If I could have my dream material, how should it look in order to do computing?’ The most interesting developments happen at the intersection between computer science and material science, when computer scientists can give us input into how we need to change our materials to get the kind of behaviour they are searching for. This holistic approach is necessary and up until now there has been too little of this.”
To make SpinENGINE’s aspirations of new hardware possible it takes combined expertise and requires a willingness from those involved to share their knowledge in collaboration, working in unison as a truly multidisciplinary team of researchers. Team members have backgrounds in computer science, condensed matter physics, material science, computational modelling, and high-resolution microscopy. As part of the project, they are required to learn from each other to a level where they can understand each other’s needs and aims effectively.
Folven observes: “The most fascinating aspect of this project in practice is the way we have enabled computer scientists and physicists to work together. We are putting a lot of effort into this aspect; we meet on a weekly basis and we get the computer scientists to join us at facilities where we do physics experiments. And our physicists have joined our computer scientists to write code for our simulators. We are all learning each other’s languages, which is necessary for this project.”
Major technology firms are recognising how beneficial this kind of application could become for us all. Google and Intel are investing in research into neuromorphic computing and trying to make bespoke hardware for it, and SpinENGINE’s commercial partner is the computer giant IBM.
Around the world, there are several pushes to make new hardware based on the principles of reservoir computing with nanoscale technologies.
“This is important research that the big players and academia need to take the lead on because there is a long road ahead, and this technology will take time to mature.
“For our part, we have shown that the reservoirs that we make are on the right track. We have done standardised tests which have been established in reservoir computing that show that if we are able to make all the bits and pieces work together, this material has the right properties. This is very promising, and it tells us we are sensible to continue down this line of research. Of course, it would be nice to have a complete system, but we are not there yet. It will take time, effort, and innovations.”
SpinENGINE
Harnessing the Emergent Properties of Nanomagnet Ensembles for Massively Parallel Data Analysis
Project Objectives
The SpinENGINE project aims to create a new approach to computing based on emergent properties, i.e., complex, non-linear behaviour arising from simple local interactions, in tunable ensembles of nanomagnets. The project will provide the foundation for radically new technology ideally suited to meet the challenges in our increasingly data-driven society.
Project Funding
The SpinENGINE project has received funding from the European Union’s Horizon 2020 FET-Open programme under grant agreement No 861618.
Project Partners • NTNU • ETH Zurich • University of Sheffield
• Ghent University • IBM
Contact Details
Project Coordinator, Professor Erik Folven, Norwegian University of Science and Technology
Department of Electronic Systems
O.S. Bragstads Plass 2A 7491 Trondheim
Norway
T: +47 94437094
E: erik.folven@ntnu.no
W: https://spinengine.eu/
Erik Folven is a professor at the Norwegian University of Science and Technology (NTNU). He is actively researching artificial spin systems and is currently exploring the potential for such systems as a computational substrate. At NTNU he is co-leading a research group where computer scientists and physicists work together towards new spin-based computational paradigms.
The Internet of Things (IoT) is becoming a reality, with more and more everyday devices and systems now capable of sending and receiving information via the internet. High-quality, precision sensors are central to many IoT applications, while they also play an important role in certain high-end applications such as autonomous vehicles moving in tunnels or under water, which rely on sensors to determine their position. “If you don’t have access to external signals to confirm your position then you need very good inertial sensors, or else any errors will accumulate and send the vehicle off course,” outlines Dr Joanna Zielinska, a researcher in the Photonics Laboratory at ETH Zurich.
This topic is at the core of Dr Zielinska’s work in the EU-funded IQLev project, in which researchers are developing inertial sensing systems, specifically accelerometers to sense acceleration and gyroscopes to sense external rotation. “The project consortium includes four research teams, exploring different platforms for levitation and looking at how they can be used for inertial sensing,” she says.
Sensors that rise above the norm
Levitating sensors
High-performance inertial sensing requires excellent isolation from the environment, as well as high mass and precise readout. Noise from the environment normally limits sensor performance; levitating the sensor in vacuum is a highly effective, clean way of isolating it and avoiding this issue. “The sensor is levitated in vacuum, so it does not actually interact with anything,” explains Dr Zielinska. However, the main challenge is increasing the mass of the levitated object while retaining excellent detection and precise control of its motion. The project partners are investigating different approaches to doing this, with Dr Zielinska and her colleagues at ETH focused primarily on optical levitation for gyroscope applications. “We use a focused laser beam to levitate and control a silica nanoparticle, which is about a micrometer in size and rotates a billion times per second,” she says. “This silica nanoparticle is attracted to the maximum optical field intensity, so it oscillates around the focus of this laser beam. We study the motion of these nanoparticles, which we read out by analysing the properties of the light they scatter.”
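To a first approximation, the particle’s oscillation about the laser focus is harmonic motion, and its frequency can be recovered from the position record much as the experiment recovers it from the scattered light. The sketch below uses invented, illustrative parameters, not the experiment’s actual values.

```python
import numpy as np

# Illustrative trap parameters (not the experiment's actual values).
omega = 2 * np.pi * 100e3        # 100 kHz trap (oscillation) frequency
dt = 1e-8                        # 10 ns integration step
steps = 20000                    # 200 microseconds of motion

x, v = 1e-9, 0.0                 # start displaced by 1 nm from the focus
positions = np.empty(steps)
for i in range(steps):
    v += -omega**2 * x * dt      # restoring force of the optical trap
    x += v * dt
    positions[i] = x

# Recover the oscillation frequency from the position record, much as
# the experiment recovers it from the scattered-light signal.
spectrum = np.abs(np.fft.rfft(positions))
freqs = np.fft.rfftfreq(steps, dt)
f_est = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print("estimated trap frequency: %.0f kHz" % (f_est / 1e3))
```

In the real system, changes in acceleration or rotation shift this oscillation, which is what makes the trapped particle usable as an inertial sensor.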
The project’s research involves performing optical levitation in vacuum. Inside the experimental chamber, a high numerical aperture lens focuses the infrared laser beam used for levitation. The experimental apparatus also includes other optical elements, which collect the light scattered by the levitated particle and
measure it in an optimum way. “The collected information about the particle is electronically processed and fed back to the system, in order to control the nanoparticle’s motion,” says Dr Zielinska. This method has proved extremely effective, and has allowed the IQLev researchers to cool an optically levitated particle to its motional ground state (Ref 1), where its behaviour is governed by quantum mechanics. In the future this will lead to high-performance quantum-enhanced inertial sensors. In order to guide these future experimental efforts, researchers from University of Innsbruck have developed a theoretical formalism to understand how the electromagnetic field interacts with levitated particles of arbitrary size in the quantum regime (Ref 2).
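The principle behind the feedback loop that cools the particle can be sketched as ‘cold damping’: a force applied against the measured velocity steadily removes energy from the oscillator. This is a schematic illustration of the control principle only, with invented numbers, not the project’s feedback scheme or parameters.

```python
import numpy as np

# Illustrative parameters (not the experiment's actual values).
omega = 2 * np.pi * 100e3        # trap frequency
gain = 2 * np.pi * 5e3           # feedback (cold-damping) rate
dt = 1e-8

def energy(x, v):
    """Oscillator energy per unit mass."""
    return 0.5 * v**2 + 0.5 * omega**2 * x**2

x, v = 1e-9, 0.0
E_start = energy(x, v)
for _ in range(50000):
    # Trap restoring force plus a force opposing the measured velocity:
    # the essence of cold-damping feedback.
    a = -omega**2 * x - gain * v
    v += a * dt
    x += v * dt
E_end = energy(x, v)
print("energy reduced by a factor of %.0f" % (E_start / E_end))
```

In the actual experiment, measurement noise and quantum back-action set the floor on how far such feedback can cool the motion, which is why reaching the motional ground state (Ref 1) is a landmark result.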
The Internet of Things (IoT) is becoming a reality, and effective, reliable sensors are central to its ongoing development. Researchers in the IQLev project are investigating different levitation platforms, looking to develop a new type of high-performance inertial sensor, as Dr Joanna Zielinska explains.
On-chip hybrid electrostatic-optical trap inside a vacuum chamber. The light is delivered into and out of the chip via optical fibres. A silica nanoparticle trapped in a strongly focused infrared laser beam (not visible); the particle is visible thanks to additional illumination with green light.
The project’s agenda also includes research into magnetic levitation; the setup here is highly complex, as very low temperatures are required. “Our partners at the University of Vienna are using magnetic fields to levitate a nanoparticle (a superconducting lead-tin sphere) in a cryostat, at a temperature of 15 millikelvin above absolute zero,” continues Dr Zielinska. “The nanoparticle is levitated using a magnetic field generated by coils. The experiment is isolated from mechanical vibrations and shielded from outside magnetic fields.”
A device called a SQUID (Superconducting QUantum Interference Device) is used to detect the magnetic field of the levitated sphere, thanks to which the researchers can monitor its motion.
The researchers at the ETH Nanophotonics Systems Laboratory (also involved in the project) are investigating optical levitation in combination with electrostatic levitation on a chip. “They have a very compact setup, which is largely contained in a nanofabricated chip. The light for trapping the particle is delivered to the chip with optical fibres, which also collect the scattered light. Additionally the chip provides an electrostatic safety net, which catches the particle if it escapes from the optical trap,” outlines Dr Zielinska.
Researchers in the project have been working to improve each of these levitation methods, although Dr Zielinska says each has its own drawbacks and advantages. “Optical levitation has a high detection efficiency, but only works for low masses. We cannot optically levitate particles which are larger than 20 micrometers, which limits the performance of an accelerometer,” she explains.
Potential applications
The aim of the project is to establish a proof-of-principle rather than develop a specific product, although this work could be relevant to certain areas of industry in the future. While Dr Zielinska and her colleagues are working on fundamental science, they are also very much aware of wider possibilities. “This area is still highly attractive for us as researchers, because we are exploring new physics and new working parameters. It’s also attractive for our industrial partners, as they will learn more about the capabilities of levitated sensors,” she outlines. “These types of systems could potentially be applied in seismology as well as in satellite missions. We are also interested in the possibility of applying this technology to tackle future challenges, such as high-end navigation and positioning, as well as gravimetry and seismometry.”
With optical levitation we use a focused laser beam to move and control a silica nanoparticle, which is about a micrometer in size and can rotate a billion times per second. The particle is attracted to the maximum optical field intensity, so it oscillates around the focus of this laser beam.
This is not the case with magnetic levitation, where it’s possible to work with particles that are orders of magnitude heavier, which means that they are better inertial sensors. The drawback here is that the technology is more complex, including the detection method. “Detecting particle motion using magnetic fields is much harder than using optical signals,” says Dr Zielinska. Over the course of the project researchers have been able to improve this detection capability, and Dr Zielinska says the performance of the magnetic levitation-based accelerometers is highly promising (Ref 3). “We’ve shown excellent performance of accelerometers, based on magnetic levitation,” she continues. “At ETH we are working on improving the performance of optically levitated gyroscopes by using extremely high rotational speeds to mitigate the sensitivity limitation due to low mass. However, we need to develop these systems further before we can look towards practical applications.”
This research is set to continue beyond the conclusion of the IQLev project, with Dr Zielinska planning to continue her work on optical levitation. “These levitated nanoparticles are among the fastest rotating objects on Earth, which translates into excellent rotation-sensing performance. In the future I hope to explore different applications of the gyroscopes based on levitated rotating particles, for example seismology,” she says. This could be as part of a successor project to IQLev, although a future project wouldn’t have such a broad scope. “We have now learned what’s good for acceleration and what’s good for rotation sensing, so any subsequent project could be more narrowly focused,” continues Dr Zielinska. “More attention will be focused on developing products on the basis of this research, and our industrial partners will play a major part in that.”
IQLEV
Ground-breaking inertial sensing navigation for IoT devices
Project Objectives
The goal of IQLev is to study different levitation platforms and explore their application as inertial sensors. Within the scope of this project, IQLev researchers are experimentally and theoretically advancing magnetic, optical and electrostatic levitation techniques and testing their performance as accelerometers and gyroscopes.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 863132.
Project Partners
IQLev is represented by a consortium of world expert groups from academia and industry with different key competences. Each consortium member contributes significantly to the joint IQLev aim of establishing high-performance levitation-based inertial sensors. https://iqlev.ethz.ch/consortium.html
Contact Details
Dr. Joanna A. Zielińska
Postdoctoral Researcher
Photonics Laboratory, ETH Zürich
HPP M 24, Hönggerbergring 64
CH-8093 Zürich, Switzerland
T: +41 44 633 06 12
E: jzielinska@ethz.ch
W: www.photonics.ethz.ch
Ref 1: Magrini et al., Nature 595, 373–377 (2021); Tebbenjohanns et al., Nature 595, 378–382 (2021)
Ref 2: Maurer et al., arXiv:2106.07975 [quant-ph] (2021)
Ref 3: Hofer et al., Phys. Rev. Lett. 131, 043603 (2023)
Joanna Zielinska
Joanna Zielinska received her PhD in 2018 at the Institute of Photonics Sciences (ICFO) in Barcelona for her research into quantum light generation. She subsequently undertook postdoctoral research focused on molecular physics at Imperial College London. She joined ETH Zurich in 2020 to study rotational degrees of freedom of optically levitated nanoparticles.
Magnetic levitation experiment in the cryogenic environment. The coils used to generate magnetic field are glued into a sapphire holder (green). Between the levitation experiments, the particle rests in the bowl visible between the coils.
Beneath the bonnet of online privacy
The GDPR regulation was designed to give internet users a greater degree of control over their personal data, yet not all websites and apps are fully compliant and many still have trackers embedded. The CSI-COP project aimed to heighten awareness of what information is being gathered about us online, as Dr Huma Shah and Professor Ian Marshall explain.
The General Data Protection Regulation (GDPR) came into force across the European Union in 2018, regulating the way that cookies and other digital trackers can be used on websites to gather personal data. The regulation was designed to give individual internet users a greater degree of control over their personal data, which is reflected in its core principles. “For example, under GDPR websites need to be transparent when personal data is going to be collected, and the internet user has to give their informed consent, so tracking shouldn’t be done by default. The reason behind collecting users’ personal
data also has to be clearly stated. What is the purpose?” outlines Dr Huma Shah, Assistant Professor in AI Ethics in the research centre for Computational Science and Mathematical Modelling at Coventry University. Despite this, some tracking cookies are still embedded in many websites, and internet users are not always notified. “For example, Google Analytics and Facebook login are ubiquitous across different websites, such as some healthcare websites. So Facebook will know when you are using that healthcare website,” explains Dr Shah. “Web development environments or platforms could have trackers embedded.”
Citizen scientists (sitting: Lindsey Birnsteel, left; Sue Rowe, right) and Coventry University’s CSI-COP team (Professor Ian Marshall and Dr. Huma Shah standing, Matthias Pocs sitting), with stakeholder Dr. Matthew England, checking permissions in mobile apps at CSI-COP’s main dissemination event in Brussels, 23 May 2023.
CSI-COP project
As the Science lead on the EU-funded CSI-COP project, Dr Shah is investigating the extent to which citizens are still being tracked online and looking at overall levels of compliance with GDPR. The wider aim in the project is to raise the level of scientific literacy amongst European citizens and heighten awareness of the privacy implications of unthinkingly accepting cookies when browsing the internet. “People may not really understand what they’re accepting when they accept cookies and what they’re giving away,” says Dr Shah. The project is tackling the issue by training citizen scientists to investigate
how cookies are used on different websites and apps, engaging the general public across each of the countries represented in the CSI-COP consortium. “We’re an international consortium, with partners in eight countries, each of which has been looking to recruit a diverse group of citizens. In previous citizen science projects there have been relatively few women and young people, so we worked with youth organisations and women’s groups, aiming to recruit a balanced cohort from the general public,” continues Dr Shah.
The common characteristics among these citizen scientists are that they all use the internet and share a concern about online privacy. An important initial step was to informally educate them through a short five-step course - ‘Your Right to Privacy Online’ - addressing key topics around online privacy. “What is privacy? What is personal data? How is our data harvested on the internet? What rights do we as citizens have? What free tools are available on the internet to help us manage our data?” outlines Dr Shah. Once they had completed the course, the interested citizens were asked if they would like to join the project and investigate the websites they visit and the apps they use, to find out whether they are clear about what cookies or tracking technologies exist beneath. The citizens recorded their investigations using a bespoke tool. “We didn’t want any third-party tracking. So we’re also showing our citizen scientists how to comply with GDPR,” says Dr Shah. “The citizen scientists then provided their opinion - was the cookie notice clear about what was underneath? Was the privacy policy transparent? Then citizens used the free website and app audit tools from the internet to essentially go beneath the bonnet of websites and apps to see what exactly was
under there. Our citizen scientists have so far looked at over 850 websites and 340 apps.”
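The kind of check the audit tools perform can be sketched in a few lines: scan a page’s HTML for resources loaded from known third-party tracking domains. The domain list and sample page below are illustrative examples, not the project’s actual tooling or taxonomy:

```python
import re

# A toy version of what website-audit tools do: scan a page's HTML
# for <script>/<img>/<link> sources pointing at known third-party
# tracking domains. The domain list here is a small illustrative
# sample, not the project's taxonomy.
KNOWN_TRACKER_DOMAINS = {
    "google-analytics.com",
    "googletagmanager.com",
    "connect.facebook.net",
    "doubleclick.net",
}

def find_trackers(html: str) -> set:
    """Return the known tracker domains referenced by src/href URLs."""
    found = set()
    for url in re.findall(r'(?:src|href)=["\'](https?://[^"\']+)', html):
        for domain in KNOWN_TRACKER_DOMAINS:
            if domain in url:
                found.add(domain)
    return found

sample_page = """
<html><head>
<script src="https://www.google-analytics.com/analytics.js"></script>
<script src="https://connect.facebook.net/en_US/sdk.js"></script>
<script src="/js/app.js"></script>
</head></html>
"""
print(sorted(find_trackers(sample_page)))
# -> ['connect.facebook.net', 'google-analytics.com']
```

Note that only absolute third-party URLs are flagged; the site’s own relative script (`/js/app.js`) is left alone, mirroring the distinction between first-party functionality and third-party tracking.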
CSI-COP have already produced a ‘taxonomy of cookies and trackers’ from the citizen scientists’ website and app investigations. This followed the project’s work on data cleaning and classifying the different types of cookies. Dr Shah says, “For example, e-commerce sites have session cookies to record what you put in the basket. Then there are persistent cookies that are necessary for a website to work, while there will be others, like performance and analytics cookies. We went through the citizen scientists’ findings to build that classification.”
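A rule-based classifier along these lines could look as follows; the name patterns and category labels are hypothetical stand-ins for the project’s published taxonomy, just to show the mechanics:

```python
# A minimal sketch of the kind of classification behind a cookie
# taxonomy: match cookie names against per-category patterns.
# The patterns below are illustrative guesses, not the project's
# actual taxonomy rules.
CATEGORIES = {
    "session": ("sessionid", "cart", "basket"),
    "persistent/necessary": ("consent", "csrf", "lang"),
    "performance/analytics": ("_ga", "_gid", "utm"),
    "advertising": ("_fbp", "doubleclick", "ads"),
}

def classify_cookie(name: str) -> str:
    """Return the first category whose patterns match the cookie name."""
    lowered = name.lower()
    for category, patterns in CATEGORIES.items():
        if any(p in lowered for p in patterns):
            return category
    return "unclassified"

for cookie in ["_ga", "cart_id", "cookie_consent", "_fbp"]:
    print(cookie, "->", classify_cookie(cookie))
```

In practice classification also has to consider cookie lifetime and the domain that set it, not just the name, which is why manual review by the citizen scientists was part of the process.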
The way websites are monitored and analysed is evolving, and today most websites outsource the analytics. “People today want more information about what’s happening on their website, such as the dwell time, and which pages are more interesting than others. Historically, you used to do that by putting little bits in your code, but nowadays people just outsource it,” outlines Ian Marshall, PI in CSI-COP as well as Chief Operating Officer at Coventry University.
Behavioural information
A lot of website owners use Google Analytics, for example, because it gives them behavioural information, such as which page on their website a user landed on and how long they stayed there.
The problem is that it’s not only the website owner that gets the information, but also Google or whichever other company is doing the analytics, which may not be fully compliant with GDPR.
CSI-COP citizen scientists’ website and app investigations are available to look through from the project website, and from the Zenodo open-access platform:
Website investigations:
CSI-COP project site:
https://csi-cop.eu/project-results/citizen-scientists-website-investigations/
Zenodo:
https://zenodo.org/record/7472957
App investigations:
CSI-COP project site:
https://csi-cop.eu/project-results/citizen-scientists-app-investigations/
Zenodo:
https://zenodo.org/record/7472879
The document classifying cookies and trackers is available to download from the project website and from the Zenodo platform:
CSI-COP website:
https://csi-cop.eu/project-results/taxonomy-of-cookies-and-online-trackers/
Zenodo: https://zenodo.org/record/7801846
The citizen scientists and the CSI-COP researchers’ investigations were combined in the innovation of an open-access repository of online trackers. CSI-COP Repository was launched in the main project dissemination event in Brussels in May 2023 and is now freely available to search via the project website here: https://csi-cop.eu/repository/.
“The Austrian data protection authority has decided that Google Analytics is not compliant with GDPR. The regulation is very strict about where the data of European citizens goes,” says Dr Shah. The bigger picture here is concern about the potential for companies to develop behavioural profiles of internet users by combining data from multiple sources. “When you start to build up a profile of the individual and identify behavioural traits, it moves from being marketing-led to behavioural-led,” says Professor Marshall. “When you start taking information from Facebook, Google and other apps, and combining them, you get the potential to behaviourally profile people and then influence them.”
Many people may be happy to accept cookies in certain circumstances, for
example where it allows companies to learn about their personal preferences for certain products, but the possibility of profiling individual behaviour marks a significant step beyond this. Many of us automatically accept cookies without fully understanding what’s being tracked, but there’s no disadvantage to taking a bit longer to dig deeper and then make a decision, believes Professor Marshall. “You can then look at what’s being tracked and whether you think it’s acceptable. Most websites then give you the option to switch off the marketing stuff,” he outlines.
The project offers a short MOOC (Massive Open Online Course) in English from the project website, which has been translated into twelve languages. The aim is to heighten awareness of these topics amongst the wider public through the MOOC and other events hosted by CSI-COP partners across Europe and in Israel. “Over 600 people have completed the course in the available languages. We’ve held citizen science stakeholder cafes and parent/teacher roundtables, to increase reach,” says Dr Shah. “Some of the project’s citizen scientists have emerged as privacy champions and were invited to attend CSI-COP’s main event in Brussels, in May 2023, in person or online, to share their experience of web and app investigations with a variety of stakeholders.”
This can then help inform the regulation of data protection and online privacy as personal data gathering technology continues to evolve. While many companies have been responsible in terms of complying with GDPR, reducing the number of trackers and providing clear statements and policies, Professor Marshall says that others haven’t. “We still come across websites which don’t allow you to choose whether or not to accept cookies, or are designed in such a way that it’s easiest to click accept and any other option is difficult to find. We’ve also found that some of the tools we have been using to try and identify trackers are being blocked,” he outlines. There is currently no standardised way of writing cookie banners either, another topic which is being addressed in the project’s policy recommendations. “During some CSI-COP events we have been asking attending members of the public, including school teachers in school visits, to play a short cookie game which involves timing yourself in avoiding cookies. It brings up real cookie banners to show how confusing it is,” continues Dr Shah. “It can be difficult to avoid cookies, because the wording in a cookie banner or cookie notice can be ambiguous. You think you’ve rejected cookies, but actually you haven’t, so there needs to be a standardised way of writing cookie banners.”
PICASSO awards
The project’s work in highlighting these kinds of issues was recognised at the inaugural Picasso Privacy Awards in London in December 2022, where CSI-COP won in the Best Innovative Privacy Project category. This recognised the project’s novel approach to investigating online privacy, which is an increasingly prominent issue. “Engaging citizen scientists was a way to do this in a bottom-up way,” explains Dr Shah. The aim of engaging the general public in this way was to inform citizens and enable them to make informed decisions about online privacy as they navigate the internet and deal with a rapidly changing technology landscape. “Technology has always accelerated beyond
society’s ability to understand and to govern it. We’re trying to educate as many people as we can reach and want to learn, so we can help them to understand the position we are currently in with respect to online privacy,” outlines Professor Marshall. “Technology companies, governments and other organisations will continue to use the internet to deliver more and more services in future, many of which will be automated. We want to help people understand cookies and equip them with the knowledge they need to make an informed judgment.”
A key step in this respect is standardising cookie banners, which is one of the main policy recommendations arising from the project’s work, alongside the standardisation
of privacy policies. Another recommendation is that websites on EU-funded projects should be privacy-by-design, thereby ensuring that they are compliant with GDPR. “There should be limited third-party tracking in these websites, and when there is, their purpose should be made transparent,” says Dr Shah. This is a highly dynamic field, and as a researcher in AI ethics and computer science, Dr Shah plans to continue working in this area beyond the conclusion of CSI-COP in August 2023. “There are data protection and privacy concerns from AI technologies, as well as issues around biases and misinformation,” she continues. “I’m currently preparing a proposal which carries forward the work that has been done in CSI-COP.”
CSI-COP Citizen Scientists Investigating Cookies and App GDPR compliance
Project Objectives
The GDPR regulation sets out the rules under which websites can gather data on users, yet it’s unclear whether all websites and apps are fully complying with the regulation.
The aim in the CSI-COP project is to gauge the extent of compliance with GDPR and to highlight where websites and apps are still gathering personal data on users.
Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 873169.
Project Partners
https://csi-cop.eu/about/consortiumpartners/
Contact Details
Project Coordinator, Dr Huma Shah Research Centre for Computational Science and Mathematical Modelling Coventry University
Priory Street
Coventry CV1 5FB
United Kingdom
T: +44 24 7765 7688
E: ab7778@coventry.ac.uk
W: https://csi-cop.eu
Dr Huma Shah is an Assistant Professor in the Research Centre for Computational Science and Mathematical Modelling at Coventry University. She gained her PhD in 2011 and is a Member of the European Platform for Women Scientists.
Dr Huma Shah
Under GDPR, websites and apps need to be transparent when personal data is going to be collected, so the user is able to give their informed consent; tracking shouldn’t be done by default. The reason behind collecting users’ personal data also has to be clearly stated. What is the purpose?
A new platform for eSports
eSports events are increasingly popular, with competitions attracting large audiences and generating billions of euros in revenue. Researchers in the COPA EUROPE Project are developing a distribution platform designed to help companies produce more engaging content and broaden the audience for sports and eSports events, as George Margetis explains.
The demand to watch traditional sports remains strong, while at the same time the eSports market is growing rapidly, with competitions attracting large audiences and generating billions of euros in revenue. The COPA EUROPE project aims to help broadcasters and companies meet this demand for high-quality eSports content by developing a new distribution platform, designed to meet the needs of the creative industries. “We are developing Over The Top (OTT) media services. So producers, companies and broadcasters can broadcast sports and eSports content through the platform,” outlines George Margetis, the Technical Manager of the COPA EUROPE project. The COPA EUROPE platform has far wider possibilities beyond this, with Margetis and his colleagues working towards two main objectives. “We aim to provide the means to create a vibrant community around sports and eSports, so that fans can exchange ideas and opinions. We want to give people – not just professionals – the opportunity to create content and earn money,” he says.
COPA EUROPE
COllaborative Platform for trAnsmedia storytelling and cross channel distribution of EUROPEan sport events
This project has received funding from the European Union’s Horizon 2020 Research and Innovation programme under Grant Agreement No 957059.
George Margetis, Ph.D
Postdoctoral Researcher
Foundation for Research & Technology - Hellas (FORTH)
Institute of Computer Science (ICS)
Human-Computer Interaction Laboratory
N. Plastira 100, Vassilika Vouton
GR - 70013 Heraklion, Crete, Greece
T: +30 2810 391707
: g.margetis
W: https://www.ics.forth.gr/hci
George Margetis is a Postdoctoral Researcher at the HCI Laboratory of ICS–FORTH. His research interests include Human-Centered AI, X-Reality, natural interaction, intelligent user interfaces, and digital accessibility. He is a scientific and technical manager in numerous European, national and industry-funded R&D projects and has co-authored more than 80 scientific publications.
This is part of a general shift in the way that many people watch eSports, with a lot of viewers keen to learn more about the players, as well as comment on the events unfolding before them. This is something that Margetis and his colleagues aim to foster and encourage. “We aim to create more holistic transmedia experiences,” he says. The platform helps democratise production, giving more people the opportunity to get involved. “For example, we have developed a tool that allows people to draw graphics collaboratively. People can log in to the platform and create a common space to start creating graphics, which can then be sold to professional companies. We use blockchain to deal with these transactions and to provide the appropriate safeguards,” explains Margetis. “We have also created a personalised recommender system. The novel point here is that we use the so-called federated learning approach, a technology which trains this recommender system in a distributed way, while at the same time safeguarding each individual’s personal data.”
Transmedia experience
A second major objective of the platform is to provide tools to create a transmedia experience, enabling users to create narratives using a variety of means, such as social media and company websites. The idea is to keep the audience engaged before, during and after the event. “We have built tools to acquire statistics about a game, which can be combined to provide deeper insights, using machine learning and AI. These insights can be sent out on social media, while these statistics are also very important for producers,” says Margetis. This part of the COPA EUROPE platform is designed primarily for the eSports industry, where producers may need to choose which of the players to highlight on a main screen, so it’s important to identify where the main point of interest is. “We have an application that highlights what each player is doing and what they have achieved, while we can also provide real-time statistics,” continues Margetis. “We also provide broadcasters with some historical statistics, helping them create a story.”
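Federated averaging, the general idea behind this kind of distributed training, can be sketched with a toy linear model: each client computes an update on its own private data and only the model weights are shared and averaged. This is a generic illustration of the technique, not COPA EUROPE’s actual recommender system:

```python
import numpy as np

# Minimal federated averaging (FedAvg) sketch: clients train locally
# on private data; only model weights, never raw user data, are sent
# to the server for aggregation.
rng = np.random.default_rng(0)

def local_update(weights, features, labels, lr=0.1, epochs=20):
    """A few steps of linear-regression gradient descent on local data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = features.T @ (features @ w - labels) / len(labels)
        w -= lr * grad
    return w

# Three clients, each holding private data drawn around the same truth.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)  # server-side averaging

print("learned:", np.round(global_w, 2))  # close to [2.0, -1.0]
```

The privacy benefit is structural: the server only ever sees averaged weight vectors, so no individual user’s interaction history leaves their device in raw form.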
The project has been primarily focused on research, yet at the same time Margetis and the team are also considering the possible future exploitation of these technologies. Part of this involves partners in the project consortium – which includes several major companies – applying the different tools, while the COPA EUROPE consortium is also exploring the possibility of establishing a spin-off company to build on the project’s work. “The eSports market is developing rapidly, and our technology can be used to fill gaps in content provision,” Margetis says.
We aim to provide the means to create a vibrant community around sports and eSports. We want to give people – not just professionals – the opportunity to create content and earn money.
Putting plasma in the cosmological picture
The majority of the visible matter in the universe, specifically plasma, is shaped by highly complex physical processes. This plasma is extremely hot and has a very low density in comparison to materials in our own atmosphere. “The typical density of materials in these cosmic structures is something like 100 atoms per cubic centimetre (cm3) while on Earth there are something like 1020 atoms per cm3 in the air. So it’s a big difference,” outlines Dr Klaus Dolag, Head of the Computational Centre for Particle and Astrophysics of the Excellence Cluster ORIGINS at the Ludwig-Maximilians-Universität (LMU) in Munich. This is a major issue in terms of our theoretical understanding, as researchers don’t know exactly how this plasma actually behaves on the micro-physical scale, a topic that Dr Dolag is addressing in COMPLEX, an ERC advanced research group based at LMU.
“Current simulations of galaxy clusters are typically based on certain, highly simplifying assumptions. The next step that we want to take within COMPLEX is to include plasma physics properties in the hydrodynamical simulations, and to see what changes. For example, what is the effect of viscosity? What is the effect of conductivity?” he asks.
COMPLEX Research Group
This is part of the wider aim of improving simulations of galaxy clusters and gaining fresh insights into the highly complex relationship between gravitational collapse and processes which lead to the formation of galaxies, which has been a central theme of Dr Dolag’s research career. Cosmic objects may collapse under the force of gravity, while at the same time the universe is expanding, which in a way act as opposing forces. “There’s a kind of battle between the expansion of the universe and gravitational collapse. Galaxy clusters capture both these effects,” says Dr Dolag. These objects encode a lot of information about cosmology and the evolution of matter in the universe, so improved simulations could lead to new insights into some major outstanding questions, such as the nature of dark matter. “These objects are very important if you want to learn more about what kind of universe we live in. Researchers are taking measurements, and they are trying to draw inferences from these objects about the cosmological background,” continues Dr Dolag. “When we simulate these objects, we want to reproduce their observed properties.”
Researchers first draw on knowledge about the initial state of the universe when simulating galaxy clusters. The cosmic microwave background has been observed to a high level of detail, so the initial conditions are fairly well understood, while sophisticated modern telescopes provide new images of the universe at different stages of its evolution, which can then be confronted with simulations. “The recently deployed James Webb Space Telescope (JWST) for example is used to observe very tiny parts of the universe, but at great depth. Images from the JWST challenge us; what are the physical processes by which galaxies formed at such an early stage? We can see interesting galaxies with certain properties,” outlines Dr Dolag. The Euclid telescope, which has recently been launched, will essentially map the entire sky, complementing the images from the JWST. “The Euclid telescope will measure the distribution of matter in the universe very precisely,” continues Dr Dolag.
A major challenge facing cosmologists is that the timescales associated with the evolution of cosmic structures are very long, and researchers only have access to static images. While it’s possible to observe a cosmic object at a certain point in a certain state, this represents just a snapshot, so researchers tend to bring together many observations to try and build a fuller picture. “We can observe many objects, hope that we see similar things in different stages, then try to build a story or narrative out of that. Or we can try to learn how an object formed by observing different objects, which we believe to be fundamentally the same in principle, but which are thought to be in different stages of their evolution,” explains Dr Dolag. Previously researchers would try to learn about the physical processes affecting an object by comparing a statistical sample of observations to the results of simulations, but Dr Dolag says it’s now possible to go further. “We can now draw comparisons object by object. When we have a galaxy cluster in our simulation, representing an observed cluster, we can then do a much more detailed comparison,” he says.
PhD student
Ludwig Böss
Works on MHD simulations of galaxy clusters and develops the Fokker-Planck solver to directly model spectral cosmic ray electrons and protons within cosmological, hydrodynamical simulations.
PhD student
Frederick Groth
Works on the implementation of new numerical methods for the hydrodynamical solver in cosmological simulations to improve the treatment of turbulence in galaxy and galaxy cluster formation simulations.
PostDoc
Dr. Ildar Khabibullin
Expert in high-energy astrophysics with rich experience in working on galaxy clusters, the interstellar and intergalactic medium, as well as the Galactic center, X-ray binaries and supernova remnants.
PhD student
Tirso Marin-Gilabert
Works on MHD simulations of turbulence within galaxy clusters and develops the treatment of viscosity within cosmological, hydrodynamical simulations.
PhD student
Lucas Valenzuela
Works on kinematics of galaxies in cosmological simulations including tracer populations like globular clusters and planetary nebulae. For COMPLEX he further develops the web portal for sharing the outcome of hydrodynamical, cosmological simulations.
Plasma accounts for the overwhelming majority of matter in the universe, yet its behaviour is not fully understood. Researchers in the COMPLEX group aim to include plasma-physical effects in hydrodynamical simulations of galaxy clusters, which could lead to new insights into how these clusters form and evolve, as Dr Klaus Dolag explains.
© Magneticum Box2b, K. Dolag
Cosmic ray electrons and magnetic fields are fundamental parts of the plasma filling cosmic structures, and their interplay leads to very specific emission in radio wavebands, often tracing the current dynamics of the underlying structures. Our knowledge of these structures has grown dramatically over recent years thanks to the new generation of highly sophisticated radio telescopes, including the Low Frequency Array (LOFAR), which offers improved wavelength coverage and sensitivity. However, these observations still only show the tip of the iceberg when it comes to the complexity of cosmic structures, says Dr Dolag. “We can see clusters and galaxies, and we can infer that galaxies are typically connected in a diluted, web-like structure, influencing their evolution but yet to be fully detected in observations. But a lot of other effects may also influence our measurements,” he explains. The project will make an important contribution in this respect, developing new models which will help uncover how physical processes
shaped the visible matter in the universe; one important aspect of this work is improving numerical treatment methods. “That’s a computational part of our work. We need very precise methods to describe all the processes that we see. Therefore we have also developed improved hydrodynamic methods in COMPLEX,” says Dr Dolag.
Viscosity of plasma
These hydrodynamic methods are being applied to several different kinds of cosmological objects, while researchers are also working to include additional physical processes in models, building on Dr Dolag’s earlier work on the Magneticum Pathfinder, a detailed hydrodynamical simulation of cosmic evolution. In one recently published paper, Dr Dolag and his colleagues have described their work in implementing a novel treatment of viscosity, which can be considered as a measure of a fluid’s resistance to flow. “Instead of assuming that plasma is a fluid with essentially zero viscosity, we can assign a specific viscosity and see what differences emerge,” he explains. This can then affect the structure of a galaxy cluster and the way galaxies within clusters evolve, a good example of the type of issue that Dr Dolag and his colleagues are investigating in COMPLEX. “In the formation of a cosmological structure different things fold in together. We can see that mixing and evolution inside of a cluster changes when we include viscosity. How much does that then change our global picture?” he outlines.
The COMPLEX group has also been involved in simulating the Local Universe, a cosmic neighbourhood which can be thought of as next door to the Milky Way, at least in cosmological terms. This demonstrates the ability to simulate a cluster representative of our local environment, although Dr Dolag says there are some restrictions due to computational power limitations, with typical simulations nowadays something like 500 megaparsecs (1 megaparsec = 1 million parsecs, or 3.26 million light years) in size. “We can simulate only a small region of the universe,” he acknowledges. The COMPLEX group is still working to develop new, more detailed simulations, with researchers now able to include magnetic fields and the treatment of high-energy particles, which represents significant progress. “We are among the very first groups able to simulate the radio emission of galaxy clusters to a high level of detail. This is because we have been able to include all the physical processes which are required to predict that,” says Dr Dolag.
Visualisation of the plasma within a simulated galaxy cluster, including the detected shocks, where the colour indicates the strength of the shock wave (blue-pink-green-yellow-white, from weak to strong). K. Dolag
COMPLEX
COsmological Magnetic fields and PLasma physics in EXtended structures
Project Objectives
The COMPLEX group will develop a numerical framework to perform, for the first time, simulations of galaxy clusters with high enough spatial resolution to resolve the important scales on which turbulence acts. It will develop the novel and detailed sub-grid models necessary to describe the evolution of magnetic fields, cosmic rays and associated transport processes.
Project Funding
This project has received funding from the European Research Council under an ERC Advanced Grant.
Contact Details
Project Director, Dr. Klaus Dolag
Ludwig-Maximilians-Universität München
Universitäts-Sternwarte München
Scheinerstr. 1
D-81679 München
T: +49 89 2180 5994
E: dolag(at)usm.lmu.de
W: https://www.origins-cluster.de/en/newsevents/news/detail/erc-advanced-grantfuer-klaus-dolag
Böss et al. 2023, https://ui.adsabs.harvard.edu/abs/2023MNRAS.519..548B/abstract
Groth et al. 2023, https://ui.adsabs.harvard.edu/abs/2023arXiv230103612G/abstract
Marin-Gilabert et al. 2022, https://ui.adsabs.harvard.edu/abs/2022MNRAS.517.5971M/abstract
Klaus Dolag is a staff member of the computational astrophysics section at Ludwig-Maximilians-Universität (LMU) Munich. He performed the world’s largest cosmological hydrodynamical simulation earlier in his career, the Magneticum Pathfinder, and holds deep experience in numerical algorithms and code development.
Current simulations of galaxy clusters are typically based on certain, highly simplifying assumptions. The next step that we want to take within COMPLEX is to include plasma physics properties in simulations, and to see what changes.
Density of the plasma (left) and of cosmic ray electrons (right) within a simulated galaxy cluster. L. Böss.
Klaus Dolag
The effect of viscosity on the temperature of the plasma within a simulated galaxy cluster. The simulation on the right includes explicit treatment of viscosity. T. Marin-Gilabert.
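The effect of an explicit viscosity term can be illustrated with a toy one-dimensional momentum-diffusion step; this is a minimal sketch under simplified assumptions, not the SPH treatment used in the group’s cosmological code:

```python
import numpy as np

# Toy illustration of what explicit viscosity does in a hydro
# simulation: it diffuses momentum, smearing out velocity shear.
# A simple explicit finite-difference scheme on a periodic 1D grid
# stands in for the full treatment; all numbers are arbitrary.
def diffuse_velocity(v, viscosity, dt=0.1, dx=1.0, steps=100):
    v = v.copy()
    for _ in range(steps):
        lap = np.roll(v, -1) - 2 * v + np.roll(v, 1)  # periodic Laplacian
        v += viscosity * dt / dx**2 * lap
    return v

# A sharp shear layer: half the fluid moving +1, half moving -1.
v0 = np.where(np.arange(64) < 32, 1.0, -1.0)

inviscid = diffuse_velocity(v0, viscosity=0.0)
viscous = diffuse_velocity(v0, viscosity=1.0)

# With zero viscosity the shear profile is unchanged; with explicit
# viscosity the velocity jump is smoothed out.
print("max gradient, inviscid:", np.abs(np.diff(inviscid)).max())
print("max gradient, viscous: ", round(np.abs(np.diff(viscous)).max(), 3))
```

In a cluster simulation the analogous smoothing of shear changes how gas mixes, which is exactly the kind of global difference the COMPLEX comparisons quantify.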
Can we compute in memory with oscillators?
Memory and processing are entirely separate entities in the classical computing paradigm. Researchers in the NeurONN project are now working to develop a novel computing paradigm inspired by the human brain, which could lead to dramatic energy efficiency improvements and open up new possibilities in dealing with complex problems, as Prof. Aida Todri-Sanial explains.
The memory and processing units in modern computers are typically separate entities, so data needs to be fetched from the memory unit before it can be processed, which limits performance and energy efficiency. By contrast, memory and computing are entangled in the human brain, in a unified neural network that can both store data and process it, which has inspired research into novel computing architectures. “Rather than having separate units, could we bring processing and memory together? Can we compute in memory?” asks Aida Todri-Sanial, a Professor in the Integrated Circuits Group at Eindhoven University of Technology. This is an idea Prof. Todri-Sanial is exploring as part of her work in the EU-backed NeurONN project, in which she and her colleagues are developing oscillatory neural networks (ONN), inspired by the human brain. “Oscillators can emulate the
behaviour of neurons, which oscillate in time. We also know from neuroscience that when neurons synchronize in frequency, we can exploit the phase difference to store information,” she outlines.
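The idea of a phase difference acting as a stored state can be illustrated with the standard Kuramoto model of coupled oscillators. This is a generic textbook toy model, not the project's actual device physics: two identical oscillators settle into an in-phase or an anti-phase state depending on the sign of their coupling, and that persistent phase difference is the quantity that can hold information.

```python
import math

def kuramoto_pair(k, theta1=0.0, theta2=2.0, omega=1.0, dt=0.01, steps=5000):
    """Two identical Kuramoto oscillators with coupling strength k.
    Returns the final phase difference wrapped to (-pi, pi]."""
    for _ in range(steps):
        d1 = omega + k * math.sin(theta2 - theta1)
        d2 = omega + k * math.sin(theta1 - theta2)
        theta1 += dt * d1
        theta2 += dt * d2
    diff = theta1 - theta2
    return math.atan2(math.sin(diff), math.cos(diff))  # wrap the difference

print(round(abs(kuramoto_pair(+0.5)), 3))  # -> 0.0   (in-phase state)
print(round(abs(kuramoto_pair(-0.5)), 3))  # -> 3.142 (anti-phase state)
```

Both oscillators keep running at the same frequency; it is only their relative phase, 0 or pi here, that distinguishes the two states.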
NeurONN project
Researchers in the NeurONN project are now looking to apply these ideas in the development of an electronic system designed to mimic the human brain. In the NeurONN architecture, neurons - which send and receive chemical signals in the human brain - are represented by a vanadium dioxide (VO2) device, which Prof. Todri-Sanial says exhibits phase change behaviour. “This means that the device changes between insulating and metallic behaviour, so between high and low resistance states. These different resistance states, together with an external capacitor, make the device oscillate. This is our main
oscillatory device for emulating neuron behaviour,” she explains. Memory is created through the coupling and synchronization of neurons; Prof. Todri-Sanial draws an analogy here with what happens when random metronomes start ticking. “Metronomes start randomly, but after some time they become coherent, they come to be in sync with each other. Similar behaviour has been observed in biological neural networks by neuroscientists working on memory creation. This is what we are trying to emulate with our coupled ONN architecture,” she says.
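The oscillation mechanism described above can be sketched numerically as a relaxation oscillator: a two-state resistive device in parallel with a capacitor that is charged through a series resistor. All component values and thresholds below are illustrative assumptions, not measured VO2 parameters.

```python
def vo2_relaxation_oscillator(v_supply=5.0, r_series=10e3, c=1e-9,
                              r_insulating=100e3, r_metallic=1e3,
                              v_switch_up=3.0, v_switch_down=1.5,
                              dt=1e-7, t_end=1e-3):
    """Count resistive switching events in a toy VO2-plus-capacitor circuit.

    The device starts insulating, so the capacitor charges through r_series;
    at v_switch_up the device goes metallic and discharges the node, and at
    v_switch_down it turns insulating again - one relaxation cycle.
    """
    v, metallic, switches = 0.0, False, 0
    for _ in range(int(t_end / dt)):
        r_device = r_metallic if metallic else r_insulating
        i_charge = (v_supply - v) / r_series   # current into the node
        i_leak = v / r_device                  # current through the device
        v += dt * (i_charge - i_leak) / c      # forward-Euler node voltage
        if not metallic and v >= v_switch_up:
            metallic, switches = True, switches + 1
        elif metallic and v <= v_switch_down:
            metallic, switches = False, switches + 1
    return switches

print(vo2_relaxation_oscillator() > 100)  # sustained oscillation -> True
```

With these (made-up) values the node rings steadily between the two thresholds, which is the self-sustained oscillation that serves as the artificial neuron.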
The role of synapses, which essentially act as the communication channel between neurons in the human brain, is performed by 2-dimensional memristors in the ONN architecture developed in the project. These memristors must be able to modulate the impedance, essentially the resistance to the current, between the devices. “Sometimes you want very low resistance, sometimes you want high resistance, and sometimes you want somewhere in between. We have been developing memristors using molybdenum disulfide (MoS2) to couple neurons,” continues Prof. Todri-Sanial. MoS2 is a transition metal dichalcogenide, an atom-thin, 2-dimensional layered material, with properties that allow researchers to modulate impedance. “We have been able to achieve synaptic coupling between the VO2 oscillators, so it’s now possible to modulate the oscillators, to essentially change the dynamics between them,” outlines Prof. Todri-Sanial. “Sometimes they are weakly coupled, while at other times they are strongly coupled. This kind of modulation changes the phase dynamics between oscillators.”

Below: Simplified representation of an energy landscape for (A) a global interpretation, and (B) an interpretation in the case of pattern recognition.
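Why coupling strength matters can be seen in a reduced "Adler-style" phase model (again a generic toy model, not the project's circuit): when two oscillators have slightly different natural frequencies, weak coupling leaves their phase difference drifting, while strong coupling pulls them into lock.

```python
import math

def phase_locks(k, delta_omega, dt=0.01, steps=20000):
    """Integrate the Adler-type equation for the phase difference of two
    detuned coupled oscillators: d(dphi)/dt = delta_omega - 2*k*sin(dphi).
    Locking is possible only when the coupling can cancel the detuning,
    i.e. when |delta_omega| <= 2*k."""
    dphi = 0.0
    for _ in range(steps):
        dphi += dt * (delta_omega - 2 * k * math.sin(dphi))
    # locked if the phase difference has stopped drifting
    return abs(delta_omega - 2 * k * math.sin(dphi)) < 1e-3

print(phase_locks(k=0.1, delta_omega=1.0))  # weak coupling: False (no lock)
print(phase_locks(k=1.0, delta_omega=1.0))  # strong coupling: True (locked)
```

Modulating the memristive coupling between two such oscillators therefore switches the pair between free-running and phase-locked regimes, which is the "change in phase dynamics" described above.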
This approach represents a marked shift from the conventional computing paradigm, in which the amplitude of signals is a key
consideration, towards a focus on their phase differences. This means looking at the phase relationship between signals, for example whether they are in or out of phase, an approach known as phase-based computing. “In classical computing, information is encoded in digital bits, represented by 0 and 1. Now, instead of 0 and 1, they can be represented by phase differences. We can encode a lot
more information in these phase differences,” says Prof. Todri-Sanial. The phase-based computing paradigm also promises to be significantly more energy efficient than classical computing, which is an important consideration in the project. Alongside the technical work in developing the NeurONN paradigm, Prof. Todri-Sanial and her colleagues in the project have kept a close eye on its energy efficiency performance. “We conducted a very thorough comparison and benchmarking of our ONN paradigm versus what currently exists in neuromorphic computing,” she says.
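The "more information" point can be made concrete with a small sketch. Where a binary voltage level carries one bit, a phase difference can be quantised into many levels; eight levels carry three bits per oscillator pair. The encoding below is purely illustrative, not the project's actual scheme.

```python
import math

def encode(symbol, levels=8):
    """Map a symbol in 0..levels-1 onto a phase difference in [0, 2*pi)."""
    return 2 * math.pi * symbol / levels

def decode(phase, levels=8):
    """Recover the nearest symbol from a (possibly noisy) phase difference."""
    return round(phase * levels / (2 * math.pi)) % levels

# Eight phase levels carry log2(8) = 3 bits per oscillator pair,
# versus a single bit for a binary 0/1 voltage level.
print(decode(encode(5) + 0.2))  # small phase error, still decodes -> 5
```

The number of usable levels in a real device is limited by phase noise, which is one reason the precision of the oscillator and coupling hardware matters.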
Potential applications
Researchers have been able to show that the project’s ONN performs very well in terms of energy efficiency, which is an important issue with respect to potential future applications. One possible application is in image processing. “You can use non-linear dynamics of coupled oscillators to perform associative memory and image processing such as pattern recognition. This makes ONN interesting for edge AI applications,” says Prof. Todri-Sanial. Some of the applications developed include real-time pattern recognition and object detection, while Prof. Todri-Sanial and her team are also interested in exploiting ONNs to solve a currently intractable class of problems in computer science called NP-hard problems. “There are 21 NP-hard problems that are particularly difficult to solve using the classical computing paradigm,” she says. “We are trying to solve some of them - we have started to look into max-cut, traveling salesman, and vertex colouring problems. This has also been a rewarding path to explore, to provide an alternative computing paradigm for these NP-hard problems.”

NeurONN
Two-Dimensional Oscillatory Neural Networks for Energy Efficient Neuromorphic Computing | 2020 - 2023

Project Objectives
The NeurONN project aims to develop a new computing architecture inspired by the human brain, capable of dealing with problems in the AI field. The project brings together six partners from across Europe to develop a novel neuromorphic computing paradigm, which promises to be significantly more energy efficient than existing architectures.

Project Funding
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871501.

Project Partners
https://www.neuronn.eu/partners/

Contact Details
Project Coordinator, Prof. Aida Todri-Sanial
Department of Electrical Engineering
P.O. Box 513, 5600 MB Eindhoven
The Netherlands
E: a.todri.sanial@tue.nl
W: https://www.neuronn.eu
W: https://aida-todri-sanial.com
W: https://www.tue.nl/en/research/researchers/aida-todri-sanial/
https://ieeexplore.ieee.org/abstract/document/10026654
https://hal.science/hal-03961010
https://phastrac.eu
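The oscillator route to max-cut mentioned above can be sketched as a toy "oscillator Ising machine": anti-phase coupling along every graph edge pushes neighbouring nodes into opposite phase groups, and a slowly ramped second-harmonic term locks each phase towards 0 or pi so a partition can be read out. This is a generic illustration of the principle with made-up parameters, not the NeurONN implementation.

```python
import math
import random

def onn_maxcut(n, edges, k=1.0, ks=2.0, dt=0.01, steps=30000, seed=1):
    """Toy oscillator-based max-cut heuristic on a graph with n nodes.
    Returns (side, cut): the 0/1 partition read from the phases, and the
    number of edges crossing it."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for s in range(steps):
        pump = ks * (s + 1) / steps            # slowly ramped binarising term
        d = [0.0] * n
        for i, j in edges:
            # "repulsive" coupling: neighbours prefer a phase gap of pi
            d[i] += k * math.sin(theta[i] - theta[j])
            d[j] += k * math.sin(theta[j] - theta[i])
        for i in range(n):
            # second-harmonic injection locks each phase towards 0 or pi
            theta[i] += dt * (d[i] - pump * math.sin(2 * theta[i]))
    side = [0 if math.cos(t) > 0 else 1 for t in theta]
    cut = sum(side[i] != side[j] for i, j in edges)
    return side, cut

# 4-cycle: the optimal cut separates alternating nodes and cuts all 4 edges
side, cut = onn_maxcut(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(cut)
```

The dynamics descend an Ising-like energy landscape, so the readout is a good (though not guaranteed optimal) cut, which is the trade-off such heuristic hardware makes against exact classical solvers.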
A number of learning algorithms have also been developed in the project, with the partners working to improve and refine the technology, looking to bring it closer to potential users. The ONNs have been deployed for very specific functions on a monitoring robot developed by project partner AI Mergence, for example. “We utilised ONNs for obstacle avoidance, and also edge detection,” outlines Prof. Todri-Sanial. As the project nears the conclusion of its funding term, researchers are still working to improve the technology and demonstrate its capabilities. “We hope to finalise a demonstrator, which involves the co-integration of the VO2 and MoS2 devices. We’re also looking to explore other types of memristor devices, as we’ve faced some challenges with MoS2,” says Prof. Todri-Sanial. “We’re looking to implement ONN with VO2 devices, but to re-think these memristors, as the connection between them is a bottleneck in terms of implementation.”

The project team are also exploring the wider potential of their work, with researchers looking at possible applications in areas like autonomous vehicles and robotics. While NeurONN is set to finish in the summer of 2023, a follow-up project called PHASTRAC has been established, in which Prof. Todri-Sanial and her colleagues hope to build on these strong foundations. “We’re working with an industrial partner to see if some of these AI algorithms can be run in autonomous vehicles using ONN-based computing,” she outlines. If and when autonomous vehicles make it on to the roads more widely, rapid decision-making based on dynamical data will be required, and Prof. Todri-Sanial believes ONNs could be well-suited to the task. “You clearly can’t afford to have a data centre with you in your car, so you would really want energy-efficient computing in the vehicle,” she points out. “That’s something we would like to explore with our partner and see how far we can take ONNs. Maybe they can be relevant for some of these applications in autonomous vehicles.”

Aida Todri-Sanial is a Full Professor in the Electrical Engineering Department at Eindhoven University of Technology, Netherlands, and Director of Research for the French National Council of Scientific Research (CNRS). Her research interests focus on emerging technologies and novel computing paradigms such as neuromorphic and quantum computing.
Rather than having separate memory and processing units in a computer, can we bring them together? Can we compute in memory?