FET09: Science beyond Fiction
The European Future Technologies Conference
21-23 April 2009, Prague
TABLE OF CONTENTS

Introduction
Highlights
Out of your body – Your sense of ownership over your own body may not be as strong as you think
Of mind and body – How the brain outsources a certain kind of intelligence to the body
Wired for movement – Neuroscientists are using robots to test their ideas about evolution
Riding the brain waves – Small businesses are benefiting from Europe's investment in high-risk research in ICT
How brains make music – How the processes of making, hearing and responding to music are being increasingly understood from a cognitive perspective
Say it with music – Analysing the way musicians synchronise their body movements
Mind over music – How the Multimodal Brain Orchestra works
Spare my emotions: I'm a computer! – Understanding the emotional context of the online world
Mathematics beyond logic – Mathematics may require an element of uniquely human creativity
The mathematics of human movement – Finding hidden regularities and universal patterns in our daily movements
Virtual economies – Probing the complex feedbacks that make economic reality so hard to predict
Trust for the global network – New techniques to ensure the trustworthiness of mobile applications and services
Penetrating vision – Visualising large sets of data
Between two worlds – Navigating the tricky terrain between different disciplines
The next information revolution – Exploiting the baffling paradoxes of the quantum world
When Entanglement was Taboo – Quantum physicist Anton Zeilinger recalls how he first came to grips with entanglement
The DNA word processor – A new and easy way to create and edit DNA sequences
Cybernetics with DNA – Computation may soon take a very different path by mimicking how nature computes
Self-power for the nanomachines – Building energy-scavenging machines to harvest energy from the environment
Steering Europe's High-tech Future – Interview with Mario Campolargo, Wolfgang Boch and Aleš Fiala
The 21st century scientist – The profound impact of ICT on the way scientists work
The digital flywheel – Multi-disciplinary ICT-led research will be key to success in a 2030 timeframe
Is innovation sustainable? – Has our dependency on innovation become a vicious cycle?
Twenty years pushing the horizon – FET is a pathfinder for new research topics, placing Europe in a world-leading position in emerging fields
Flying the Flag for High-Risk Research – The European Commission's proposal for FET Flagships in emerging fields of ICT
Moving Beyond Fiction – How to push for visionary, high-risk research in ICT?
Programme – Programme Committee
"An early look at what are likely to become tomorrow's science successes and groundbreaking technologies."
This quote from a conference delegate nicely captures the spirit of the first-ever Future and Emerging Technologies Conference and Exhibition, FET09. The theme of the conference, held over three days in Prague in April 2009, was "Science beyond Fiction". The theme conveys the idea that science can make possible what fiction could only conceive. But the conference itself showed that science is capable of more than this: of bringing forth new ideas and technologies that even today's fiction has not yet imagined possible. Beyond the many discussions around science and technology, FET09 was also an opportunity to discuss and explore new research practices and methodologies. It brought together young and established scientists from many disciplines and reached out to policy makers and other stakeholders in frontier research. The conference was an opportunity to reflect on past achievements, and to demonstrate and explain the value of this type of research to a wider audience.
This conference report has been drafted as a lasting testimony to the many exciting discussions and exchanges that took place in Prague. While it can never hope to convey the full scope of what was discussed, it is, we believe, a useful tool to navigate through some of the scientific ideas seen today as holding most promise for the technologies of tomorrow. We hope that the report manages in part to transmit the huge enthusiasm and excitement for the opportunities ahead that could be felt at the event itself. We wish you happy reading, and look forward to seeing you at the next FET Conference!
Aleš Fiala and Wolfgang Boch
European Commission
Future and Emerging Technologies (FET) funds frontier research based on radically new visions of what can be done, grounded in scientifically valid ideas about how to make major steps towards achieving those visions. FET acts as a pathfinder, open to new ideas and opportunities as they arise from within science or society. It aims to go beyond the conventional boundaries of Information and Communication Technology (ICT) and ventures into uncharted areas. FET-funded projects increasingly rely on fresh synergies, cross-pollination and convergence with different scientific disciplines and with the arts and humanities. This trans-disciplinary and high-risk research requires new attitudes and novel organisational models in research and education. The multidisciplinary creative process that is at the heart of future and emerging technologies is a constant challenge to conventional academic boundaries. Further details are on the FET website: http://cordis.europa.eu/fp7/ict/programme/fet_en.html
FET09 in Prague in April 2009 was co-organised by the European Commission's Future and Emerging Technologies (FET) funding scheme, the Czech Academy of Sciences and the Czech Technical University in Prague. More detailed information on FET09, beyond what could be covered in this report, can be found on the conference website at ec.europa.eu/fet09.
HIGHLIGHTS Point of view of the authors of this report
However impressed we are with the remarkable achievements of modern science and technology (and the 2009 FET conference amply demonstrated a wide range of amazing advances, from neurofunctional materials to handheld quantum random number generators), even more remarkable is how much we have yet to learn, and the surprising directions modern research is following in order to learn it. Any lowly bacterium or other single-celled organism routinely accomplishes feats of signalling, control, sensing and navigation that go beyond what our own technology can do, even with our most powerful supercomputers. Sophisticated plants and higher animals do much more. One of the most prominent themes running through the FET meeting was how important biology can be as a source of inspiration. This is evident in numerous FET research threads, including new technology that exploits the properties of DNA to build self-organizing nanoscale structures (a tiny box with its own lid, opened by a DNA key, in one recent example) and to carry out computing at the nano-level in an entirely new way. It's evident in the radical transformation of robotics from a field dominated by visions of disembodied and platform-free intelligence to one intimately fascinated with the physical embodiment of intelligent agents, and the role such embodiment plays in producing adaptive intelligence in biology. It's evident as well in numerous avenues for designing bio-inspired sensors, motors and other artefacts, as well as in the movement to incorporate human skills, especially our ability for pattern recognition, into systems for finding meaning in vast quantities of data. For several decades, advanced technology seemed to be moving away from biology, building a world of plastic, steel and semiconductor devices, all clean and sharply geometrical. Yet science has been awakened again to nature's richness, and ICT now seems to be moving toward messier biological forms. Future devices may be grown rather than manufactured, be more "wet" and disordered than clean, and have the means, like organisms, to manage their own energy supplies and to heal their own problems. It is ironic that technology, often seen as set apart from nature and even acting against it, is now progressively advancing by learning to copy nature's own secrets.

Mark Buchanan
Having watched EU research policy over many years, FET always seemed to me the Commission's crazy (but valid) ideas department, undertaking creditable science but not offering much return in the short run. Consequently, policymakers, with their notoriously short time horizons, tended to keep their distance.

The landmark FET09 Conference in Prague swept away such perceptions and cast new light on this much-overlooked corner of ICT research activity. The ‘high-risk visionary research’ track undertaken by FET has consistently delivered inspirational and high-quality science. Its results have fed through into ‘mainstream’ ICT research in a number of areas. The fact that Europe leads the world in quantum computing, for instance, and is at the forefront of brain-computer interaction (BCI) is largely down to the support provided by FET. These successes could not have been foreseen at the time the decisions were made.

Prague also showed that multi-disciplinary research has come of age. Technological progress is increasing exponentially (in part as a result of the enabling effects of ICT) and in science the pace of advance is now so rapid that the boundaries between disciplines are increasingly difficult to discern. Hence, multi-disciplinary capabilities are essential for European research to be world class. This has huge implications for how science is managed, for instance the priorities of funding bodies and the structure of scientific careers. To succeed through to 2030 and beyond, Europe has to invest more in this multidisciplinary ICT-led research. While it might feel as if we’re reaching saturation, in fact the digital revolution is accelerating rather than slowing down. The next wave will come not from ‘pure ICT’ but from exploiting cross-border innovations, such as BCI. Europe’s policymakers have recognised this and in Prague unveiled a new strategy that puts visionary high-risk research centre-stage in Europe’s ICT research policy. With this major commitment to Europe’s effort in this area, ICT researchers are assured of exciting times ahead.

Mike Sharpe

Anybody brought up on the futuristic comics of the second half of the 20th century could be forgiven for being disappointed by the progress of robotics. The promise was that by the 21st century we would have robots washing our cars, cooking our meals and manicuring our nails. Instead, even our most advanced robots have trouble running, skipping and jumping, tasks that wouldn't trouble a five-year-old. Where did it all go wrong?

The answer is that we've been doing robotics all wrong. The conventional approach is based on our thinking about our own bodies, in which a single centralised brain controls every interaction and aspect of movement. But roboticists have found to their cost that this doesn't work. Centralised control systems simply cannot cope with the huge uncertainties that the real world throws up: the uneven ground, the unexpected steps and the changing lighting. Some other system of control is needed. This realisation has caused engineers, physicists and biologists to look a little more closely at the way we humans carry out complex tasks with our huge centralised brains. And what they've found is truly astounding. Much of what we do is not controlled directly by the brain after all. Instead our brain seems to outsource all kinds of intelligence to the material properties of our bones, muscles and ligaments, which constantly alter their shape and stiffness to cope with the changing conditions in the real world. This is a new kind of reckoning, called morphological computation, that roboticists now want to exploit in the design of the next generation of autonomous machines. Just how we are beginning to understand this so-called “embodied intelligence” was discussed at FET09 in the session of that name. Today the intersection between robotics, materials science and neurobiology is among the most exciting and surprising in science. These concepts are forcing us to re-examine not just our relationship with the world around us but what it means to be human. Even the science fiction writers of the 20th century would have been hard pressed to dream that up.

Justin Mullins
OUT OF YOUR BODY Your sense of ownership over your own body may not be as strong as you think.
Henrik Ehrsson experiments with a subject to produce an out-of-body sensation
Here's a question that philosophers have pondered for centuries: “How do I know this is my body? How do I know I'm not sitting in the next room or upstairs?” Despite their long attachment to the question, philosophers have little to offer in the way of answers. But in the last few years, neuroscientists have made dramatic leaps in understanding why people have a sense of ownership over their own bodies but not over other people's. They've gained this insight by learning to trick people into owning limbs that are patently not their own. In his keynote speech, the neuroscientist Henrik Ehrsson explained how he and colleagues at the Karolinska Institute in Stockholm, Sweden and elsewhere have carried out these tricks, and what they've learnt about the way our brains work in the process. The idea of changing body ownership may seem alien to most people, but it is a common problem among people who have had a stroke: some 10 per cent of victims have trouble identifying their own limbs. Their experiences and other evidence have led neuroscientists to propose that individuals build up a model of body position in space by comparing the information they receive from various senses, such as skin sensations, muscles and joints, as well as the eyes. The brain combines these inputs to build a model of exactly where the limbs are. What Ehrsson and his colleagues have done is work out how to interfere with the way the brain builds its models and, in doing so, to change the way individuals feel ownership over their limbs. Ehrsson began by attempting to fool the brain into thinking that it owned a rubber hand instead of the real thing. He set up an experiment in which subjects look at a rubber hand while their real hand is hidden from them behind a screen. With the subject watching the rubber hand, Ehrsson pokes it while prodding the real but hidden hand at exactly the same time.
Henrik Ehrsson: “How do I know this is my body? How do I know I'm not sitting in the audience?”
“At first there is no response from the participants. They're thinking: this is ridiculous—it's just a rubber hand,” he says. But after 15 seconds or so something happens in most participants' minds and they begin to feel that the rubber hand is theirs after all. The brain is somehow fooled by the coincidence of the sensation on their real hand and the visual evidence that the rubber hand is being touched. Owning a rubber hand produces a strange sensation, says Ehrsson: “If you move your finger, you are very surprised that the rubber hand doesn't move too.” To determine which parts of the brain are involved in ownership, Ehrsson has performed this experiment on participants in an fMRI machine that can see which parts of the brain are working during specific tasks. “In this way, you can find activity in the brain that reflects ownership,” he says. How strong is this sense of ownership? To find out, Ehrsson asked participants to watch while he threatened their rubber hand with a syringe. Sure enough, he found increased activity in the areas of the brain related to pain anticipation. And the greater the feeling of ownership, the greater this threat response, which is significant because it is an unconscious reaction that cannot be consciously suppressed. From experiments like these, neuroscientists now have a preliminary model of body ownership. The process begins with the brain processing visual and tactile signals in its primary sensory areas and then integrating these signals in a part of the brain called the posterior parietal cortex. The motor areas of the brain then help to calibrate the sense of position. Finally, the premotor cortex recognises a match between the visual, tactile and positional signals, which triggers a sense of ownership in the subject. That, in turn, causes changes in other brain systems, such as the emotions the subject associates with a limb, which are responsible for producing the threat response.
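Ehrsson presented no computational model at the talk, but multisensory integration of this kind is commonly formalised as Bayesian cue combination. The following is a generic textbook-style sketch with invented numbers, not the model used in these studies: two noisy Gaussian position estimates (vision of the stroked rubber hand, felt position of the real hand) are fused by precision weighting, pulling the felt hand position toward the rubber hand.

```python
def fuse_gaussian_cues(mu_v, var_v, mu_t, var_t):
    """Precision-weighted fusion of two noisy position estimates.

    A standard cue-combination account of multisensory integration
    (a generic illustration, not Ehrsson's own formulation): each cue
    is a Gaussian likelihood, and the fused estimate weights each cue
    by its reliability (inverse variance).
    """
    precision_v, precision_t = 1.0 / var_v, 1.0 / var_t
    w_v = precision_v / (precision_v + precision_t)  # reliability weight of vision
    mu = w_v * mu_v + (1.0 - w_v) * mu_t             # fused position estimate
    var = 1.0 / (precision_v + precision_t)          # fused uncertainty (smaller than either cue)
    return mu, var

# Invented numbers: vision locates the stroked hand at the rubber hand
# (15 cm to the left, low noise); proprioception says 0 cm, higher noise.
mu, var = fuse_gaussian_cues(mu_v=-15.0, var_v=1.0, mu_t=0.0, var_t=4.0)
print(f"felt hand position: {mu:.1f} cm (variance {var:.2f})")
# -> -12.0 cm: the felt position is drawn toward the rubber hand, as in the illusion.
```

On this reading, the synchronous stroking is what licenses the brain to fuse the two cues at all; asynchronous stroking, which abolishes the illusion, corresponds to treating them as unrelated signals.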
That's all very well for a hand, but does it work for full body ownership? Ehrsson has begun to investigate this question by getting subjects to observe themselves from a different perspective. "Ideally we'd like to surgically remove people's eyes and move them around the room," he jokes. Instead he uses a stereo camera placed elsewhere in the room that feeds images to a head-mounted display worn by the subject, giving the illusion of a different perspective. Subjects report that the sensation of looking at themselves from a different perspective is weird, but not like an out-of-body experience. The visual sense alone cannot trigger such a change; the brain needs a second sense to confirm what the eyes are seeing. Ehrsson provides this confirmation by jabbing an object towards and below the stereo camera while at the same time jabbing the subject’s chest. After a few seconds, this combination of a visual change in perspective and the sensation on the subject's chest begins to induce an out-of-body experience. The same technique can even give people a sense that they own an entirely different body. In this case, Ehrsson places the stereo camera on the head of a manikin, looking down towards its chest and feet. The subject sees the manikin's body through the head-mounted display but, again, the view by itself does not induce the feeling of ownership over the manikin's body. But if Ehrsson pokes the manikin’s torso while at the same time prodding the subject’s body, then in a few seconds most people's brains are fooled. A man can take on the ownership of a female body and vice versa.
Ehrsson now wants to know how far the sense of body ownership can be stretched. “Can we take any object such as a car or a table and fool your brain into thinking that it's your body?” he asks. He and his team have tried replacing the manikin with a table in their experiments, but a sense of ownership does not emerge. It seems that the brain must own a human-shaped body. “You could probably become a chimp but not a lobster,” he says. This kind of work is more than an entertaining illusion. Ehrsson says it could be useful for better understanding various neurological disorders, such as schizophrenia, in which there appears to be a breakdown of body ownership in some patients. It could also give virtual reality users a sense of ownership over the avatars that represent them in online worlds, and help them to better control telerobotic machines.
But the most immediate application is to give amputees a sense of ownership of a prosthetic limb. The difficulty here is that to produce a sense of ownership, the real limb has to be touched at the same time as the artificial limb: an obvious problem for amputees. However, many amputees have a sense of ownership of a limb that isn’t there, a so-called phantom limb. These people often experience the sensation that their phantom limb is being touched when parts of their stump are stimulated. Ehrsson has begun a project in which he has been able to map the phantom limb to the stump and then to carry out his limb ownership experiments by touching the relevant parts of this map. “In our tests so far, we've found this can be hugely useful to amputees.”
BLENDED REALITY An experimental mixed reality game allows players to explore the past, present and future of their city.

If you happen to be walking through the central Old Town in Cologne and come across a number of researchers helping a spaceship to land, don't be alarmed, even if you can't see the spaceship yourself. You've stumbled across researchers from the Fraunhofer Institute for Applied Information Technology near Bonn, playing a game called TimeWarp. This is a mixed reality game in which the players’ positions within the city are tracked using a GPS receiver and an inertial sensor. A portable computer then gives them additional information about the environment they are in. In TimeWarp, the players can travel in time by physically walking through virtual time portals that the computer superimposes on a display of the city. The portals lead to Roman times, the Middle Ages and the Renaissance, as well as into the future. The game involves hunting for elves in these various periods and bringing them home.

Information from a virtual world superimposed on a real scene
The game is a testbed for mixed reality technologies that combine information from the real world with additional information from a virtual world. The team behind the project is investigating the kinds of challenges the technology comes up against when used in an urban environment. It is also exploring the kinds of features that make a mobile mixed reality game successful. The game is just one part of a project called IPCity that aims to develop technologies that will enable city dwellers to explore the past, present and future of their environments and to discover new aspects of the places in which they live.
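As a rough illustration of the kind of logic involved, the sketch below implements a proximity trigger of the sort a mixed-reality client might run over the GPS track; the coordinates, radius and era names are invented, not taken from TimeWarp's actual implementation.

```python
import math

# Hypothetical portal table: (latitude, longitude, era rendered on entry).
PORTALS = [
    (50.9375, 6.9603, "Roman"),
    (50.9380, 6.9610, "Middle Ages"),
    (50.9371, 6.9595, "Future"),
]
TRIGGER_RADIUS_M = 3.0  # how close the player must walk to "step through"

def distance_m(lat1, lon1, lat2, lon2):
    """Equirectangular approximation, adequate over a few city blocks."""
    r = 6371000.0
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

def check_portals(lat, lon, current_era):
    """Return the new era if the tracked position crosses a portal."""
    for plat, plon, era in PORTALS:
        if distance_m(lat, lon, plat, plon) < TRIGGER_RADIUS_M:
            return era
    return current_era

print(check_portals(50.93751, 6.96032, "Present"))
# -> "Roman": the tracked player walked into the first portal's radius.
```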
OF MIND AND BODY How the brain outsources a certain kind of intelligence to the body.
The growing evidence that the brain outsources a certain kind of intelligence to the body could have huge implications for robot design.
Imagine the simple act of jumping off a wall and landing on the ground. In that moment, your body exercises an extraordinary degree of control over your movement as the muscles and ligaments around your knee absorb the motion with a rapid damped oscillation which takes place in a matter of milliseconds. But the curious thing about this action is that it happens about a thousand times faster than any neural circuit could cope with. That raises an interesting question: if the action takes place at this speed, then neither your brain nor any part of your central nervous system could be involved in the process. What, then, is in control? A growing number of physicists, engineers and roboticists believe the answer lies in the nature of our bodies and the way they interact with the environment. Led by Rolf Pfeifer at the Artificial Intelligence Laboratory at the University of Zurich, they say our bodies have an inherent “intelligence” that allows them to deal with these circumstances without the need for any interference from the brain. The idea is that the materials in the knee and the way they work together embody a kind of intelligence that ultimately controls the movement. Roboticists have been quick to pounce on “embodied intelligence” as the panacea that can solve the hugely difficult problem of controlling advanced robots. The idea is that the future of robotic control is not centralised processing power but a kind of distributed intelligence in which well chosen materials and clever design do the work instead. The hope is that embodied intelligence will lead to a new generation of robots that can handle the uncertainties of real environments as easily as living organisms do now.
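The knee example can be caricatured numerically. In the toy simulation below (parameters invented for illustration, not physiological measurements), a landing mass is caught by a passive spring-damper "leg": the rapid damped oscillation is produced entirely by the material constants, with no controller in the loop.

```python
# Toy landing model: a body mass caught by a passive spring-damper "leg".
m = 70.0     # body mass, kg
k = 30000.0  # leg stiffness, N/m (invented)
c = 2000.0   # damping, N*s/m (invented)
g = 9.81

x, v = 0.0, -3.0  # displacement from spring rest length (m), landing velocity (m/s)
dt = 1e-4         # 0.1 ms steps: the passive dynamics are fast
for step in range(3001):  # simulate 0.3 s
    a = (-k * x - c * v) / m - g  # spring + damper + gravity; no brain in the loop
    v += a * dt
    x += v * dt
    if step % 500 == 0:
        print(f"t={step * dt * 1000:3.0f} ms  displacement={x * 100:7.2f} cm")
# The impact decays as a rapidly damped oscillation: the "computation"
# is done by the material constants k and c, not by neural feedback.
```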
That will be a huge shift from the way robots have been designed in the past. At the FET09 session on embodied intelligence, Pfeifer pointed out that industrial robots have long helped build everything from cars to microchips, but operate in environments from which almost every element of uncertainty has been removed. The tasks they perform are always the same: they repeat their patterns of movement continuously in an environment in which every potential variable, from the temperature to the lighting, is carefully controlled. In these conditions of total certainty, it makes sense to have a centralised control processor which monitors every joint, sensor and actuator and uses this information to make straightforward decisions about what to do next. But in the real world, uncertainty dominates, and this requires an entirely different kind of control strategy. Robot engineers have found to their cost that in these circumstances, centralised control makes no sense. Even the simple act of jumping off a wall is beyond the ability of the most advanced robots today. So how do the organisms around us cope? One clue comes from the study of insect locomotion. Scientists have long known that the neural circuitry underlying insect gait has little central control. This puzzled them for many years, until they began to realise that perhaps central control was not necessary. Instead, imagine what happens when one of an insect's six legs begins to push. As the force it applies increases, the other legs begin to bend, changing the angle of the leg joints. All that an insect needs to do to determine the state of
the other legs is measure the angle of the leg joints. It turns out that insects have just these kinds of angle sensors, which feed data into the wiring that controls gait. The information from these sensors provides a simple feedback control mechanism for an otherwise mechanical six-legged gait. It is with this kind of “embodied intelligence” that insects manage their locomotion over hugely varied terrain with little central control. An even more sophisticated phenomenon occurs in humans. Instead of the brain controlling the behaviour of the leg muscles during a jump, the stiffness of the muscles dynamically changes to absorb the impact. We've long known that. What's new is the understanding that this happens with little input from the brain, which effectively outsources control to the morphology of the leg (its centre of mass, the length of its bones and the shape of its foot) as well as to the material properties of the muscles and ligaments. In a sense, these materials together perform a kind of computation to control what is going on. The new breed of robotics engineers call this “morphological computation”. The implications of this kind of thinking are profound. Accept the existence of morphological computation and you're forced to accept that a certain kind of intelligence lies outside the brain. And if that's the case, then it's just a short step to accepting that cognition comes not just from the brain, but emerges from the relationship between the brain, the body and the environment. That explains one mystery that neuroscientists have puzzled over in recent years: why it's almost impossible to determine the function of neural circuitry without looking at the environment in which it is operating. “We have to know how the brain is embedded into the physical system, and the morphological and material properties of this system,” says Pfeifer.
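The decentralised insect strategy is easy to caricature in code. In the sketch below (thresholds, rates and coupling are all invented, not taken from any real insect or published controller), each leg switches between pushing and swinging purely on its own joint-angle reading; the only "communication" between legs is the mechanical load the others transmit through the body.

```python
# Caricature of decentralised insect gait: each leg decides to push or
# swing from its own joint-angle sensor alone; no central controller.
SWING_TRIGGER = 60.0    # degrees: joint rotated fully back, leg must swing
STANCE_TRIGGER = -30.0  # degrees: leg swung fully forward, push again

class Leg:
    def __init__(self, angle):
        self.angle = angle    # joint angle reported by the angle sensor
        self.mode = "stance"  # "stance" = pushing, "swing" = repositioning

    def step(self, mechanical_load):
        if self.mode == "stance":
            self.angle += 5.0 + mechanical_load  # pushing rotates the joint back
            if self.angle > SWING_TRIGGER:
                self.mode = "swing"              # local rule: no brain consulted
        else:
            self.angle -= 15.0                   # swing the leg forward quickly
            if self.angle < STANCE_TRIGGER:
                self.mode = "stance"

legs = [Leg(a) for a in (-30, 0, 30, -15, 15, 45)]  # six legs, staggered phases
for t in range(8):
    pushing = sum(leg.mode == "stance" for leg in legs)
    for leg in legs:
        leg.step(mechanical_load=0.5 * pushing)  # other pushing legs bend this joint
    print(t, [leg.mode[:2] for leg in legs])     # "st"/"sw" pattern per leg
```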
For example, Pfeifer talks about a mechanical fish that can do nothing more than wiggle its tail in various ways. It's tempting to think that such a fish would have a very limited ability to navigate in a three-dimensional tank without control over another degree of freedom, such as its buoyancy. And yet this fish can reach any point in the tank. Here's how it works. The fish is designed so that it leans to one side when wiggling its tail, which causes it to move in an upward spiral. At the same time, the fish is slightly negatively buoyant, so that if it does nothing, it will sink. Combining the two movements, spiralling upwards or sinking straight down, allows it to reach any point in the tank; this design is all it needs, though it would be hard to deduce this without a detailed knowledge of the fish and its environment. What's clear from this example is that clever design which exploits embodied intelligence is going to be crucial for the next generation of robots. These designs are going to work better if they exploit the laws of physics in clever and imaginative ways. For the most part, we know how these laws work, so why not put them to good use? As Pfeifer puts it: “The design may be hard but the physics is for free.” Beyond that, physical interaction induces patterns of sensory stimulation that turn out to be crucial. For example, the act of grasping a mug stimulates the visual system watching the action, the proprioceptive system which monitors the position of the limbs in space, and the temperature and pressure sensors in the hand. Pfeifer says the correlation between these patterns during such an action makes the signals very easy to process. “This demonstrates the tight interaction of the physical interaction and the information processing of the brain,” he says. And it is this interaction that lies at the heart of the notion of embodiment.
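The fish's reachability argument can be checked with a toy controller built from just the two behaviours described above. The rise and sink rates below are invented for illustration; horizontal position follows from where along its upward spiral the fish stops wiggling.

```python
# The fish has only two behaviours: "wiggle" (rise in a spiral) and
# "rest" (sink, being slightly negatively buoyant). Alternating them
# reaches any target depth. Rates are invented for illustration.
RISE_RATE = 0.04  # m/s while wiggling its tail (spiralling upward)
SINK_RATE = 0.02  # m/s while resting (slight negative buoyancy)

def swim_to_depth(depth, target, dt=0.5, tol=0.05):
    """Bang-bang depth control: wiggle if too deep, rest if too shallow."""
    t = 0.0
    while abs(depth - target) > tol:
        if depth > target:
            depth -= RISE_RATE * dt  # too deep: wiggle and spiral upward
        else:
            depth += SINK_RATE * dt  # too shallow: do nothing and sink
        t += dt
    return t

print(f"reached target depth in {swim_to_depth(depth=2.0, target=0.5):.0f} s")
```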
WIRED FOR MOVEMENT Neuroscientists are using robots to test their ideas about evolution.
Inside the exhibition hall, amid the robots, virtual reality demonstrations and 21st century games, a small group gathers to watch an extraordinary demonstration. At the centre of the group, a researcher holds what appears to be a snake-like toy, made of a series of articulated segments with a head at one end and a tail at the other. This is no toy but a robot designed to swim like a lamprey. As if on cue, it begins to writhe in its owner’s hands, to the general amusement of the crowd. On land it seems rather clunky, but a video shows how the robot behaves in water, gliding through the liquid with a lamprey’s characteristic wriggle. Another video nearby shows a robotic salamander swimming through the water like a lamprey and then walking out onto the beach using its legs. Robots that can swim and walk are impressive, but there is more to these ones than the ability to mimic animal movement. These robots are test beds for a new kind of neuroscience. The group that built them is testing ideas about robot locomotion by simulating the kind of neural wiring found in animals: if evolution has already worked out how to walk, why not exploit it? This neuroscientific approach has another benefit. By trying out different combinations of wiring and actuators, researchers can study how neural circuits must have changed as living things evolved from swimmers to walkers. These robots are laboratories of evolution.

Salamander robot from the Lampetra project
Evolutionary biologists have long been interested in creatures such as the lamprey and the salamander because they represent closely related steps on the evolutionary ladder. The swimming action of the lamprey is clearly the forerunner of the walking motion of the salamander, which in turn preceded the gait of the mammals from which humans evolved. If evolution works how we think it does, there must be something of the lamprey's gait in us all. The hope is that robo-neuroscience can tease apart how one led to the other. Researchers know that a salamander’s gait is controlled by neural wiring that stretches the length of the spinal cord, and that this generates a pattern of signals determining how its body moves. But one puzzle has been how this simple central pattern generator can control both swimming and walking. In 2007, scientists from the Ecole Polytechnique Fédérale de Lausanne in Switzerland built the salamander robot in the video to study exactly this question. The robot and the work that stemmed from it generated a number of important insights into salamander locomotion. The team knew that the frequencies of movement the salamander uses for swimming are higher than those it uses for walking. So the robot was specifically designed so that the oscillations of its body during swimming were faster than the oscillations of the limb actuators that it used for walking.
This turned out to be crucial. What the central pattern generator does is produce a signal of a specific frequency. When that frequency is low, the limb actuators automatically respond and begin to walk. But when the frequency is high, the limb actuators cannot keep up and stop. Instead, the body oscillates, producing the salamander's characteristic swimming action. So the central pattern generator can control both swimming and walking by a simple change in output frequency. To cap off the research, the team was able to find biological data from real salamanders supporting the idea that the limbs respond to oscillations at a lower frequency than the body. This work has important implications for evolution. One important question is how aquatic animals were able to make the transition to terrestrial life. How could a fish walk? The robotic salamander shows how the central pattern generator for a walking animal can easily be constructed from the pattern generators of a swimmer. In evolutionary terms, it's just a small step from swimming to walking, if you're a lamprey. But the collaboration between neuroscience and robotics goes both ways, says Paolo Dario, a mechanical engineer and robotics expert at the Scuola Superiore Sant’Anna in Pisa, Italy. Initially, the robots are used to implement theoretical models of neural mechanisms. Then the results of these experiments are used to fine-tune the models and also to help redesign the robots for real-world applications.
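The gait switch boils down to one scalar command, which makes it easy to sketch. The snippet below illustrates the principle just described, not the robot's actual controller; the cutoff frequency and drive values are invented.

```python
# Minimal sketch of the salamander gait switch: one central pattern
# generator drives both body and limb oscillators, but the limbs can
# only follow drive frequencies below a cutoff (value invented).
LIMB_CUTOFF_HZ = 1.0  # limbs saturate and stop above this frequency

def gait(drive_hz):
    body = f"body undulates at {drive_hz:.1f} Hz"
    if drive_hz <= LIMB_CUTOFF_HZ:
        return f"WALK: limbs step at {drive_hz:.1f} Hz; {body}"
    return f"SWIM: limbs stop (cannot follow {drive_hz:.1f} Hz); {body}"

for d in (0.5, 0.9, 1.5, 2.5):  # ramping the single CPG drive signal
    print(gait(d))
# One scalar command (drive frequency) selects the whole gait:
# low drive -> walking, high drive -> swimming.
```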
The robotic lamprey, for example, is a direct mechanical descendant of the salamander robot (in a topsy-turvy inversion of evolution). It is built with materials and actuators that behave more like those in the real animals, and it has more advanced on-board processing that can crunch the data from various built-in sensors. It is designed to take locomotion to a new level by using the input from sensors for tasks such as obstacle avoidance. Dario has studied the locomotion of many living things and how robots can mimic it. For example, he has built robots to study the role of friction in the locomotion of earthworms, and he has used the results to redesign endoscopes that push themselves through the intestines like an earthworm. The hope is that these machines will allow pain-free colonoscopies, he says. He also hopes to exploit the locomotive techniques of legged insects to build wireless endoscopic capsules that clamber through the digestive system, taking pictures and samples, and broadcasting the results to an external receiver. “This is a very attractive area: discovering the basic principles underlying the functioning of living beings,” Dario said during the session on embodied intelligence. The relationship between humans, machines and computers is perhaps most advanced in Japan, where the country's ageing population is driving the development of robots that can enhance human performance. Kenji Suzuki at the University of Tsukuba in Japan is developing technology that can use human neural signals to control wearable machines that walk, lift and grasp. These machines could have a profound effect on humanity. "The nature of the human mind is largely determined by the form of the human body," says Suzuki. So enhancing the body's capability with powerful, intelligent machines could have a significant influence on the mind. At his lab, he is already using wearable machines that use neural signals to enhance the walking movement of muscular dystrophy patients, stroke victims and people with lower limb paralysis.
Rat whiskers are an inspiration for new forms of sensing
NEW FRONTIERS FOR IST, ROBOTICS AND COGNITIVE NEUROSCIENCE The brains of living creatures are capable of performing highly complicated tasks: they show characteristics that are highly desirable for artificial systems, such as adaptability, learning and intelligent behaviour in changing environments under uncertainty. How can robotics and cognitive neuroscience merge to provide new insights into brain function, and potentially new developments in information and communication technologies or robotics technology? Alain Berthoz of the Collège de France addressed this question and presented recent research on higher brain functions, discoveries on the neural basis of emotion, and new developments in brain recording that pave the way for new types of interdisciplinary research. He showed how we are currently witnessing the explosion of a new field at the intersection of IST, robotics and neuroscience, one already being compared to a Galilean revolution that will radically change our current perspective. This revolution carries with it a new science as well as new technologies, which we are just beginning to perceive. Brain-computer and brain-robot interfacing is a challenge which will be fruitful both for neuroscience and for robotics. The new look at the brain as a real-world agent will lead to fundamentally new principles for designing robots. It requires neuroscientists to investigate not so much isolated brain processes as the functional brain in relation to the living body and its actions in the real world. Ambitious multidisciplinary approaches are needed to meet the challenges of modelling agents that navigate real-world obstacles and that exhibit realistic capabilities of anticipating and predicting the real-world implications of their planned actions. This implies not only actions in the physical world but also social interactions and their affective dimension, which are equally important for agents capable of interacting with others, reacting to significant emotional messages and engaging in joint action. With the notion that, unlike a computer, the brain functions as a predictive, probabilistic machine, exhibiting task-dependent and dynamic synchronisation and coherence of neuronal assemblies across different brain regions, novel avenues for interdisciplinarity are now open. This Bayesian approach to understanding functional brain processes, in which a priori knowledge influences perception and the prediction of action is central to cognitive skills, is a much more promising one for novel ICT architectures.
These kinds of developments raise the question of what the ultimate robot would be like. During the session on the ultimate robot, Tom Ziemke, of the University of Skövde in Sweden, pointed out that typical robots today are still built with a mechanical body and a computational mind. But it looks as if this will have to change. He points to a symbiotic robot called EcoBot, built by the Bristol Robotics Laboratory in the UK, that has its own artificial metabolism made from microbial fuel cells. The next stage of this project is to programme the robot to regulate its own internal environment, in the same way as living things do. The robot would control its temperature, how much water it drinks and the nutrients it adds to its bioreactor, to achieve a balanced control of metabolism, a process called homeostasis. What may emerge from this is a robot that experiences the need to perform certain actions at certain times. In humans, he says, we call this motivation. That could be hugely significant, says Ziemke. He believes that real emotion, cognition and self-consciousness are based on the multiple levels of homeostatic bioregulation that living systems have to undergo. If he is right and these factors turn out to be the basis of emotion and cognition, then the laboratories for testing these ideas will be the robots of the not too distant future.
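How regulation errors might become "motivations" can be sketched in a few lines. Everything below is hypothetical (the variables, setpoints and actions are invented, and this is not how EcoBot is programmed); it only illustrates homeostatic error selecting the next action.

```python
# Toy homeostat in the spirit of the discussion above: internal variables
# are regulated toward setpoints, and the regulation errors act as
# "motivations" that pick the robot's next action. All values invented.
state = {"temperature": 31.0, "water": 0.4, "nutrients": 0.2}
setpoints = {"temperature": 37.0, "water": 0.8, "nutrients": 0.7}
actions = {"temperature": "seek warmth",
           "water": "drink",
           "nutrients": "feed the bioreactor"}

def most_urgent_need(state, setpoints):
    """Pick the internal variable proportionally furthest from its setpoint."""
    errors = {k: abs(setpoints[k] - v) / setpoints[k] for k, v in state.items()}
    return max(errors, key=errors.get)

need = most_urgent_need(state, setpoints)
print(f"motivation: {need} -> action: {actions[need]}")
# -> nutrients are furthest from their setpoint, so the robot "feels"
#    the need to feed its bioreactor before doing anything else.
```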
RIDING THE BRAIN WAVES Small businesses are benefiting from Europe’s investment in high-risk research in ICT.
“The call came out of the blue. At the time we didn’t know anything about FET or European Commission projects.” So says Christoph Guger, Managing Director of g.tec, an Austrian SME, recalling how he first got involved in European research programmes.

The call, to Christoph’s office in January 2002, came from Mel Slater, a researcher at University College London. “Would g.tec like to participate in a European project looking at brain-computer interaction?”, Slater wanted to know. Mr Guger said yes and now, seven years later, the company is involved in five European projects which are yielding a rich stream of results for his fast-growing business.

The largest of these is Presencia, a project that aims to combine brain-computer technology with virtual reality. In another project, called Smarthome4All, g.tec and its European partners are looking at how to use brain-computer interaction (BCI) systems to control smart homes. “We believe it will be possible to use brain signals to do everyday tasks, such as switch on a TV or open doors”, explains Guger. “This will be especially useful for people with disabilities.” Another of g.tec’s European projects is developing a system to help in the rehabilitation of patients who have had strokes.

FET: a breeding ground for business success

g.tec was formed in 1999, when Christoph Guger and his business partner were studying for their doctorates at the Technical University in Graz. Their research involved the development of an amplifier for bio-signals coming from the brain. It turned out to be just what neuroscientists needed, and the orders began to flow in, the first from Oxford University, UK. g.tec won an Austrian research grant in 2001 and soon after came the call from UCL. The company has never looked back and has now sold its systems in 55 countries. “Japan and the United States are our main markets”, explains Mr Guger, “followed by Europe.”

The key to g.tec’s success is that its bio-recording system can be customised. “We have a series of standard products but have made them flexible so they can be adapted quickly”, explains Mr Guger. g.tec’s main customers are neuroscientists in universities, hospitals or medical schools. This is specialised and expensive equipment, so the sales process can take a long time.

New developments are underway, such as a spike recording system that will allow clear signals to be recorded from deep inside an animal’s brain, opening up a new field of applications. One of the main benefits of g.tec’s involvement in FET has been access to specialist facilities within universities for testing its products. The company has used virtual reality systems, known as CAVEs, at Barcelona and UCL to test its brain sensors. “It’s difficult to decide what features a project should have without scientific applications”, Christoph Guger explains, “so it’s essential to have test beds for equipment. Ten years ago a CAVE system cost almost €1m. It’s impossible for an SME to buy that sort of technology.” Another key outcome has been contribution to scientific publications, which can be difficult without high-quality partners. They provide recognition and visibility within the target market.
Reaping the rewards

The company’s success has brought wide recognition. It won its first prize, the Austrian prize for the most innovative company, in 2001, with just three employees. Since then it has received a series of awards, including the European ICT Prize 2007, the Fast Forward Award 2008 and the Econovius 2009, Austria’s highest prize for innovative companies. “One prize could be a mistake, but several lead to visibility and prestige, which is important for small companies”, says Mr Guger. Following its participation in the FET09 exhibition, g.tec has been featured on Germany’s RTL channel and on a live show on SternTV. “It’s incredible for an Austrian to be watched by 60 million people – in my country there are only 8 million in total!”, Guger exclaimed.

Reaching for the stars

Another visionary SME to have benefited from FET backing, operating in a similar field, is Starlab. The Barcelona-based company’s first product is Enobio, a wireless electrophysiology system with applications in the study of epilepsy and sleep disorders. For instance, it allows sleep researchers to study patients in the comfort of their own homes and then upload the data to the doctor for analysis. Another application is biometry: using brain signals to recognise people.
“Brain monitoring is the next frontier”, says Starlab’s Business Development Manager, Ana Maiques. “FET allows us to address real-world problems within a highly stimulating multi-disciplinary environment. While the research is long-term, entrepreneurs can often see short-term applications which they can spin off into products.” Starlab is currently involved in two FET projects. HIVE (Hyper Interaction Viability Experiments) aims to develop and test new technologies, based on non-invasive brain stimulation, for delivering information into the brain. This has a very long-term vision: to lay the foundations for computers to be able to interact directly with the human brain. Another project, PEACH, is bringing together researchers from diverse backgrounds around the study of presence: making virtual systems indistinguishable from physical reality. “FET has opened up important opportunities for us”, concludes g.tec’s Guger. “It allows us to have crazy ideas and to experiment.” His advice to other business owners is to give it a go: “It’s definitely worthwhile, provided you can bring the crazy research down to a product that you can sell.”
HOW BRAINS MAKE MUSIC
How the processes of making, hearing and responding to music are being increasingly understood from a cognitive perspective.
‘Music is auditory cheesecake, an exquisite confection crafted to tickle the sensitive spots of at least six of our mental faculties’, claimed cognitive scientist Steven Pinker in his 1997 book How the Mind Works. He went on: ‘Compared with language, vision, social reasoning, and physical know-how, music could vanish from our species and the rest of our lifestyle would be virtually unchanged. Music appears to be a pure pleasure technology, a cocktail of recreational drugs that we ingest through the ear to stimulate a mass of pleasure circuits at once.’

Pinker’s remarks provoked outrage. Some felt that he was relegating music to the status of a parasite free-riding on mental functions developed for much more ‘important’ tasks. It seemed as though the very dignity and value of music itself were at stake, and Pinker’s view that music-making is pure hedonism led some researchers to insist that, on the contrary, it exists for fundamental evolutionary reasons: that music has a ‘survival value’ which humans cannot do without.
That argument continues to rage. Yet Pinker’s comments highlighted how little we understand about music’s origins. His suggestion that music is strictly optional seems hard to square with the observation that all cultures seem to possess it (even ones that do not have a written language). It may be, however, that ultimately attempts to resolve the debate will hinge on being able to figure out how music is made in the brain, and specifically whether our brains have specialized modules for processing music and nothing else. Interest in the question of how brains make music has blossomed in the past decade or so, as attested by the success of recent popular books such as Dan Levitin’s This Is Your Brain on Music and Oliver Sacks’ Musicophilia. Psychological studies of how humans respond to musical stimuli have been pursued for many decades, but now they are being supplemented by insights from new brain-imaging technologies such as functional magnetic resonance imaging and positron-emission tomography (PET scanning). Such methods enable direct monitoring of the parts of the brain that are activated by hearing or playing music, making it possible to investigate how, for example, the processing of primarily acoustic properties such as pitch and rhythm interact with semantic and emotional centres.
These efforts still leave us with a very incomplete picture, and moreover one that remains largely biased towards the types and roles of music prevalent in Western culture. But we are slowly closing in on the important questions, and in the session on ‘Music and the Brain’, four speakers discussed how the processes of making, hearing and responding to music are being increasingly understood from a cognitive perspective. It has become clear that, as well as helping us understand music per se, studies of music cognition provide a unique window into the brain more generally. No other human activity seems to stimulate simultaneously so many different brain functions, and our neural responses to music may offer insights into the way the brain integrates any complex stimuli. There may also be valuable information to be gleaned about other brain functions such as language and visual processing or the operation of memory and motor functions. We may come to learn why music possesses powerful therapeutic value for people with neurodegenerative conditions or other brain dysfunctions.
Philip Ball argues that, while it is simply not known whether music has an intrinsic adaptive function in evolution or whether it is merely parasitic on other adaptive functions of auditory processing, there is good reason to believe that we are intrinsically musical beings. Music, he claims, cannot be eliminated from our cultures without changing our brains. “Music does not somehow emerge from acoustic physics based on relationships between sound frequencies”, says Ball. “It is instead a consequence of our phenomenal capacity for spotting patterns among events.” One of the difficulties in identifying how the brain processes music is that there are therefore few if any ‘universals’ in music: different cultures choose different (and sometimes variable) ratios between the acoustic frequencies of notes in scales and modes, for example. So a lot of our response to music depends on culturally specific learning.

Scales and modes from different cultural traditions: a, Western diatonic scales; b, some North Indian modes (arrows indicate tunings shifted from those implied by standard Western notation); c, the frequency divisions of the octave in two Javanese modes, compared with the Western diatonic major scale.
All the same, says Ball, there are rules to the way music is composed and heard. And “one of the most important of them”, he says, “is that the rules must be continually broken.” Ball stresses that making and listening to music is not a specialized activity that demands a great deal of training. We acquire many of the skills passively and unconsciously, simply by experiencing music in our daily environment. “Almost anyone can learn to appreciate music, and almost everyone can learn to perform it”, he says. “Most of us are in a sense already musical ‘experts’ without knowing it.”
To stand any chance of understanding music cognition, we need to break the questions down into simpler ones. What are musical notes and how do we decide which to use? Do tunes follow rules, and if so, what are they? How do we use more than one note at a time – that is, what is 'harmony', and what is 'discord'? How do we decode the incredibly complex sound signal that our ears register? Some of the hardest, but also the most important, questions lie beyond the realm of mere auditory processing – of turning sound into something intelligible. They are about turning it into something meaningful. How does music convey and elicit emotion, for example, and what is the relationship between music and language? What are composers and musicians trying to say? Can music in itself say anything at all? To the last of these questions, Gustav Mahler had this to say: "If a composer could say what he had to say in words he would not bother trying to say it in music."

The performance space of the Casa Paganini in Genoa

Antonio Camurri of the Casa Paganini-InfoMus International Research Centre in Genoa, Italy, and his collaborators have been attempting to understand the relationships between the movements of performers and the nature of the musical sentiments they aim to convey. Their broader goal is to obtain insights into what Camurri calls "social creativity": the ways in which groups of people engage in and embody a creative activity with empathy and emotion. One potential spin-off from these studies for information technologies is the possibility of endowing machines with similar 'social skills'. Music is an ideal vehicle for these explorations, and the Casa Paganini benefits from having a genuine performance space where music can be performed in a natural environment while being observed and recorded unobtrusively.

Camurri has used video recording and mathematical modelling to study the modes of non-verbal communication between performers, such as a string quartet, that lead to entrainment of movement and the conveyance of emotion to an audience. The interactions here can be analysed using a model of weakly coupled oscillators. The researchers have studied factors such as how visual information or the intended emotion of the music affects the movement and synchronization of the musicians.
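The weakly coupled oscillator picture can be made concrete in a few lines of code. The Python sketch below is a minimal illustration, not the Casa Paganini model itself: its frequencies and coupling strengths are invented. It integrates a small Kuramoto system and reports the standard order parameter, which acts as an entrainment score – close to 1 when the simulated 'players' lock together, lower when they drift apart.

import numpy as np

def entrainment(natural_freqs, coupling, dt=0.01, steps=5000, seed=0):
    # Kuramoto model: each oscillator is nudged toward the others' phases.
    rng = np.random.default_rng(seed)
    n = len(natural_freqs)
    theta = rng.uniform(0, 2 * np.pi, n)          # random starting phases
    for _ in range(steps):
        pairwise = np.sin(theta[None, :] - theta[:, None])
        theta += dt * (natural_freqs + coupling / n * pairwise.sum(axis=1))
    # Order parameter r in [0, 1]: the usual synchronisation measure.
    return abs(np.exp(1j * theta).mean())

freqs = np.array([0.95, 1.05, 1.00, 1.10])        # a 'quartet' of similar tempi
print("weak coupling  :", entrainment(freqs, coupling=0.01))  # lower r: drifting
print("strong coupling:", entrainment(freqs, coupling=1.00))  # r near 1: locked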
Stefan Koelsch of the University of Sussex in England uses brain-imaging techniques such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) to study how cognitive and emotional processing are enlisted when we hear music. He has focused in particular on how the syntactic structures of music, such as cadences (series of chords that signify the end of a phrase or piece), are processed, and whether this bears similarities to linguistic processing. The brain produces characteristic electrical signals (so-called event-related brain potentials) in response to syntactic incongruities in language, and some of these are also elicited by 'improper' cadences. While it is tempting to overstate the parallels between music and language (as the American composer Leonard Bernstein arguably did), the work of Koelsch and others has shown that they clearly do exist. This raises several questions about the neural resources involved: do language and music share some of the same mental pathways, or are they comparable but parallel systems? And what might this tell us about the evolution of music and language, and whether or how they were related?

That last question has been tackled by Jason Warren of University College London. On the strength of evidence drawn from his and others' research on music processing using brain imaging, Warren argues that music represents a symbolic mental code in a way analogous to language. He has obtained many insights from studying people in whom brain lesions have produced selective cognitive deficits in music processing. Brain imaging enables the identification of the affected regions – and thus, in general, the deficient processing functions – so that the role of these functions in processing particular aspects of music can be isolated. For example, one patient reported the loss of any emotional response to music and to the affective qualities in speech; music, he said, had come to sound "mechanical". Warren and his colleagues found that this deficit could be attributed to a lesion in the planum temporale and other specific areas of the brain. Warren says that people's response to music turns out to be a uniquely sensitive predictor of brain conditions that lead to a loss of the ability to 'read' emotional states from, say, facial expression. Some effects of brain damage can be bizarre, and bizarrely specific: in one clinical case, a man with right temporal lobe atrophy became obsessed with polka music, listening to it for 12-18 hours a day.
Warren suggested that music may have originated from animal calls as the primary mode of emotional expression, while language split off from it as the main vehicle for semantic communication. In other words, he says, music may represent "the brain's symbolic code for emotion." For now this remains just one of a host of hypotheses about why we have music – perhaps it was (as Charles Darwin believed) a means of sexual selection (the best 'singers' get more mates), or of promoting group cohesion, or of facilitating mother-to-infant communication, or of allowing calls and messages to be projected over greater distances. We may never know the answer. But thanks to studies in music cognition, we no longer need to resort to mere guesswork.
SAY IT WITH MUSIC Analysing the way musicians synchronise their body movements could reveal the secrets of non-verbal communication; interview with Antonio Camurri, scientific director of the InfoMus Lab at the University of Genoa.
Why do you study music?
I'm interested in social creativity: how groups of people empathise, how they signal to each other and how they become synchronised as a group. How that happens is important. Studying music and musicians can give an important insight into these questions.

Is it hard to bridge the gap between science and music?
My background is in computer engineering. I'm not a musician, but I have studied music and composition, so I can talk to musicians and understand them. I've also learned to talk to and understand directors, dancers and choreographers. We have an agreement with an opera house that allows us to invite all the string quartets who perform there to our labs. This allows us to work with the very best musicians in the finest quartets, such as the Arditti Quartet and the Quartetto di Cremona.
What kind of experiments do you do?
When they play together, musicians enter into a kind of resonance. They call it the swing or the groove or the raga, depending on their culture; artistically there are many terms. One thing we are interested in is entrainment, the coordination between the musicians. So in some of our experiments we try to measure the phase synchronisation between the musicians using a specific movement, such as the motion of the head: we measure the speed, position and attitude of the head and the upper part of the trunk. What we have found over many trials is a certain level of coordination, and this allows us to measure when it is higher or lower. Let's take a concrete example in which two violinists play a canon together. If, before they start, a psychologist induces a positive emotion in one of the violinists, then when they begin to play they have a higher degree of entrainment or synchronisation. If the psychologist induces a negative emotion, we've found that the level of entrainment is much lower.

How does a positive mood improve synchronisation?
This is something that I should leave to the psychologists. But I will say that when you smile, everyone smiles with you. And when you are sad, you are alone. But this is just my conjecture.

Presumably it is not just in music that this kind of synchronisation occurs.
No. These ideas can be applied to many other situations, such as negotiations between small groups of people. Music is purely non-verbal and emotional. It's simpler than spoken negotiation, where there are many other layers of interaction that make the analysis much more difficult. By studying only non-verbal, full-body gestures we can discover things that can be used in other fields.
What kind of applications are you thinking about?
In future, various internet applications such as social networks are likely to become more embodied, perhaps with avatars representing people in virtual worlds. In these situations, understanding how non-verbal communication works will be very important.
A brain music session at the Casa Paganini
MIND OVER MUSIC How the Multimodal Brain Orchestra works.
The orchestra sat quietly as a team of technicians hooked its members up to brainwave-monitoring machines. As the orchestra's brainwave traces began to appear on a giant display screen in the auditorium, the audience watched spellbound. Then, without a conventional instrument in sight, the performance began. This is the Multimodal Brain Orchestra, which premiered at the FET conference. Its multimedia show, Xmotion, is unlike anything ever performed before: a mixture of film, sound and data controlled entirely by the brainwaves and emotional states of the orchestra and its two conductors. The music consists of three tracks playing simultaneously. The first track is generated by a computerised "composition engine" that combines samples of music and sounds taken from a huge database. The engine's choice is determined by a set of parameters which, when varied, change the type of music it chooses, as well as each clip's duration and volume. These parameters are not set by the computer but by the heart rate, skin conductance and breathing rate of the orchestra's "emotional conductor".
This data is displayed at the bottom of a screen behind the orchestra. The second track is generated by the brainwaves of the four members of the orchestra. The two right-hand members of the orchestra focus on a type of brainwave called the P300 response. By controlling this response, the players can select one of 25 pre-recorded samples of music to play. Their choice is indicated by a dot on their brainwave trace on the display. The two orchestra members on the left focus on a brainwave output called steady-state visual evoked potentials. These can take one of four states, which determine properties of the notes being played, such as whether they are short or held longer. The conductor tells the orchestra members when and how to play, just like a conventional maestro. The third track is a fixed, pre-recorded piece of background music that determines the start and finish of the 30-minute performance. It also ensures that some music plays should there be any technical problems during the show.
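How might physiological readings steer the engine's parameters? The published description doesn't say precisely, so the Python sketch below is purely illustrative: the signal ranges, weights and output parameters are all invented, and Xmotion's actual mapping is certainly different. It simply shows the general shape of such a mapping: raw readings are normalised and combined into an arousal score that biases what the engine picks.

# Toy physiology-to-music mapping (hypothetical ranges and formulas).

def normalise(value, low, high):
    # Clamp a raw reading into [0, 1] relative to an assumed range.
    return min(1.0, max(0.0, (value - low) / (high - low)))

def engine_parameters(heart_rate, skin_conductance, breathing_rate):
    arousal = (
        0.5 * normalise(heart_rate, 50, 120)        # beats per minute
        + 0.3 * normalise(skin_conductance, 1, 20)  # microsiemens
        + 0.2 * normalise(breathing_rate, 8, 30)    # breaths per minute
    )
    return {
        "intensity": arousal,               # biases choice toward busier clips
        "clip_seconds": 10 - 6 * arousal,   # higher arousal -> shorter clips
        "volume": 0.4 + 0.6 * arousal,
    }

print(engine_parameters(heart_rate=72, skin_conductance=5, breathing_rate=14))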
The performance contrasts the deliberate actions of the orchestra members, who can control their brainwave outputs by looking at particular flashing lights in front of them, with the unconscious arousal of the emotional conductor, who cannot easily control her skin conductance and heart rate except through her subconscious response to the music and to a film she watches through a head-mounted display.
This film is also shown to the audience, together with a second version that has been edited in real time using parameters set by the arousal level of the emotional conductor. The overall result is an extraordinary event that exploits state-of-the-art brainwave-measuring technologies to produce theatre of the highest order.
Performance of Multimodal Brain Orchestra at FET09
SPARE MY EMOTIONS: I’M A COMPUTER! Understanding the emotional context of the online world.
"No need to get emotional about it" is a response often heard when trying to calm someone down. But the fact is, we do need to get emotional. Emotions are the backbone of family life; they enable us to react in dangerous situations; they create the group experience of a football crowd, a pop concert or a historical event. Emotions are part of what makes us human. With information technology occupying such a central place in all our lives, it's important to ask whether there are emotions in cyberspace too. Since cyberspace is just another human space, it's bound to have an emotional context. However, it possesses special features that make social interactions between people different from those taking place in the offline world. One difference is the often much shorter lifetime of e-communities compared to their offline counterparts. Since participants in internet forums or discussion groups are also less bound by local social norms, they may interact more quickly and express their feelings more often. The internet is well known as a site for the expression of strong emotions, notably in 'flaming'. It also hosts many environments in which multiple participants engage with others. What's to be gained by studying such phenomena? Firstly, understanding how emotions and intuition find their way into information technology could help us to build better ICT systems. Humans might
also benefit from ICT systems that were able to react emotionally and, ultimately, were sensitive to emotions. More fundamentally, modelling emotions in artificial systems might also add to our understanding of humans at a psychological level. Emotions are complex processes. Behaviour, expression, physiological changes in the brain and in the body at large, motivational processes, and subjective experience are just some of the factors involved. These components are only loosely related (researchers describe them as exhibiting 'low coherence'). They are constrained by our biology but also constantly shaped and modulated by social and cultural contexts. And there is constant mutual interaction with processes such as attention, perception and memory. All of this makes the study of emotions a challenging field of science. "If we consider ICT-mediated communities as complex systems, then emotions can be seen as a form of emergent behaviour", explains Prof. Janusz Holyst of Warsaw University of Technology. Prof. Holyst is leading CyberEmotions, a new FET-funded project focusing on the role of collective emotions in creating, forming and breaking up e-communities. "Empirically, we concentrate on the issue of how to support and maintain the emotional climates of security, trust, hope and freedom in future techno-social communities, and how to prevent or resolve conflicts within them", says Prof. Holyst.
Emotions in e-communities
As part of the project, Mike Thelwall of the University of Wolverhampton, UK, is studying the popular social network site MySpace. This is a good candidate for large-scale social analysis because members' personal profiles are freely available. Taking a random sample of around 15,000 members, Thelwall found that the median age of members was 21, with an overall bias towards teenage members. In addition, younger members tended to have many more friends than older members. The analysis highlighted several gender differences, including that both male and female members preferred to have a majority of female friends, and a majority of female close friends (the 'top 8', 12 or 20). "Our results suggest females are better users of MySpace or otherwise more desirable as online friends", comments Thelwall. An initial exploration of expressions of emotions found a possible explanation for this: positive emotions are mainly directed at females and mainly expressed by females. It seems that this positivity may make women more engaging or supportive as friends.
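To give a flavour of how such profile samples can be interrogated, here is a brief pandas sketch on a made-up miniature table; the column names and values are invented and bear no relation to Thelwall's actual dataset.

import pandas as pd

# Invented miniature stand-in for a crawled profile sample.
profiles = pd.DataFrame({
    "age":          [16, 21, 19, 34, 23, 17],
    "gender":       ["f", "m", "f", "m", "f", "m"],
    "friend_count": [310, 120, 450, 60, 280, 150],
})

print("median age:", profiles["age"].median())

# Do younger members have more friends?
young = profiles["age"] < 21
print("mean friends, under 21:", profiles.loc[young, "friend_count"].mean())
print("mean friends, 21+:    ", profiles.loc[~young, "friend_count"].mean())

# Friend counts broken down by gender.
print(profiles.groupby("gender")["friend_count"].mean())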
Emotions in smart networked environments
Cyber networks aren't just online; they're all around us, for instance in the interconnection of mobile phones, mp3 players, health monitors and other devices. These smart environments too have an emotional component, and they are the focus of a second FET project, Socionical. "In smart environments, systems have the ability to monitor user actions and to adjust their configuration and functionality accordingly", explains Paul Lukowicz of Passau University, Germany, a lead researcher in the project. "The system reacts to human behaviour while at the same time influencing it. This creates a feedback loop and leads to a tight coupling between the human and the technical system. At the same time there is a complex pattern of communication: human-human, human-device and device-device. This leads to ad-hoc coupling between components and different feedback loops." The project will study global properties and emergent phenomena that arise from feedback and coupling in such smart environments, based on two concrete scenarios: transportation and emergency/disaster management.

Whole-body expression
A third aspect of the overlap between emotions and ICT is the perception of body language in virtual environments. In our everyday lives we react subconsciously to signals given off by someone's body language. Bodies provide eloquent emotional signals, and it is part of our normal social competence to adapt spontaneously and effortlessly to the continuous cues they provide. Although this field has been well studied by social psychologists, there is little research within the neurosciences on how emotional body expressions are recognised. Closer study of whole-body expression will enable us to build more realistic and sensitive virtual environments. This, in turn, will take us one step closer to the goal of 'presence' – where systems are so realistic that it is impossible to tell what is virtual and what is real.
MATHEMATICS BEYOND LOGIC Mathematics may require an element of uniquely human creativity.
A century ago, mathematicians felt sure that all of mathematics would soon be placed on an absolutely solid, logical foundation. Their dream, expressed most forcefully by the German mathematician David Hilbert in 1920, was to prove that all mathematical truth can be derived from a few simple axioms, establishing all of mathematics as a tidy and consistent framework of dependable and logical truth, set apart from the messy uncertainty of the physical world. But this dream of logical purity soon met with catastrophe in the shocking work of a young Austrian mathematician. In 1931, Kurt Gödel proved quite the contrary: any scheme whatsoever for deriving mathematics from logical foundations must suffer from at least one of two fatal problems. Either it has holes in it, so that some mathematical truths simply cannot be proven logically, leaving them hanging in the air, seemingly true but unfounded; or it is inconsistent, in that some things will be both provably true and provably false at the same time. To mathematicians of the time, says Gregory Chaitin of the IBM Thomas J. Watson Research Center, Gödel's result seemed like a death blow to mathematical logic. "He seemed to have destroyed mathematics," he says, "or at least shattered it to pieces. He demolished the hope that one might find a deductive basis for all mathematics." Even so, the Second World War soon intervened, and mathematicians after the war mostly went on as before, in large part ignoring Gödel's results. Today, controversy still swirls about what Gödel's unsettling "incompleteness theorem" really means for both mathematics and science. Does it really imply the impossibility of mathematical certainty? And if so, how can anything in mathematics or science be established as true knowledge?
Copyright Justin Mullins, reprinted with permission. www.justinmullins.com
Gregory Chaitin: "It now looks like some kind of non-mechanical creativity is required for mathematics."
One of the most provocative ideas, being explored by Chaitin and others, is that Gödel's work should be taken very seriously -- and that it may well imply that mathematics is not a matter of pure logic at all, but actually requires an element of uniquely human creativity. The British mathematician Alan Turing, the inventor of modern computation, later extended Gödel's work by asking whether there is any way to decide by computation if a given computer program will ultimately halt, having terminated naturally, or might instead go on calculating forever. Turing found that there is no general method for answering this "halting problem"; many such problems, while they clearly have answers, simply lie beyond computational solution.
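Turing's argument can be compressed into a few deliberately paradoxical lines. The Python sketch below assumes a hypothetical oracle halts(program, argument) -- no such function can exist, which is exactly what the diagonal argument shows.

def halts(program, argument):
    # Hypothetical oracle, assumed (for contradiction) to decide whether
    # program(argument) ever terminates. No correct implementation exists.
    ...

def contrary(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    else:
        return           # predicted to loop forever, so halt at once

# Now consider contrary(contrary). If halts(contrary, contrary) returns True,
# then contrary(contrary) loops forever; if it returns False, it halts.
# Either way the oracle is wrong -- so no general halts() can exist.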
Gregory Chaitin: "It's important to reject the prevailing norm in mathematics that proof is the way to judge things. This paradigm is a prison."
Nowadays, mathematicians and physicists -- stimulated in large part by the work of Chaitin, Kolmogorov and others in the 1960s -- have linked this problem to something even more general: complexity. Suppose someone presents you with a computer program designed to do some task. Can you decide whether there might be a simpler program which could do the same thing? This question, it is now known, is also in general unsolvable by computation. Mathematics, that is, leads to the troubling notion that simply trying to tell whether something is as simple as possible is closely linked to Turing's halting problem and Gödel's incompleteness. "In a sense," says Chaitin, "the limits to mathematical computability are worse than Gödel or Turing ever knew." In a session on algorithmic complexity at FET09, Chaitin described his recent efforts to take these insights deeper. His main focus has been on studying whether problems of the Gödel and Turing kind are strange and in some sense rare, or instead might be more common, perhaps even ubiquitous. To do so, he has explored the problem of calculating the likelihood that a truly random program will halt.
Analysis of this question leads to a real number -- called Chaitin's constant Ω -- whose digits are completely random. "This number is well defined mathematically," he says, "but it looks contingent because it has maximal irreducible complexity," as each digit brings totally new information, unpredictable from what has gone before. What this implies, Chaitin suggests, is that problems for which any computational approach must fail are indeed rife in mathematics -- a truth that most mathematicians have yet to appreciate. "My message is that it's important to reject the prevailing norm in maths that proof is the way to judge things," he says. "This paradigm is a prison." Rather, he suggests, mathematics needs to think beyond its historical obsession with logic and proof, and consider the possibility that mathematical truth may often have other origins. "The funny conclusion," he says, "is that it now looks like some kind of non-mechanical creativity is required for mathematics. Indeed, mathematics holds within itself a proof that creativity is necessary for doing it, and that there are problems for which there is no method." Gödel's theorem was a big shock for mathematics, yet most mathematicians have gone on thinking it doesn't really apply to them. "I think my result," says Chaitin, "makes incompleteness look a little more dangerous."
THE MATHEMATICS OF HUMAN MOVEMENT Finding hidden regularities and universal patterns in our daily movements.
The scientific study of people – their habits, attitudes and movements, their values and social norms, where they live and how they trade and work – has never achieved anything like the precision found in physics, chemistry and the other physical sciences. There are no mathematical laws of society, and very little can be predicted successfully. Even economics, the most mathematical of the social sciences, has nothing to match the power of Newton's equations, let alone more accurate theories such as quantum theory or general relativity. The social sciences have always lagged behind, and many scientists expect it will always be so. People, the usual argument goes, are immeasurably more complicated than the atoms and molecules of physics, which makes social science the much harder science; it shouldn't really even aim for the same kind of scientific standards.

But in the past decade or so, a number of scientists – including physicists, engineers, computer scientists and some social scientists themselves – have begun to question this traditional idea. They suggest that, at least in some cases, it is not inherent human complexity that has held social science back, but a long-standing lack of any good means of gathering data on people in a reliable and objective way. To prove their point, they've begun finding ways to use modern information technology to gather and analyse human data on a scale never before imagined, and are finding some surprising mathematical regularities – hints of mathematical laws for the human world.

Albert-László Barabási

Technology in the past decade, of course, has more or less by accident led to the automatic gathering of enormous amounts of quantitative data on human activities, ranging from patterns of e-mail use to consumers' buying habits. People happily carry radio trackers and tags around in the form of mobile phones. "With this kind of data, we finally have objective measurements of what people do," says Albert-László Barabási, a researcher studying human dynamics at the Center for Complex Network Research, based at Northeastern University in Boston. "Our observations don't influence them." Analyses of such data give scientists a chance to identify regularities that may have been hidden from social research before. They can be viewed as the beginnings of a natural ecology of human behaviour, and understanding patterns of physical movement – the crude equivalent of animal foraging – offers an obvious first goal.
Three years ago, physicist Dirk Brockmann of the Max Planck Institute in Göttingen, Germany, took an indirect stab at the issue using the website www.wheresgeorge.com, which facilitates the tracking of dollar bills moving through the United States. People can go to the site and enter the date, their location and the serial numbers of dollar bills in their possession. As the bills move, the site shows their changing locations. Brockmann and colleagues found that almost 60% of bills starting in New York City were reported two weeks later still within 10 kilometres of their starting point. But another 7% had jumped to distances beyond 800 kilometres. The distribution of distances travelled over a short time followed a so-called power-law pattern, with many short steps and few long ones. This pattern, it turns out, is well known to scientists studying the foraging behaviour of non-human animals. It has been found, for example, in the movements of bumble bees, deer and a host of marine mammals. What it implies is that these animals tend to take lots of short steps in exploration, yet occasionally and consistently take long excursions – a pattern of exploration that has been proven to be optimal under some conditions. The long steps take organisms out into totally new territory, where they may happen upon new sources of food. But Brockmann and his colleagues' data don't quite show that people move around like other animals, because the movement of money isn't the same as that of people, though the two are clearly linked. A team led by Barabási has more recently gone a step further, using anonymized mobile-phone data to track the movements of more than 100,000 people over a six-month period.
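The flavour of such a heavy-tailed pattern is easy to reproduce. In the Python sketch below, the exponent is chosen purely for illustration (it is not the value fitted in the dollar-bill study); step lengths drawn from a power law show the characteristic mix of a large majority of short hops punctuated by rare, very long jumps.

import numpy as np

rng = np.random.default_rng(1)

# Pareto-distributed step lengths: P(step > d) = (d_min / d) ** alpha.
alpha, d_min = 0.6, 1.0          # illustrative tail exponent, 1 km minimum step
steps = d_min * (1 + rng.pareto(alpha, size=100_000))

print("median step (km)   :", np.median(steps))
print("share under 10 km  :", (steps < 10).mean())
print("share beyond 800 km:", (steps > 800).mean())
print("longest jump (km)  :", steps.max())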
In a plenary lecture at FET09 entitled "From Networks to Human Mobility Patterns", Barabási reported on the findings. The statistics again showed a similar pattern, although with some additional complexity. In particular, the mathematics suggests a combination of two effects: first, a real tendency for individuals to move like other efficient, foraging animals, with many short movements and less frequent long excursions; but also a difference between people in the overall scale on which they move, with some people being inherently longer travellers than others. Indeed, when the researchers normalized the measurements so that the person-to-person scale factor no longer played a part, the data for all the participants fell onto a single curve, suggesting an underlying simplicity in the way we all move during our daily lives. "There are a lot of details that make us different," says Barabási, "but behind it all there's a universal pattern." This kind of work seems certain to take off in coming years as technology makes data gathering of many types much easier. Elsewhere, other researchers are now going into organisations and gathering data on the detailed behaviour of employees over many months. Such pioneering studies can explore who interacts with whom, how information flows through an organization, and the social patterns that influence key decisions. Such research is at the cutting edge of empirical social science, and illustrates how advanced ICT is having a profound influence not only on society and the nature of our lives, but also on fundamental aspects of science itself.
VIRTUAL ECONOMIES Probing the complex feedbacks that make economic reality so hard to predict.
The worst global financial crisis in nearly a century has made one thing painfully clear -- we really don't understand the world's markets or economies. Part of the reason, surely, is that economic reality depends on the behaviour of untold millions of individuals, for which human psychology offers a very uncertain guide. But equally serious, many scientists suspect, are shortcomings within the basic conceptual framework of economics itself. Economists still try to understand markets and economies mostly using ideas from so-called equilibrium theory, which views economic reality as emerging out of a simple balance of forces. The workhorse models behind this view -- so-called general equilibrium models -- suppose that markets have no internal dynamics of their own, but only change in response to external shocks; they also ignore entirely the influence of self-propelling rumours and gossip, fears and irrational expectations. Standard economic models don't even attempt to account for the immense diversity of different businesses or people with differing aims. But in response to a chorus of critics in recent years, economics is now changing rapidly. In particular, a small group of researchers has begun exploring how to build a more powerful economic science, using the power of computation to go beyond equilibrium thinking and to model explicitly the underlying ecology of beliefs and expectations that drives economic trends. At the University of Genoa in Italy, for example, Silvano Cincotti helps lead a European project called EURACE, which has the bold aim of doing nothing less than simulating the European economy from the bottom up.
As he and colleagues described in a session at FET09 on agent-based technologies and their potential use in policy making, their idea is to populate a virtual economy with artificially intelligent agents who trade and interact and compete with one another much like real people. The resulting model will not simply proclaim the truth of market equilibrium, as the standard theory complacently does, but let market behaviour emerge naturally from the actions of the interacting participants, which may include individuals, businesses of many kinds, banks, even regulators. A number of models of this kind have already been developed in some areas of economics and used to explore the emergence of trading behaviour, or the way industries evolve. Even so, most existing models focus on just one industry or market, and involve relatively small populations of agents. In contrast, the EURACE project aims at creating an agent-based model of the whole European Union, including various artificial markets for real commodities (consumption goods, investment goods and labour), and also markets for financial assets (such as debt securities, bonds and stocks). In the credit market, for example, firms interact with banks to obtain loans, and banks compete with one another by offering different interest rates. The banks work to judge the creditworthiness of the firms applying for loans, and a market in financial assets links the business sector with financial firms. As in the real world, firms in the model also issue common stocks and corporate bonds to finance their investments and production, while households similarly invest in asset portfolios, and the government sells government bonds to finance its budget deficit.
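To give a feel for the agent-based style -- and this is a toy illustration in Python, not EURACE code; every rule and number in it is invented -- here is a miniature credit market in which banks post interest rates, firms apply to the cheapest lender, and banks refuse applicants whose leverage looks too risky.

import random

random.seed(42)

class Bank:
    def __init__(self, name):
        self.name = name
        self.rate = random.uniform(0.02, 0.08)   # posted interest rate
    def grant(self, firm):
        # Crude creditworthiness screen: leverage must stay moderate.
        return firm.debt / max(firm.equity, 1e-9) < 2.0

class Firm:
    def __init__(self, name):
        self.name = name
        self.equity = random.uniform(50, 200)
        self.debt = random.uniform(0, 300)
    def seek_loan(self, banks):
        for bank in sorted(banks, key=lambda b: b.rate):   # cheapest first
            if bank.grant(self):
                return bank
        return None    # credit-rationed: no bank will lend

banks = [Bank(f"bank{i}") for i in range(3)]
firms = [Firm(f"firm{i}") for i in range(6)]
for firm in firms:
    lender = firm.seek_loan(banks)
    outcome = (f"borrows from {lender.name} at {lender.rate:.1%}"
               if lender else "is refused credit")
    print(f"{firm.name} (debt/equity {firm.debt / firm.equity:.2f}) {outcome}")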
Michael Oborne: "The financial crisis has brought to the forefront that no one was watching the candy store... It was a giant Ponzi scheme, and people didn't know where the boundaries of stability were... Traditional tools of finance don't really see these risks."
Based on real economic data, the model currently represents the relevant economic elements over the EU-27 territory in a way that roughly reproduces reality. The complete model has been designed to include millions of agents -- households, firms and banks -- all of which have the capacity to learn and change their strategies if they find more profitable ways of doing business. The model also includes other agents such as national governments, the single central bank and the stock exchange. The model is still being developed, and the first step will be to test its ability to reproduce a range of statistical patterns found in real economies, including the distribution of firms' sizes, the distributions of income and wealth, and sophisticated mathematical properties of financial time series. This will provide both a check on the model's ability to recreate reality accurately and a more powerful way to explore these patterns than anything in current economic theory. To support such a complex model, a completely new computing technology was developed: agent-based supercomputing, which uses the formal design framework FLAME to generate the incredibly massive and detailed models that can then be run on parallel supercomputers. This required the solution of many complex and challenging software engineering problems by teams at the University of Sheffield and the Rutherford Appleton Laboratory in the UK. "The preliminary numerical exercises we've already conducted," says economist Herbert Dawid of Bielefeld University in Germany, "suggest that we indeed are on the right way to obtain new and interesting results."
But the more important ultimate application of the model will be in helping policy makers reach their decisions. Indeed, once the model reproduces much of the European economy in virtual form, it could be used as a kind of policy wind tunnel to test out policies, enabling "what if" experiments of a kind not at all possible today. Cincotti suggests that policy makers could make their costly mistakes in a virtual world, and hopefully devise better policies for regulation, economic stimulus and so on in the real world. In this respect, similar efforts elsewhere have already illustrated the power of this approach to gain deep insight into economic systems. Some studies, for example, have looked at how the level of credit in a market can influence its overall stability, and how financial crises may emerge naturally from the very makeup of markets, as competition between investment enterprises sets up a race for higher leverage, driving markets toward a precipice that we cannot recognize even as we approach it. Researchers have also begun applying agent-based models to understanding the manifold complexities of deregulated electricity markets. Several years ago, for example, the state of Illinois hired researchers to build an agent-based model of its electricity markets, as such models currently represent the only way of reckoning intelligently with the design of these extremely complex systems, where faith in the reliability of equilibrium reasoning has already led to several disasters -- in California, notoriously, and more recently in Texas. EURACE represents the boldest step yet in what promises to be a much more successful and scientific future for economics.
TRUST FOR THE GLOBAL NETWORK New techniques to ensure the trustworthiness of mobile applications and services.
Nowadays, it seems that no-one is without a mobile device of one form or another. Mobiles long ago ceased to be just for talking. With today's devices we can browse the internet, track our location using GPS, initiate bank transactions, connect to friends through social networks, and much more. Mobile gadgets have become the trusty friends we carry in our pockets, and as more services become available we become ever more dependent on them. But just how trustworthy are they? All these advanced functionalities are enabled by data and software code moving freely between mobile devices and servers and PCs elsewhere. This very mobility makes it difficult to ensure that the software on such devices runs safely and reliably. The problem will be magnified many-fold in the future as more and more everyday devices and objects get 'smart' and are able to communicate via the internet. An early example is the recent rise of the 'app store', where users can download large numbers of applications for their mobile devices, some free, others available for purchase. The stores have proven hugely popular with both consumers and software developers, who may be companies or individual users. The app store is a gatekeeper, and its success depends on trust. Users trust the store to provide worthwhile and safe applications at reasonable prices. Software developers trust the store to provide secure and reliable distribution with reasonable remuneration. At present, trust in such mobile code is built from reputation and supported by digital signatures based on cryptographic keys. This is enough to establish the identity of others, and we might trust their reputation. But it doesn't tell us anything about the software itself. We need information about the software and how it will behave in different environments.

Trust and the global computer
Trust is just one of the challenges raised by the rise of so-called 'global computing': the massively distributed, open computing environments we see emerging in all areas of modern life. From power grids to financial trading systems and service-oriented computing structures, large networked systems now form an indispensable part of our infrastructure, and failures of these systems can have dramatic consequences for our economy and well-being. "Trust and reputation are central to global computing", says Ian Stark of the University of Edinburgh. "As software permeates our everyday lives, we need to know where code comes from and whether it can be trusted. Digital signatures tell us who code comes from. But to trust the code itself we need a more sophisticated approach, which we call digital evidence."
Since 2001, the Global Computing initiative, part of the EU's Future and Emerging Technologies (FET) action, has addressed long-term research for such very large-scale distributed systems. Initial work focused on laying foundations, in areas such as programming languages and algorithms, while later projects have addressed families of global computers as well as issues such as security, scalability, and resource usage and management. "Global computing offers us an exciting new paradigm", explains Martin Wirsing of LMU Munich, Germany. "Application components will be capable of being assembled with little effort into services that can be loosely coupled to create dynamic business processes and applications spanning organisations and computing platforms." Wirsing is the coordinator of SENSORIA, one of the FET projects in this area.

Digital evidence
One approach to digital evidence is so-called proof-carrying code: a certificate presenting data about the software itself. It provides digital information about the program, confirming key aspects of software behaviour, and can be checked by the user or any third party. As with digital signatures, digital evidence relies on mathematical foundations to ensure that no certificate can be forged and that code cannot be tampered with. The two are complementary approaches. Work under the EU research project MOBIUS is extending the proof-carrying code approach to a range of code verification techniques. Each creates digital evidence in certificates that guarantee program behaviour. "Proof-carrying code gives users independent guarantees of the safety and security of the applications they use for their mobile phones and PDAs. It's a very promising approach for trustworthy global computing", explains Stark, who is a lead researcher in MOBIUS.
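The division of labour behind proof-carrying code can be sketched in miniature. In the toy Python example below -- invented for illustration; real MOBIUS certificates carry far richer verification content -- the 'evidence' is a machine-checkable claim about the code, and the consumer re-checks the claim directly on the shipped code instead of taking the producer's word for it.

import ast

def make_evidence(source):
    # Producer side: ship the code with a claim about its behaviour.
    return {"claim": "no-imports", "code": source}

def check_evidence(evidence):
    # Consumer side: re-verify the claim on the code itself by inspecting
    # its syntax tree; the code is parsed, never executed.
    tree = ast.parse(evidence["code"])
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False          # claim refuted: reject the code
    return True                   # claim independently confirmed

benign = "def add(a, b):\n    return a + b\n"
sneaky = "import os\nos.remove('important_file')\n"

print(check_evidence(make_evidence(benign)))   # True  -> safe to accept
print(check_evidence(make_evidence(sneaky)))   # False -> reject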
Digital evidence can be used in a variety of ways. In the case of the app store, for instance, the store can provide users with evidence that its systems are safe and secure. The store can also communicate with developers, stating objective policies for accepting mobile code and applications. And the developer can communicate back to the store, providing evidence that its code meets these policies. Ian Stark sees important work ahead. "We need to find out how to strengthen and extend the techniques for digital evidence to databases, cloud computing, and peer-to-peer (P2P) computing. Also, we have to look at how providers (such as software authors and library writers) can create digital evidence, and what the appropriate tools are for them to use. Thirdly, who will be the users of digital evidence and how will they be able to check it?"

All together for ensembles
Approaches such as these should help in addressing the next big challenge in global computing, which will be posed by so-called 'ensembles'. Ensembles are systems with a massive number of nodes, such as sensor networks, networks of personal devices, assisted-living environments, smart cities, and swarm robots. Such systems often exhibit complex behaviour or involve complex interactions between nodes. They have variable network topologies and need to adapt to changing requirements. Many of them rely to a significant degree on software-based services. "Our current processes are not sufficiently reliable to build ensembles", notes Martin Wirsing. "Even worse, we don't know how to build service ensembles that are reliable – in other words, that will always deliver the service to the user in a reliable way." Finding ways to engineer trust into these hugely complex ICT systems is a key requisite for the success of the future networked world.
PENETRATING VISION Visualising large sets of data.
On a typical day, the international courier FedEx handles some 100 million individual transactions. Worldwide, VISA records 150 million purchases. Every 24 hours, more than 300 million long-distance calls stream over AT&T's telephone network. Every one of these shipments, purchases or calls generates a file of data, and this staggering amount of data increasingly drives our information-dominated age. It is routinely archived away as a storehouse of potential insight and value. Yet the sheer volume of data, and the increasing rate at which it is produced, also pose a problem: how to avoid drowning in it, and how to find useful ways to extract meaningful insight from it. A popular blogger, for example, scans more than 5,000 news feeds each day, and still can't keep pace with all that is going on. Biologists are now sequencing genes so quickly that just finding ways to store the data in an organized way has become a major challenge. Yet just as Google gave us a tool to make useful the overwhelming information store of the World Wide Web, computer scientists are pioneering a raft of new techniques to help us penetrate the growing forest of digital data. Among the most exciting is an ambitious effort to combine humans' innate skills for recognizing patterns and reasoning creatively with the raw processing power of modern computing, to make both more capable than they could be alone. "Computers do much better at some things than people," says computer scientist Daniel Keim of the University of Konstanz, "but not at perception. The human visual system is still better, and humans are also better at creativity and general knowledge. Nothing in computation can compete with a 10-year-old child when it comes to wide-ranging knowledge." In pursuing a beneficial union, researchers in this area of "visual analytics" aim to exploit the insights of disciplines ranging from
computer science to human psychology. The basic idea is to take enormous volumes of information, often from diverse sources, and find ways to present this naturally unwieldy information visually, so that people can make use of it easily and quickly, seeing connections they'd never see on their own. An air traffic controller, for example, might see a collage of coloured lines reflecting the trajectories and locations of various aircraft, as well as airport traffic conditions, and be able to spot immediately a red triangle indicating a potential problem, long before they could work this out from the data alone. Within a decade, these insight-enhancing tools will be indispensable for public health officials, leaders of major businesses and many others in positions of authority. But visual analytics won't remain a specialized tool for long. "Pretty soon, it won't only be high authorities using these tools," says Keim. "We envision them being common in the workplace, for every employee to use who has to deal with the analysis of ever-changing information." Which is likely to be just about everyone, from financial analysts charged with finding cues to sound investments in rivers of financial and economic data, to doctors aiming to make better diagnoses by combining the vast knowledge of medical science with data on an individual patient's history, lifestyle and physiology. Making it possible, Keim points out, will mean resolving a host of pressing issues. A first problem is that data currently tends to get collected and stored in whatever form is most convenient for whoever is collecting it, without thought for how it might one day be linked up, to beneficial effect, with other, very different data. Computer scientists have to find ways to store photos, time series, scientific data, ordinary text and data of many other kinds so that they can be combined in a meaningful way and lead to insight. Progress on this seemingly mundane issue would free people from the unproductive task of converting data formats and let them focus their experience and skills on more important work. A second problem is learning how to pare down massive data sets into more manageable parts. Almost any data set has errors in it, or suffers from corrupting influences, and computer scientists need to find ways to identify these uncertain parts automatically, so that they can be eliminated, or at least recognized. Even without errors, moreover, most of an enormous data set may be completely irrelevant to certain kinds of questions; hence, visual analytics aims to find techniques for filtering data to make it more manageable for analyses focused on specific questions.
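One simple, standard instance of such automatic screening is sketched below in Python (the readings are made up, and real visual-analytics pipelines use far more sophisticated uncertainty models): records that deviate wildly from the bulk of the data are flagged before anything is drawn, so suspect points can be highlighted or dropped.

import numpy as np

def flag_suspect(values, cutoff=3.5):
    # Flag points far from the median, in robust (MAD-based) z-score terms.
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median)) or 1e-9   # avoid divide-by-zero
    robust_z = 0.6745 * (values - median) / mad
    return np.abs(robust_z) > cutoff

readings = [21.1, 20.8, 21.4, 20.9, 98.6, 21.0, -3.2, 21.2]  # two bad sensors
print(flag_suspect(readings))   # True marks the suspect entries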
Finally, another key challenge is to find efficient ways to turn extremely large and complex data sets automatically into visual displays that present information usefully to the human brain. This demands not only good technology but attention to human biology as well. We naturally process some kinds of visual cues automatically and without conscious attention. Things like line orientation, length, width, size and curvature tend to "pop out" at us, and data presented this way enters the mind easily. That's good if such information is important, and very bad if it is distracting. These are only some of the demanding issues that research in visual analytics needs to tackle, none of which has yet been solved completely. But visual analytics promises to bring huge advantages to anyone facing information overload, a problem likely to become ever more common in tomorrow's world. Visual display, the science suggests, can expand a person's working memory, let them search through lots of information very quickly, bring patterns out more obviously, and reveal relationships that would otherwise be obscure. All of which we'll need in tomorrow's even more complex and faster-changing world.
Display from the Vizmaster project
BETWEEN TWO WORLDS Navigating the tricky terrain between different disciplines; interview with Barbara Mazzolai.

Success in science today takes more than just excellent skills and specialist knowledge. It also means being able to cross the boundaries that separate disciplines, because today's problems often don't respect those boundaries. The career of Barbara Mazzolai – once a biologist in the strict sense, now turned part chemist and robotics engineer – illustrates the increasingly typical experience of a scientist in our multi-disciplinary age. "Before, as a biologist, I used to see the octopus from a biological point of view," she recalls. "Now I see them also from the engineer's point of view. It was originally incredible for me just to think about them so differently."

In the mid-1990s, Mazzolai was a fairly typical marine biologist. At the CNR Institute for Biophysics in Pisa, Italy, she specialized in studies of environmental pollution and its effects on marine invertebrates, especially the octopus. Then her career took an unexpected turn. Working on a Master's degree in eco-management at the Scuola Superiore Sant'Anna (SSSA), also in Pisa, she suddenly found herself working alongside physicists and engineers – and she was the only biologist.

"Everything was suddenly different," Mazzolai recalls. "It was really hard to keep up and to be accepted at first, but the experience was also very valuable. Joint labs are very good for learning new fields, because every day you learn something from other people." In her work, she soon strayed a little from biology into chemistry, studying the important role volcanoes play in contributing pollutants such as mercury to the regional atmosphere. A short further step found her developing mercury sensors in what can only be described as applied physics.

Now, as an Assistant Professor of Bioengineering at the SSSA, her expertise extends even to robotics – but robotics with a decidedly biological edge. As she points out, her favourite organism, the octopus, has with its tentacles provided robotics experts with an excellent model of a completely flexible structure that can also become very stiff at times and exert forces on its environment.

Embracing this ability to stand between two fields that are rarely linked, Mazzolai is now working to develop robots inspired by plants for soil exploration. "Plants are fixed in space and don't have the option to move," she notes, "so they have to develop different strategies." Currently there are no robots inspired by plants, but she is working to develop new systems for environmental applications along these lines: robots that would grow roots and communicate with one another, much as some real plants do.

Every step of the work is highly interdisciplinary, which Mazzolai says puts great demands on patience and on listening to people with very different points of view. "The most difficult and important thing, of course, is to talk with one another. Interdisciplinary work is very rewarding, but not always easy. You need to spend a long time to understand the other person's point of view."
Barbara Mazzolai beside a robot for garbage collection developed in her laboratory
THE NEXT INFORMATION REVOLUTION Exploiting the baffling paradoxes of the quantum world.
A century ago, physicists' experiments identified a host of troubling paradoxes in the behaviour of matter at the atomic and sub-atomic scales. The trustworthy physics of the classical era failed utterly, and physicists were forced to accept, among other things, an element of inherent randomness in the atomic world, expressed in Heisenberg's famous Uncertainty Principle. The quantum theory that ultimately emerged to replace classical physics is today arguably the most accurate theory of physics ever developed, even if it describes a world that often defies our ordinary intuitions about the nature of reality.
In recent decades, physicists have made enormous strides in learning how to control single atoms and molecules, doing experiments of which Einstein, Bohr and the other founders of quantum physics could only dream. In so doing, they've come to recognize that information in the quantum world obeys rules unlike those of classical information theory, and indeed allows logical operations that are inconceivable from a classical perspective. This discovery has fired imaginations and stoked a vibrant research area seeking to bring into reality a number of technologies that truly seem like science fiction.
Today, quantum theory has also taken on a decidedly practical edge, forming the basis of industries touching on all our lives. It's the foundation of the computing and electronics revolution of the past 50 years, with all that revolution's myriad creations, including the Internet and World Wide Web, and lies behind the laser, now so important for basic science, medicine, and telecommunications. But these applications may still only hint at a deeper quantum revolution yet to come.
Most alluring, perhaps, is the promise of quantum computing. Each information bit in an ordinary computer, of course, can represent 0 or 1. But the strange properties of quantum particles -- in particular, their inexplicable ability to be in a "superposition" of two or more states at a time -- mean that a quantum bit, or "qubit", can explore both 0 and 1 simultaneously. This means, in effect, that a computer designed along quantum principles can explore many different computational paths all at once, thereby achieving an exponential increase in computing power -- at least in theory.
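The arithmetic behind that claim is easy to demonstrate. The short Python sketch below simulates a single qubit being put into an equal superposition by a Hadamard gate, then shows why simulating n qubits classically requires tracking 2^n complex amplitudes -- the exponential cost a quantum machine would sidestep.

import numpy as np

# A qubit is a 2-component complex vector; |0> = (1, 0).
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ ket0
print("amplitudes:", state)                                # (0.707..., 0.707...)
print("measurement probabilities:", np.abs(state) ** 2)    # 0.5 and 0.5

# n qubits need 2**n amplitudes -- the classical cost that explodes.
for n in (10, 30, 50):
    print(f"{n} qubits -> {2**n:,} complex amplitudes")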
The challenge of doing this practically is enormous, however, as the information held in any quantum state -- of an atom or molecule, for example -- is extremely fragile. Almost any interaction with the surrounding environment will destroy it. Researchers are exploring a number of possible ways to isolate and protect a sufficient number of qubits to allow powerful computations -- with atoms held in magnetic traps, for example, photons in electromagnetic cavities, or electrons in solids. The record so far is around 10 qubits, but many physicists predict that scientists could within a decade have a functioning quantum computer capable of doing computations immeasurably faster than today's computers, or indeed any classical computer built in the future.
Even before then, however, other research in quantum information may bring applications with enormous practical consequences. One such application is quantum simulation -- a limited form of quantum computation -- which would make it possible to do scientific calculations for complex quantum systems, such as high-temperature superconductors and other modern materials, which simply cannot be done today with ordinary computers [see box: Quantum Simulation].

Equally exciting is the quantum physics of secrecy. Theorists noticed two decades ago that the laws of quantum physics might be exploited to make truly unbreakable codes. An eavesdropper facing a well-designed quantum system would, by the laws of physics, necessarily leave a trace and so be detected. Physicists have now demonstrated the technique, for example in experiments running over many kilometres of ordinary fibre-optic communications lines, as well as over satellite links. The physics is now so well understood that a company in Geneva, id Quantique, is now marketing a commercial system for quantum key distribution, with likely clients being banks, companies and certainly governments, all eager to take advantage of a truly historic new technology.

Where is it all leading? When the laser was invented, as physicist Anton Zeilinger of the University of Vienna in Austria notes, even its inventors struggled to envision how it might be used. "The only thing they could think of," he says, "is that one might be able to burst a balloon held inside another balloon without bursting the outer one." Today, of course, laser applications have proliferated.
Similarly, the ideas of quantum computing, simulation and cryptography very likely scratch only the surface of how quantum information may be used in the future. Another recent and very simple application is the commercial production of devices that produce truly random numbers, often crucial for good science -- especially in simulations of processes involving probabilities -- and also in commerce, particularly the huge gaming industry (see box: Random Number Generators). Quantum theory is notoriously difficult, conceptually abstract, and violates everyone's intuition. The computer age is the first of its consequences, but we have likely not yet witnessed the deepest practical implications of this peculiar science invented a century ago.
QUANTUM SIMULATION
RANDOM NUMBER GENERATORS
The Oxford English Dictionary defines 'simulate' as "to give a computer model of a process". But this way of thinking is a little narrow. Simulation means exploring the workings of one thing by studying something else, whether it's a general-purpose computer or not. We only need confidence that the two things are somehow similar, with one being simpler, perhaps, or at least easier to control.
Generating truly random lists of numbers sounds easier than it actually is. Being deterministic, computers cannot do it; all they can manage is to produce pseudo-random strings of numbers containing subtle patterns which make them, in fact, not quite random at all. id Quantique has now exploited the nuances of quantum physics to build a quantum random number generator of extremely high quality.
Well before they have a fully functioning quantum computer, physicists expect to create physical quantum systems -- based on photons, for example, trapped in arrays of tiny electromagnetic cavities -- able to act as models for the dynamics of a wide range of other quantum systems. Atoms cooled within a fraction of absolute zero represent another promising technology for such models, which would be quantum computers, but only for a limited class of applications. With a quantum simulator, physicists will be able to probe otherwise elusive phenomena, ranging from quantum fluctuations in one dimension to disorder-induced localization, and from exotic quantum phases relevant to high-temperature superconductivity or lattice gauge theory. Within a few years, these systems should exist and give an enormous boost to the abilities of physicists, chemists and engineers to explore the behaviour of complex quantum devices and new materials. And it is all the rather surprising result of the seemingly mundane — the development of better ways to trap and hold atoms and of optical means for controlling their interactions.
Their device, called Quantis, exploits an elementary process of quantum optics. It sends particle of light -- photons -- one by one into a semi-transparent mirror. Quantum physics implies that whether a photon passes through or not is a truly random process. Hence, with a detector ready to catch those going through, Quantis can easily generate a truly random string of 0s and 1s -- the binary representation of any list of random numbers -- merely by counting each detection as a 1, and each failure as a 0. Albert Einstein, one of quantum theory's founders, once objected to the apparent randomness of the theory, saying he could not believe that "God plays dice." This is today especially ironic as the Quantis device has been certified by the Swiss Federal Office of Metrology, which has confirmed the high quality of its random output, and also been approved by the Maltese Lotteries and Gaming Authority, Europe's most dynamic location for remote gaming.
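In essence, each photon arriving at the semi-transparent mirror yields one unbiased bit. A minimal sketch of the principle in Python (illustrative only: the function names are invented, and an ordinary pseudo-random generator stands in for the photon detector, which is exactly the part a real Quantis replaces with quantum optics):

    import random  # stands in for genuine quantum randomness in this sketch

    def photon_bit():
        """One photon hits a 50/50 semi-transparent mirror: return 1 if the
        detector behind the mirror fires (photon transmitted), else 0."""
        return 1 if random.random() < 0.5 else 0

    def random_integer(n_bits=32):
        """Assemble an integer from successive photon bits -- the binary
        representation of a random number, built one detection at a time."""
        value = 0
        for _ in range(n_bits):
            value = (value << 1) | photon_bit()
        return value

    print([random_integer(8) for _ in range(5)])  # e.g. five random bytes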
WHEN ENTANGLEMENT WAS TABOO

Entanglement is the idea that widely separated particles can share the same existence; it lies at the heart of the foundations of quantum mechanics. But it wasn’t so long ago that discussing entanglement was taboo. Quantum physicist Anton Zeilinger recalls how he first came to grips with the phenomenon.

Anton Zeilinger is professor of experimental physics at the University of Vienna

How did you become interested in quantum physics and entanglement?
I was always interested in science as a child and became fascinated by quantum physics from the moment I heard about it. At that time, in the mid-1960s, nobody talked about entanglement. It was not mentioned at all in textbooks.

Why was entanglement ignored?
That’s a good question. Perhaps because Einstein first raised the notion of entanglement and criticised quantum mechanics strongly. This created a big discussion, and the physics community decided that the fundamentals of quantum mechanics had been settled by these early debates. It became unpopular to delve into them further. It was actually hard for a young scientist to work on these things in those days.
How did things begin to change?
It changed somewhat with the ideas of the CERN physicist John Bell, but not right away. [Bell developed a way to measure entanglement in the 1960s.] I met Bell for the first time in 1977. But even at that time, the fundamentals of quantum mechanics were not considered the right thing to work on. My background helped a great deal. I grew up in Vienna, where we have a long tradition of studying the fundamentals of science. My PhD supervisor Helmut Rauch allowed me to work on the foundations at that time, about 1974-75. I only found out much later that this would not have been possible in many other places.
Nevertheless, I didn’t really work on entanglement until 10 years later, in 1984-85.

What triggered your interest in entanglement?
In the early 80s, I was working at MIT with Mike Horne, one of the first guys to study entanglement. One morning in the lab he asked if I wanted to go to Finland. He said there was a conference there celebrating 50 years of the famous paper by Einstein, Podolsky and Rosen in which they introduced the idea of entanglement. I replied: sure, why not. Horne suggested that, since we were both going, together we should invent a new experimental approach to entanglement. And so we did. We invented the first generalisation of entanglement that goes beyond spin or polarisation. This turned out to be very important because it freed entanglement from the limitation of being linked to a specific quantum feature such as spin or whatever. At around the same time, researchers started to study entanglement experimentally, and in the 90s entanglement was suggested as a means to store and process information in a fundamentally new way. This started the now burgeoning field of quantum information, which uses entanglement as a resource to speed up computation or make communications more secure.
THE DNA WORD PROCESSOR
A new and easy way to create and edit DNA sequences.
If the link between the invention of the printing press in 1439 and the invention of the DNA-amplifying technique called the polymerase chain reaction (PCR) is not immediately clear, a few minutes with Ehud Shapiro will set you straight. The printing press changed the world by allowing people to mass produce words. The word processor, which we use to create and fine-tune text before it is mass produced, is the direct descendant of Gutenberg's invention. PCR has played a similar role, allowing molecular biologists to mass produce DNA molecules. And yet those same biologists are still awaiting the equivalent of a word processor that would allow them to create and edit DNA molecules before they are mass produced. That's where Shapiro steps in. A computer scientist at the Weizmann Institute of Science in Rehovot, Israel, Shapiro, together with his team, has developed the equivalent of a DNA "word processor" and believes it could revolutionise the way molecular biology is carried out.
PCR was invented in 1983 by the American biochemist Kary Mullis as a way to produce multiple copies of a specific fragment of DNA. The method exploits an enzyme called DNA polymerase, which makes copies of DNA. These copies are then used as templates for a new round of copying, and so on, creating a chain reaction that in a single afternoon can produce billions of copies of a specific DNA molecule.
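The arithmetic behind the chain reaction is simple doubling: n cycles yield up to 2^n copies of the target fragment, so around 30 cycles, at a few minutes each, already pass the billion mark. A toy calculation (the perfect per-cycle efficiency is an idealising assumption; real runs fall somewhat short):

    # Idealised PCR yield: every thermal cycle doubles each template present.
    def pcr_copies(start_molecules, cycles):
        return start_molecules * 2 ** cycles

    # One starting molecule, 30 cycles -- roughly an afternoon's run:
    print(pcr_copies(1, 30))  # 1073741824, i.e. over a billion copies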
Ehud Shapiro
The significance of Mullis' invention is hard to overestimate. At a stroke it made possible the large-scale replication of DNA, and it is now used in everything from genetic fingerprinting to the diagnosis of hereditary diseases. But this is just the beginning of what PCR is capable of, says Shapiro. He believes that if we think of DNA as text, then PCR is the equivalent of the printing press: it allows large-scale replication of this text. But today we process text using computers that allow us to compose text, insert and delete words, and cut and paste entire sections. Exactly these kinds of operations take place in labs all over the planet, carried out by graduate students and postdocs armed with pipettes and test tubes at a painstakingly slow pace. Composing a DNA molecule with a bespoke sequence in this way can take weeks. The entire process is more analogous to editing text on the old-fashioned linotype machines that were once used for setting text on printing presses. Today, of course, linotype machines have been replaced by word processors, which make changes electronically and send the words and layout directly to a printing press. What biologists all over the planet are crying out for is the equivalent of the word processor for DNA, says Shapiro. With a computer scientist's eye for detail, Shapiro and his team have created one: an automated DNA processor that does for DNA sequences what word processors do for text.
At the heart of his technique is a process he calls the Y-operation, which takes two fragments of DNA and joins them together using a PCR-like process (the joining can be represented by a Y-shaped figure, hence the name). The power of this process is that it allows DNA fragments to be inserted, deleted and replaced -- all the functions of a word processor -- using various Y-operations in parallel or in sequence. "The Y-operation can be a foundation of DNA processing," he told delegates in his FET09 keynote speech.

The system even corrects errors, which is important because PCR copies not only DNA sequences but also any errors that slip in. Shapiro's method includes a powerful error-correcting mechanism that eliminates errors rather than propagating them. "Our method has built-in error correction," he says.

The result is the DNA equivalent of a word processor. Anybody wanting to create a specific sequence of DNA simply sends Shapiro the instructions for making it, along with the DNA molecules that need to be edited: the starting molecules and their sequences, the parts that must be edited together, and the final sequence (Shapiro has created a programming language that researchers can use to submit their requests). Shapiro then feeds these input molecules and the instructions into a robotic machine he has designed, which carries out the relevant operations -- a machine he clearly expects to become a standard piece of apparatus in labs around the world within a matter of years. "Most labs do this entire process by hand at the moment. Our goal is to eliminate this manual labour." That's a goal countless postdocs would wholeheartedly endorse.
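Shapiro's chemistry and programming language are of course far richer, but the editing semantics can be pictured as ordinary string edits over the four-letter DNA alphabet. A deliberately simplified sketch (the function names and the string model are our illustration, not Shapiro's system, which performs such edits on physical molecules via composed Y-operations):

    # Toy model: a DNA sequence as a string over the alphabet A, C, G, T.

    def insert_fragment(seq, pos, fragment):
        """Insert a fragment at the given position."""
        return seq[:pos] + fragment + seq[pos:]

    def delete_stretch(seq, start, end):
        """Delete the stretch between start and end."""
        return seq[:start] + seq[end:]

    def replace_stretch(seq, start, end, fragment):
        """Replace a stretch with a new fragment."""
        return seq[:start] + fragment + seq[end:]

    gene = "ATGGCCTTAGCG"
    gene = replace_stretch(gene, 3, 6, "AAA")  # swap one codon for another
    gene = insert_fragment(gene, 0, "GGG")     # prepend a new fragment
    print(gene)  # GGGATGAAATTAGCG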
CYBERNETICS WITH DNA
Computation may soon take a very different path by mimicking how nature computes.
The possibilities of DNA computers go beyond our imagination. Source: Kennislink.nl
What is computing? For nearly half a century, this question has had a simple answer inspired by the vision of Alan Turing, John von Neumann and other founders of the digital revolution: computing is the process of running algorithms on devices (usually modern digital computers) capable of carrying out logic operations in any combination. The story of computing has mostly been the story of increasing speed and progressive miniaturization, of computational devices spilling out into all other technologies -- into the control of automobiles, telephones and television, into manufacturing -- and of the linked computers that created the Internet.

But is this the only way to think about computing? Increasingly, many chemists, biologists and other scientists don't think so, and suggest that nature has long been doing computing of a rather different kind. The laws of physics, chemistry and biology, they note, "discover" states of efficient organization and function through myriad processes which aren't obviously computational; the universe doesn't need to calculate. Light manages to find automatically the shortest route through complex materials, and the rich internal biochemistry of a bacterium responds intelligently and immediately to a viral infection.

DNA fuels a molecular computer. Copyright Ehud Shapiro.
All these processes are certainly computational in some sense, as they find solutions to problems as presented. Yet they work very differently from present-day computational technology, and lack its emphasis on speed. The important question is whether we can learn to exploit these computational tricks for our own purposes, and a quick answer would seem to be yes, as an explosion of recent work shows. The key transforming technology is emerging especially from techniques for controlling and manipulating single molecules, and for constructing what is coming to be called ‘molecular cybernetics’.

Double-stranded DNA may be the basis of life, but single-stranded DNA may turn out to be more promising for computing. In impressive work over the past few years, for example, Milan Stojanovic of Columbia University and colleagues have used it to design a number of simple logic gates based on chemical activity. They base their gates on a nucleic acid enzyme — a deoxyribozyme — which catalyses certain DNA reactions. By attaching so-called stem loops to this enzyme — short single-stranded oligonucleotides that bind to and inhibit the enzyme — they can make its activity sensitive to the presence or absence of further strands of DNA that disrupt the effects of the stem loops.
This is a little complicated, but the result is a chemical logic in which the presence or absence of specific DNA strands represents logical inputs — 1 or 0 — and the enzyme being active or inactive (reflected in its ability to cleave a certain test oligonucleotide) gives the output of a gate, also 1 or 0, acting on those inputs. With fluorescent markers to detect such activity, the readout can show up directly in colours. Using various different stem loops, these researchers have managed to design a variety of logic gates, including NOT, AND and OR, and, by combining them, to devise automata capable of playing simple games such as Tic-Tac-Toe.

As Stojanovic described in a session on Unconventional Computing, this is only a proof of principle, as this kind of computation will never compete with solid-state devices in terms of speed. Indeed, the Tic-Tac-Toe automaton takes about 30 minutes to make each of its moves, as the underlying DNA chemistry is quite slow. But computation isn’t only about speed, even if this is what we usually emphasize in our thinking about it. The potentially revolutionary aspect of this technology is that the computation works in solution, and could, for example, be used to carry out information processing in biological fluids.
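Stripped of the chemistry, each gate computes a Boolean function: the inputs are the presence (1) or absence (0) of particular DNA strands, and the output is whether the enzyme stays active. A schematic sketch of the logic alone (the strand and loop names are invented; no chemistry is modelled):

    # Schematic deoxyribozyme AND gate: the enzyme is active -- able to
    # cleave its test oligonucleotide and light up a fluorescent marker --
    # only if both input strands have disabled their inhibiting stem loops.

    def and_gate(input_strand_a, input_strand_b):
        stem_loop_a_disabled = input_strand_a  # strand A disrupts loop A
        stem_loop_b_disabled = input_strand_b  # strand B disrupts loop B
        enzyme_active = stem_loop_a_disabled and stem_loop_b_disabled
        return enzyme_active

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", int(and_gate(a, b)))

The real gate, of course, evaluates this truth table in solution rather than in silicon.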
Hence, it is easy to imagine future devices carrying out tasks considerably more complex than Tic-Tac-Toe, and doing so autonomously within living cells. There is clearly potential for the engineering of molecular control systems able to detect specific DNA sequences, for example, and to release specific drugs or molecules in response. Just as computation has spread rapidly into every corner of engineering control, we can expect the same kind of transformation of biology and medicine into fields dominated by control based on flexible chemical computation: an intelligent and active chemistry that can gather molecular information and calculate delicate actions based on it.

In this regard, one of the most exciting recent developments is the creation, also by Stojanovic and colleagues, of molecular ‘spiders’ — biomolecular systems with ‘legs’ made of single-stranded DNA segments around 10 nm long. These spiders can move over a surface covered with single-stranded DNA segments complementary to their legs, as the legs repeatedly bind, dissociate and bind again. The movement of such spiders in large numbers can be controlled by engineering the properties and geometry of the surface, as well as the physical conditions influencing the statistics of the binding process.
The possibilities for controlling spiders and other novel nano-objects will only become richer with remarkable constructs like the DNA box reported recently (Nature 459, 73-76; 7 May 2009). Using a technique known as DNA origami, Andersen et al. were able to make a 3D box a few tens of nanometres on a side, with a lid that can be opened by presenting certain DNA ‘keys’. Our future, it seems, and especially our computational future, may well be written in DNA, just not in the same form as our biological past.

Yet DNA-based computing represents only one small pathway into a huge world of unconventional means for computing. In a similar spirit, other researchers are exploring how some very simple physical processes, such as chemical reactions or the behaviour of bacteria, might be made to do computations and solve problems. The world at many levels, it seems, is carrying out computations all the time, and we're only now finding the perspective to see them. As a result, even in this computational age, we may still be closer to the beginning of our understanding of computation than to the end of it.
SELF-POWER FOR THE NANOMACHINES

Building energy scavenging machines to harvest energy from the environment.
The world is awash with heat, vibrations and light. If only we could build energy scavenging machines to harvest this energy and use it to power nanodevices.
Every eye in the room watches the drinking bird toy, forever dipping its head into a bowl of water and bobbing back and forth in a seemingly endless cycle of motion. The drinking bird gives a good impression of a perpetual motion machine, but there's no magic here: the toy extracts all the energy it needs from its environment. It is powered by heat.

The irony of the demonstration is clear to the audience gathered at FET09 to discuss energy harvesting on a much smaller scale. If it can be done with a child's toy, why not for nanomachines and nanocircuits? The drinking bird has charmed children and adults alike for 60 years, but the watching scientists and engineers know that self-powered nanomachines are still only a dream.

That may soon change, thanks to the convergence of two trends in microelectronics. The first is the relentless creation of ever smaller, more energy-efficient nanocircuits and nanomachines. The second is a newfound ability to harvest energy on the nanoscale. The hope is that one day soon the energy harvested will match the energy consumed, and that, when this happens, a new generation of self-powered nanodevices will suddenly become possible.
In an era when energy conservation dominates thinking at every level of society, this work has clear potential to reduce energy consumption. But despite the stakes, or perhaps because of them, in the conference room there is an almost palpable sense of optimism. Not that there aren't hurdles to overcome. Energy not only has to be harvested but also conditioned into a usable power source and, if necessary, stored. Nobody seriously doubts that this will be possible; the question is how best to do it. Start work down an engineering blind alley and you can waste valuable years and, worse, leave the way open for competitors to dominate the field. The decisions made now will be crucial in deciding who wins and who loses this race.

For the moment, the leader is Zhong Lin Wang, an electrical engineer at the Georgia Institute of Technology in Atlanta, who in 2006 unveiled an extraordinary device that looks more like a tiny hairbrush than the future of power generation. The "hairbrush" consists of an array of nanowires made of zinc oxide grown on a wafer of aluminium oxide. Zinc oxide is a piezoelectric
material: it generates an electric field when it is bent. So each wire can generate a few millivolts. With enough of these wires packed together, the dream is that their combined output would be enough to power various kinds of miniature sensors and processors. Wang's device provides a clear yardstick against which to measure progress, points out Violeta Gràcia, of the Universitat Autònoma de Barcelona in Spain, who chaired the Self-powered Nanodevices session at FET09.
Micrometric trench in a multilayer material formed by copper, silicon oxide and silicon, obtained through focused ion beam (FIB) milling. Experiment: G. C. Gazzadi, S. Frabboni, S3 (INFM-CNR) Modena; Artwork: Lucia Covi. ©S3
Most of today’s wearable and portable devices are powered by batteries, which make up a significant proportion of their size and weight. This fraction is likely to grow as the devices become smaller, because battery technology has evolved very slowly compared with microelectronics: the amount of data that can be stored on a hard disk has increased over 1,500-fold since 1990, while battery energy density has merely tripled. That makes energy harvesting an attractive option, even if the harvested energy is simply used to recharge onboard batteries.

One readily available source of power is kinetic energy, says Francesc Moll, an electrical engineer at the Universitat Politècnica de Catalunya in Spain. Wang's piezoelectric device is one approach, but an increasing number of other ways to scavenge kinetic energy are emerging. One option is to build a capacitor and use ambient vibrations to change the distance between the capacitor plates. If the charge on the plates is held fixed as the distance between
them increases, the voltage across the capacitor rises, and with it the stored electrical energy. This extra energy can then be siphoned off and pumped round a circuit. This kind of electrostatic generator can be easily miniaturised, says Moll. But the switching circuits that siphon off the energy must be carefully timed to coincide with the vibrations, and that could be hard. In addition, this kind of device requires an external power source to create the initial charge on the capacitor.

Zachary Davis at Danmarks Tekniske Universitet in Denmark has designed a device that works in just this way. It consists of an array of silicon springboards, each just a hundred nanometres long, which vibrate due to thermal noise, the unavoidable jiggling of any object at a finite temperature. Each springboard forms a capacitor with an electrode above it, and the vibration causes the energy stored in the capacitor to swell during each oscillation. If Davis can ensure that the springboards oscillate in synchrony, and can build an efficient switching circuit to siphon off this energy at its peak, he could have a useful nanogenerator on his hands.

Another option is to exploit vibrations using magnetic induction. Faraday's law states that moving a wire through a magnetic field generates a current. So the trick is to build a tiny conducting coil and allow a magnet on a spring to move inside it. Again, the idea is to build not a single generator but a huge array of them and to combine their outputs. Prototypes built using microelectromechanical systems (MEMS) techniques have produced in excess of 10 microwatts from a 64 Hz vibration with an amplitude of 1,000 micrometres.
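The energies involved are minute but not hopeless. A rough back-of-envelope estimate for one charge-constrained harvesting cycle, in Python (every numerical value here is assumed for illustration, not taken from any of the devices described):

    # One electrostatic harvesting cycle: prime the capacitor, disconnect,
    # let the vibration pull the plates apart, then siphon off the energy.
    EPSILON_0 = 8.854e-12          # F/m, permittivity of free space

    area = 1e-6                    # m^2: a 1 mm x 1 mm plate -- assumed
    d_min, d_max = 1e-6, 2e-6      # gap swings between 1 and 2 micrometres
    v_prime = 5.0                  # V, priming voltage -- assumed

    c_max = EPSILON_0 * area / d_min   # capacitance at closest approach
    c_min = EPSILON_0 * area / d_max   # capacitance at the widest gap
    q = c_max * v_prime                # charge locked onto the plates

    # With charge fixed, separating the plates raises the stored energy
    # E = Q^2 / (2C); the difference is vibration energy converted.
    harvested = 0.5 * q ** 2 * (1 / c_min - 1 / c_max)
    print(f"{harvested:.1e} J per cycle")  # ~1e-10 J: ~10 nW at 100 Hz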
Tip of a scanning tunnelling microscope (STM), sharpened by focused ion beam milling down to an apex size of a few tens of nanometres. Experiment: G. C. Gazzadi, V. Corradini, S3 (INFM-CNR) Modena; Artwork: Lucia Covi. ©S3
Of course, one crucial factor in these vibrating designs is that every springboard or spring must vibrate in unison so that their outputs combine. Were they to vibrate out of phase, their outputs could interfere destructively, dramatically reducing the power generated. Ensuring this synchrony is just another of the many challenges that engineers face.
Ambient light is another obvious source of energy, and the kinds of structures that can be built on the nanoscale provide a hugely efficient way of harvesting it, says Javier Alda at the Universidad Complutense de Madrid in Spain. He says that on the nanoscale it is possible to build optical antennas which convert optical frequencies of light into electric currents, in the same way that radio antennas convert radio-frequency radiation. Obviously, the antennas have to be about the same size as the waves they are designed to receive. And since visible light has a wavelength measured in nanometres, it has only recently become possible to construct them on this scale.

Alda says the technology can also get around one of the biggest problems with solar cells: the fact that they work only during daylight hours. By tuning the antennas to operate in the infrared region of the spectrum, which is emitted by objects throughout the day and night, these nanogenerators could produce power 24 hours a day.
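That scale requirement is easy to check: a classic half-wave dipole for green light would be only a quarter of a micrometre long. A quick back-of-envelope comparison (the wavelengths are illustrative round numbers):

    # Half-wave dipole length: L = wavelength / 2.
    c = 3.0e8                      # m/s, speed of light
    green_light = 500e-9           # m, mid-visible wavelength
    fm_radio = c / 100e6           # m, wavelength of a 100 MHz FM signal

    print(f"optical antenna: {green_light / 2 * 1e9:.0f} nm")  # 250 nm
    print(f"FM radio antenna: {fm_radio / 2:.1f} m")           # 1.5 m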
In addition, if the antennas are built on tiny springboards, they can also vibrate, and exposure to light enhances this vibration. That vibration energy could be harvested too. "This combination has great potential," says Gabriel Abadal, at the Universitat Autònoma de Barcelona in Spain.
The potential benefits are huge. Conventional photovoltaic cells convert light into electricity with an efficiency of around 15 per cent, although various breakthroughs are pushing that ever upwards (and some exotic and hugely expensive materials can reach 50 per cent efficiency). Optical antennas, on the other hand, could convert radiation into usable power with a maximum theoretical efficiency of around 80 per cent. "That's a huge difference. This is a game-changing technology," says Alda.
If these and other researchers succeed in building nanogenerators, they will make possible an entirely new generation of nanosensors, nanomachines and microprocessors that will never have to be plugged into a power source nor have their batteries changed. They may one day even rival the energy harvesting ability of the drinking bird.
STEERING EUROPE’S HIGH-TECH FUTURE

Interview with Mário Campolargo, Director of Emerging Technologies & Infrastructures; Wolfgang Boch, Head of Unit, Future and Emerging Technologies – Proactive; and Ales Fiala, Head of Unit, Future and Emerging Technologies – Open.
Q: What do you feel the FET09 Conference has achieved?

M. Campolargo: FET09, the first ever European Future & Emerging Technologies conference, has been a very exciting experience. I’ve been impressed by the extraordinary quality of the scientific content, the visionary debates involving prominent scientists and policymakers, and the sense of community among European researchers. With close to 800 delegates, the event has been a memorable success. I cannot imagine a better way of celebrating 20 years of FET research!

W. Boch: FET09 delivered fascinating and inspiring contributions of excellent quality, and also the sharing of ideas beyond the traditional boundaries and frontiers built up by scientific disciplines. FET09 showed that ICT is no longer an island but a hub that brings together different research and technological areas. FET09 inspired novel links and collaborations. It has shown that we should never stop looking beyond boundaries.

A. Fiala: The hosting of FET09 in Prague is a witness to the enlargement of the European Union. FET09 has clearly shown that the new member states add precious fresh ideas to European research. Expanding the horizon of research geographically was undoubtedly one of the most important achievements of this conference. It has also shown that ICT
needs, and is willing, to expand beyond its traditional focus, embracing not only new disciplines but also new modalities and challenges, ranging from artistic expression through social issues to ecology.

Q: What is the role of FET research? What makes it different?

A. Fiala: For me, I would like to highlight the unique capability of FET to rejuvenate ICT and inspire new research paths leading to truly disruptive innovations. FET thrives on promising new ideas, and its crucial role is to explore insights that can radically change our understanding of science and push the frontiers of ICT forward and outward.

W. Boch: FET's most distinctive characteristics are multi-disciplinarity and transformative approaches. The interaction and cooperation between ICT and other sciences is a driving force that needs to be strengthened. It will continuously transform ICT and enable truly disruptive innovation. The drive for truly multi-disciplinary and transformative research has stimulated radically new research domains: for instance, understanding how nature deals with complexity, learning how we can harness the collective intelligence of societies, and exploiting the control of atoms and the 'weirdness' of the quantum world to develop new quantum technologies that may make communication safer and promise immense computing power in the future.
Q: Could you explain the Commission's reinforced policy for FET research? What can Europe do to improve its performance in FET-type research even further?

M. Campolargo: Essentially, the question is how to create the best conditions to foster FET research across Europe and maximise its impact. This can only be achieved if we take actions to stimulate scientific curiosity, reward the stamina to see ideas moving from the drawing board to reality, and encourage the readiness to take calculated risks in science. We must convince public and private decision-makers that investing in curiosity-driven research is indispensable to nurture innovation in Europe in the long run. The Commission is committed to increasing the current annual FET budget by 20% per year until 2013. We propose not only to strengthen the current FET schemes that have proven so successful, but also to provide new opportunities and instruments to strengthen high-risk, foundational and transformative research in Europe.

W. Boch: One way to foster FET-type research is for the Member States and others to work together to develop joint research roadmaps and possibly launch joint calls in the FET domain. We already have the means to do this through instruments called ERA-NET and ERA-NET Plus, in areas such as quantum information processing and communication, and neuro-informatics. These are promising starting points for boosting synergies between FET and national research programmes. One of the ambitious new avenues set out in the FET communication will be Europe-wide FET Flagship Initiatives to tackle the scientific challenges of the 21st century. In the last few days we have already seen some initial indications of possible grand challenges that could be addressed by FET Flagships. They call for a sustained effort at several levels to reach a critical mass of resources that matches the ambition of the goals set.

A. Fiala: Talented scientists and engineers, and in particular the young ones, represent enormous potential for Europe. We must create the conditions to attract and retain these thinkers in Europe and enable
them to use their creativity and imagination to address the research challenges of tomorrow. FET is a means to retain talented researchers and empower them to engage in multidisciplinary endeavours. FET is instrumental in setting a more favourable environment for the scientific leaders of tomorrow. We need to work together with Member States to seed the ground for future FET-type research in Europe.

Q: Should Europe be working with global partners in these areas?

M. Campolargo: I am a convinced supporter of international cooperation, particularly in areas such as FET. We need to foster global cooperation to address global problems and make Europe an attractive place to engage the best talents from all over the world. FET should engage in research collaborations both with our partners from the developed economies – such as the US, Japan and Russia – and with new partners from the emerging economies – such as China and India – in areas where there is clear added value in pooling resources and concentrating excellence. Discussions with these countries are already underway at the very highest levels. Beyond bottom-up, ad-hoc collaborations, we should develop strategic partnerships and identify common priority areas of mutual interest and benefit, and also address specific topics and challenges that require a truly global effort.

Q: Finally, can you summarise the ambition and future for FET research in one line?

M. Campolargo: Building on the success of FET to achieve a Europe-wide commitment of all research actors to visionary high-risk research.

W. Boch: I see FET’s role and ambition as spearheading Europe’s future ICT technologies by bringing together the best scientists from Europe and around the world. It will not only deliver new scientific foundations for future ICTs, but also seed the European ecosystem for innovation and support a sustainable European society.

A. Fiala: FET is as much about ICT that shapes our world as it is about ICT that is shaped by the world. It is about ICT that reflects on its role in science and society.
THE 21ST CENTURY SCIENTIST
The profound impact of ICT on the way scientists work.
It’s impossible to be a scientist today without knowing about and using ICT. Whether undertaking trials for a new drug, developing novel nano-materials, or modelling climate change or environmental processes, researchers rely on advanced ICT systems, often massively distributed around the world. Already, ICT is central to the way science is done in the 21st century, and it is set to become even more so in the future.

One major driver is communications: increasing bandwidth within the Internet and other networks allows data to be moved around more quickly than ever before. Raw data from laboratories or scientific instruments can be downloaded and stored many miles away and shared with research teams around the globe. A second driver is storage itself, with continual decreases in the cost per byte making possible the creation of enormous data archives. Major programmes, such as the Hubble Space Telescope, the Large Hadron Collider and genomics efforts, generate petabytes (10^15 bytes) of data per year. Finally, advances in processors, with ever-faster chips, allow these huge amounts of data to be processed, analysed and visualised to create new scientific knowledge at unprecedented rates.
THREE REVOLUTIONS IN MODERN SCIENCE

Having now reached a critical mass, this growth in computing power, knowledge and data is not just affecting how science is practised today but also driving its future evolution, according to Prof. Henry Markram of the EPFL, Switzerland. He sees three key effects.

Firstly, science is being organised on an industrial scale, because it is more efficient to generate data in large-scale facilities, or networks of facilities, than in individual labs. “Across all of the sciences and medicine, the gathering of scientific data is being accelerated through the automation of measurement, recording, sequencing and imaging technologies enabled by ICT”, explains Prof. Markram. This, in turn, is driving future ICT to provide more effective and targeted solutions for storing and processing information.

The second trend Markram identifies is that the ability to organise and store huge amounts of data allows us to analyse it in new ways. This is leading to a new, correlation-based science, in which relationships between data are used to make predictions and intelligent decisions based on global knowledge. “We are learning to correlate everything with everything else”, says Markram. “This ability will become core as we archive more and more data”. The resulting data deluge, containing life’s, society’s and each person’s innermost knowledge, poses enormous challenges for ICT, for the public and for the scientific process. Novel information processing systems, such as automated problem posing and solving, are needed for the public and scientists to swim effectively in what Markram calls “the growing knowledge pyramid”.

This links to a third revolution in 21st-century science: being able to capture the data and knowledge in simulations. “Ultimately science will be able to gather, organise and embody all data, knowledge and human understanding in computer models”, explains Prof. Markram. “This will allow us to achieve the ultimate form of information processing: simulation-based research”. Simulation-based research promises to provide a new platform for science, innovation and commercialisation. In the life sciences and medical fields, we will see the building of the first detailed models of the human brain and body. These will bring personalised medicine, accelerated understanding of the knowledge the world has accumulated, and radical new technologies that will transform the industries of the future. This revolution will also impact society and politics as we learn to grapple with these powerful new technological capabilities and balance them with the social and ethical needs of society.
A perfect storm

According to Markram, the “perfect storm” arising from these three factors amounts to the greatest revolution in the scientific method since the Renaissance. “Scientists are in for a rude awakening”, he says. “As humans we can’t help but think linearly, undertaking repetitive experiments, etc., which are repeated elsewhere”. The industrialisation of science means robots are increasingly able to do this at least as effectively as humans, so changing the nature and meaning of being a scientist. “Oxford-style scientists walking in the garden tend to resist the industrialisation”, Markram observes. “But the next generation of scientists is going to have to be very adept at searching the wave of data and knowledge.”

Science is becoming, more than ever, a communal activity. It is also becoming very much more public. In common with many other areas of life, science must embrace the ‘Web 2.0’ phenomenon, which emphasizes openness, user involvement and the network effect. The public are now able to participate in science more closely than ever before: undertaking experiments, contributing observations, and helping to analyse and interpret data. This approach is already established in the astronomy community, where the general public has been enlisted to help classify galaxies by their shape, a task still done much more effectively by humans than by computers. Other examples of mass participation are effectively number-crunching exercises, in which users donate spare processor cycles on their computers to analyse raw data. One such is the SETI project searching for signs of extraterrestrial intelligence. Similar approaches have been used in mathematics in the search for prime numbers, and in the environmental sciences for running long-term climate models.
The ethical revolution

This ‘new science’ enabled by ICT poses huge ethical challenges, Markram believes. For example, within a few decades it could be feasible to extend the human lifetime several times over. Should we do it? How should we control the research? Similar ethical challenges arise in areas such as truly personalised medicine, the collection and use of intimate biometric and clinical information, and the inevitable fusion of the neuro-, cogno- and bio-sciences with ICT. “We must separate the science, the search for the truth, from the applications, what we do with that truth”, says Markram. The revolution we’ve not yet started – and the one 21st-century science urgently needs – is in our ethics.
Henry Markram
THE DIGITAL FLYWHEEL

Multi-disciplinary ICT-led research will be key to success in a 2030 timeframe.

Information and communication technology (ICT) is so essential to our lives today that we tend to take it for granted. We love our mobile phones, our games consoles, our satellite navigation systems, our MP3 players, and much else besides. But have you ever stopped to think where all this technology came from and where it’s all going?

One of the most remarkable achievements of the 20th century was the consistent growth in the power of computers. It’s well known that, according to the so-called ‘Moore’s Law’, the computation capacity of chips doubles every 12-18 months. This may not sound like much, but over an extended period the growth is enormous – equivalent to as much as a 1,000-fold increase each decade. Since the start of the computer era 60 years ago, the number of floating point operations per second that a computer can execute has increased by a factor of a billion billion. Similarly impressive advances have been seen in other areas of ICT, such as the bandwidth of the Internet and other networks, and the capacity of digital storage.

The result has been to make digital technologies the flywheel of modern economies. ICT is a major economic sector in its own right, accounting for 7% of the world’s GDP. More importantly, it is a motor for growth, driving innovation and progress in other economic sectors and across society. From manufacturing and logistics to healthcare and education, there is not a single sector that is not touched by ICT and changed by it.

Towards a singularity?

It’s tempting to think that now we all have computers, mobile phones and Internet access, there’s not much further ICT can take us. This view could not be more wrong. In fact, the pace of change is accelerating, with profound implications for how we live our lives. “It’s difficult for humans to comprehend the cumulative effects of technological change”, says Prof. Henry Markram, a brain researcher at EPFL, Lausanne. “The advances in science over the next 10 years will be equivalent to those of the last hundred years. And over the next one hundred years – in other words, within the lifetime of a baby born in 2009 – the advances will equate to those of the last 10,000 years. Humans had only just started to create settlements then!”

Markram’s view echoes futurologist Ray Kurzweil’s much-debated 2001 essay, The Law of Accelerating Returns. Kurzweil argued that change was occurring at such a rate that within a few decades machine intelligence will surpass human intelligence, leading to ‘the singularity’ – technological change so rapid and profound that it represents a rupture in the fabric of human history.

By 2020, Markram points out, scientists expect to have sequenced the genomes of around 700,000 species; by 2030 they are likely to have sequenced all known species on Earth. This will unlock a deluge of data and knowledge. Similar advances will be made in other areas of science, most significantly, perhaps, in our ability to understand, simulate and interact with the human brain. “I, and many other scientists, believe it’s possible to get to a functional model of the brain within ten years”, Prof. Markram explains. “Within 20-40 years it will be possible to integrate personal information into such models to get personalised simulations. We will be able to simulate diseases and treatments, and to couple the brain to robotics”. Equally breathtaking advances can be envisioned in other areas too, such as product design and drug research, thanks to simulation. “A simulation facility becomes as real as we have data for”, adds Markram.

Through the crystal ball

The impacts of such advances are likely to be profound and far-reaching, but there is nothing deterministic about this. We can
– and must – help shape the outcomes rather than just hitching a ride. Alongside the scientific and technological research, we need to contemplate our own future through exercises such as foresight studies and scenario building.

One such exercise is being undertaken by COST, a European collaboration framework for researchers in the field of science and technology. Its Foresight 2030 initiative is looking at the effects of the Digital Revolution on the world up to and beyond 2030, based on a series of workshops involving researchers, industrialists and policy-makers. “The technological progress in the next 21 years is likely to be exponentially larger than that experienced in the last 21 years”, explains Afonso Ferreira, COST Head of Science Operations. “If we note that 21 years ago the Internet was in its infancy, the web was embryonic, and GSM phones did not even exist, one may start wondering what mind-boggling technologies are going to be available in 2030.”

Some key themes are already emerging from the initial COST brainstorming sessions and discussions. For instance, experts envision a widespread realisation of e-Health. Tiny sensors worn next to the skin or implanted in the body will help monitor an individual’s state of health continuously and more accurately. New implants could replace worn-out body parts, or they could be equipped with augmented capabilities, such as extra strength in a bionic limb or embedded intelligence to complement decision-making. Medicines could be customised based on a person’s DNA, matching an individual’s genetic information to a drug’s therapeutic properties. Ultimately, patients will be able to consult with virtual e-doctors as if they were human beings.

Other possibilities identified by COST include: ultra-realistic virtual environments that produce ‘real’-feeling experiences (generally known as ‘presence’ technologies); huge global networks of sensors and smart objects, leveraging semantics to harness the world’s
data and knowledge; and systems able to automatically translate from one language to another, allowing multi-lingual and multi-cultural interaction.

The future is multi-disciplinary

While such foresight exercises involve a certain amount of speculation, it’s clear that future advances will reach beyond the boundaries of ‘pure ICT’. Multi-disciplinarity is already a key feature of research in ICT and is set to deepen in the years to come. Developments such as quantum information technologies, advanced robotics, presence, and bio-inspired computing all involve work at the intersection of scientific disciplines. Moreover, the scientific method itself is increasingly shaped by ICT in a variety of ways (see page 60). Hence, a capability in multi-disciplinary ICT-led research is of fundamental importance for European science and for the future of Europe’s economy and society as a whole.

Not surprisingly, then, multi-disciplinary research is an established feature of the European Union’s multi-year programme for ICT research. Started 20 years ago, the Future and Emerging Technologies (FET) action has yielded a number of successes (see page 66). Recognising the importance of this approach, the European Commission has recently come forward with policy proposals designed to strengthen Europe’s commitment to visionary, high-risk ICT research (see page 70). These include significant budget increases, as well as new initiatives to strengthen foundational and transformative research in Europe.

Such a commitment cannot come too soon. For Europe, and the world, facing up to the challenges of the 21st century – the financial crisis, climate change, rising population, sustainable development, alternative energy, improving welfare – will require ingenious solutions. We’re going to need all the momentum the digital flywheel can give us.
IS INNOVATION SUSTAINABLE?

Has our dependency on innovation become a vicious cycle?
In The Innovator’s Dilemma, Clayton Christensen observed the potential of certain disruptive technologies to improve a product or service in ways that the market does not expect. These disruptive innovations present a major problem for incumbents, who face the challenge – the dilemma of the title – of having to manage both incremental, sustaining innovations and new, disruptive ones. Few technologies are intrinsically disruptive or sustaining in character, Christensen noted; it is the strategy or business model the technology enables that creates the disruptive impact.

The prosperity enjoyed by modern economies is the result of innovation, which allows us to live more comfortably and achieve higher standards of living. On the other hand, in continually creating new goods and services, innovation itself uses precious resources, creates pollution and waste, and imposes increasing demands on the physical environment. So there is another dilemma here, what we might call ‘the sustainability dilemma’: how to innovate in ways that make our societies more sustainable. “We’ve reached a stage where innovation does not sustain our societies but instead threatens them”, argues Sander van der Leeuw
of the Santa Fe Institute in the United States. “We have to find a way out of this dilemma”, he continues. “We need to take a step back and ask what all this innovation is for. Do we really need all this new ‘stuff’?”

A short history of innovation

Innovation – in the sense of developing new ideas within a market system – is only a few hundred years old. “Until the 1800s, innovation was demand driven”, says Van der Leeuw. “It was a matter of finding a use for a brilliant idea. Nowadays innovation is supply driven: we try to find ways to adapt society to the brilliant ideas developed by the research system.” According to Van der Leeuw, this has resulted in society becoming ‘innovation dependent’. Whereas in the past innovation tended to be seen as a bad thing – since it disturbed the natural order – now we see it as the ultimate ‘good’. We invest in innovation for its own sake, not knowing how it works, what it will do, or where it will take us. “There is a lot of waste in the present system. We’re all becoming innovation junkies”, he says. “This endemic ‘wild’ innovation is threatening sustainability. There has to be another way.”
The information society as a complex system

Van der Leeuw is part of an emerging school of thought in innovation studies that sees innovation as a complex system. “We’ve become an Innovation Society”, says David Lane of the University of Modena, a chief proponent of the complex systems approach. In Lane’s view, our whole socio-economic system has become a bit like a pyramid selling scheme: we are fixated on generating goods and services for their own sake, with nothing to back it up. “You have to ask whether the innovation society as currently organised is a giant ‘Ponzi scheme’”, says Lane provocatively.

In common with Christensen, Lane emphasizes the role of disruptive innovations, which lead to what he calls ‘innovation cascades’. “With a disruptive innovation, one thing leads to another, and then another. Before long we have effects that ripple throughout society in ways we never expected. It’s completely non-linear. So as well as direct causations, we get emergent behaviours – this is where the complexity shows itself. Just look at the book: this transformed Renaissance Europe in ways that could never have been foreseen.”

The book’s equivalent today, as a disruptive, general-purpose technology, is ICT. “Information is not subject to the conservation principle”, explains David Lane. “Societies are structured by information processing, so the unbounded information made available by ICT represents a major stimulus to innovation.” Lane led a four-year FET project called ISCOM which studied the information society from a complex systems perspective.
Emergent properties in innovation

One example of this approach is the work of Denise Pumain and colleagues at the University of Paris, who have used complex systems theory to model innovation in urban systems. “As urban systems are complex systems, policies for sustainable development can take advantage of spontaneous trends in urban dynamics or try to counteract them”, she explains. Pumain treats a city as a system composed of multiple interacting intelligent agents, an approach known as a multi-agent system and well established in complex systems research, as sketched in the toy model below. Using models adapted from social science, the team has modelled future growth patterns and aims to predict where the next waves of urban innovation and development are likely to emerge. For Europe, the models have been calibrated against observed patterns of growth; Pumain’s team now aims to apply them to Chinese cities, where the dynamics are much harder to predict.

According to Sander Van der Leeuw, it is only by viewing innovation as a complex system – a system of multiple interacting agents – that we can hope to address the sustainability issue. “We have to organise ourselves differently and involve people”, he says. “The general population has lost touch with advanced technology. It’s invisible and they have no intuitive grasp of it”. As a result, he sees a growing power disconnect between the technocracy and society at large. The answer, he says, is to include both societal and environmental dynamics in our models of the future, and to involve the community from the outset. “We must educate scientists about society, and society about science!”
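The multi-agent flavour of Pumain's approach can be caricatured in a few lines of code: agents migrate preferentially towards larger or more innovative cities, and strikingly uneven growth emerges from these simple local choices. A toy sketch (entirely our illustration; the real models are calibrated against observed data and are far richer):

    import random

    # Toy multi-agent urban growth: each migrant picks a city with
    # probability proportional to its size times an 'innovation' bonus.
    populations = [10, 10, 10, 10, 10]       # five small towns -- assumed
    innovation = [1.0, 1.0, 1.5, 1.0, 1.0]   # town 2 rides an innovation wave

    for _ in range(10000):                   # ten thousand migrating agents
        weights = [p * b for p, b in zip(populations, innovation)]
        city = random.choices(range(5), weights=weights)[0]
        populations[city] += 1

    print(populations)  # the innovating town tends to pull ahead non-linearly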
TWENTY YEARS PUSHING THE HORIZON
FET is a pathfinder for new research topics, placing Europe in a world-leading position in emerging fields.
Research is like exploration. Sometimes it is sufficient to follow the contours, sure in the knowledge that the goal being sought lies in familiar territory; and sometimes it is clear that there’s a long journey ahead and the best way of achieving the goal is to strike out for the far horizon. In ICT that horizon is broadening all the time, as scientific knowledge and technological capabilities expand at accelerating rates.
The European Framework Programmes have a strong tradition of support for long-term, high-risk research in ICT. Such actions are now known as Future and Emerging Technologies (FET) research. For the last twenty years, FET-type actions have played a pathfinder role, promoting a visionary and exploratory perspective in ICT research so as to stimulate new ideas and new research activities. FET undertakes research of a longer-term nature, or involving particularly high risks, in areas where there is the potential for major advances and significant industrial or societal impact. It explores new frontiers, opening up emerging research opportunities and laying the foundations for future research programmes. “FET is about challenging current thinking”, explains Aleš Fiala, Head of Unit FET-Open at the European Commission. “We’re not surprised if things fail. Failure is part of the adventure”.

FET began life in 1989, as an action called Basic Research under the then Esprit programme. In 1995, under Framework Programme 4 (FP4), it was renamed Long-Term Research. These early actions aimed to lay the foundations for ‘next wave’ technologies to underpin the future development of European information technology R&D. The schemes were open and responsive, and focused on community building and on developing skills and infrastructure as well as on research. Many of the activities launched under these actions have since passed into ‘mainstream’ R&D under the subsequent Framework Programmes.
LANDMARKS IN 20 YEARS OF FET
• 1989 – Basic Research action launched within Esprit, FP2
• 1995 – Long-Term Research (LTR) action launched within Esprit, FP4, including the Open and Proactive schemes
• 1999 – Future and Emerging Technologies (FET) action launched within IST Programme, FP5
• 2003 – Open scheme opens to continuous submissions
• 2007 – FET schemes continued under FP7
Khalil Rouhana: "FET is about crazy but valid ideas"
For instance, the work on nanotechnology started in the mid-1990s under Esprit was later diffused into industrially-oriented R&D on components in microelectronics and microsystems, and more recently into research on nano-electronics and systems. Other examples where early work has since been taken up more broadly are distributed systems and networks, computer vision, intelligent interfaces, language and speech technology, and quantum cryptography.

Over recent years the FET action has been divided into two streams: FET-Open and FET-Proactive. FET-Open takes a bottom-up approach: unconstrained by established approaches, scientists are free to send in their proposals at any time and on any topic. FET-Proactive operates in a top-down manner, focusing resources on topics with important potential through strategic initiatives oriented towards long-term goals.

The benefits for the economy and society as a whole from such publicly-funded research are now widely recognised. Visionary, long-term and high-risk research underpins industrial innovation in many science-based industries and helps raise business productivity. Studies show that the private rate of return on publicly-funded research – in other words, the ‘profits’ for the economy from this type of public investment – ranges between 20 and 60 per cent. Including more general societal returns would push the figure higher still. Similarly, results from publicly-funded basic research have also been shown to be
strongly reflected in patents. A wealth of evidence, therefore, shows that the case for publicly-funded long-term research is a strong one.

FET’s success is measured in the build-up of knowledge, the development of competences, the exploration of new research avenues, and the maturing of new science and technologies. The impact of this research is visible at many levels. The scientific and technological achievements produced by the projects are widely publicised, including in the general press. Several FET initiatives have succeeded in placing Europe in a world-leading position in emerging research domains, for example in nanotechnology, quantum computing, and information interfaces. FET has also helped create multi-disciplinary, pan-European communities of scientists and researchers that drive progress in emerging fields. “Multidisciplinarity is at the heart of FET”, explains Wolfgang Boch, Head of Unit FET-Proactive at the European Commission. “We’re aiming to bring new science into technology but also to identify areas where technological change and social change are evolving together. We try to anticipate where communities are forming and what the topical areas are to invest in.”

While FET’s research horizon is long-term and most research work is carried out at universities and research institutes, FET is particularly attractive to SMEs that wish to collaborate and tackle issues falling outside mainstream research agendas. Many spin-off companies have been created as a result of such work. Recent policy proposals by the Commission aim to boost Europe’s commitment to high-risk, transformative research in ICT (see page 70). A significant increase in budgets for existing FET-type actions is foreseen, as well as a number of new activities. With these commitments building on such strong foundations, FET can look forward with confidence to the next 20 years.
FLYING THE FLAG FOR HIGH-RISK RESEARCH
The European Commission's proposal for FET Flagships in emerging fields of ICT.
Over the last 20 years Europe’s investment in high-risk, visionary research in information and communication technologies has yielded significant results. Research under the European Future & Emerging Technologies (FET) scheme has opened up new fields of study, fostered excellence in science and engineering, seeded innovation, and helped shape research agendas at national and industry levels. Despite these successes, goal-driven research remains fragmented across Europe as a whole. With the pace of ICT moving ever faster, Europe needs to do more to bolster high-risk ICT research for the ‘day after tomorrow’.

At the FET09 Conference in Prague, Commissioner Viviane Reding announced FET Flagships as the cornerstone of an exciting new long-term ICT research policy for Europe. These Flagships will be ambitious Europe-wide, goal-driven initiatives aiming at major scientific discoveries and technological innovations and requiring strong interdisciplinary research. They should be in areas where Europe has a strong foundational research base. Given the scale of these initiatives, the objectives will only be achieved by a joint effort between the European Union and its member states and, where appropriate, industrial and international partners. In particular, the Flagships can have a huge leveraging effect when the efforts are coordinated with funding bodies in member states.

Introducing a workshop session to discuss these new initiatives, Michel Cosnard, of INRIA, said it was clear there were many challenges in ICT and we could no longer rely only on small projects. “We have seen the same in physics over many years, but in ICT this scaling
of research projects has been missing”, Mr Cosnard observed.

The FET Flagships would aim to create world-class centres of excellence and to establish Europe as the leader in driving innovation in key fields, while also increasing the return on investment in high-risk targeted research. For instance, a Flagship could model and run large-scale simulations in order to understand the way nature processes information and to apply this knowledge to develop future biocomputers. Such a unique endeavour would attract the best computer scientists, biologists and physicists from Europe and beyond. Other examples could include: using ICT for modelling global change and complex systems; technologies for future computing devices, looking beyond the silicon era; new approaches for solving problems, especially ones involving huge amounts of data; and ‘living companions’, drawing together many different strands of robotics-related research.

While the “centre of gravity” should be in ICT, Mr Cosnard noted, other sciences are needed too. “We are not fighting cancer or global warming, but rather developing the tools and other means to address such challenges.”

At this stage these are just examples to illustrate the type of challenges Flagship initiatives could tackle. The precise topics have yet to be decided and will be the focus of debate within Europe’s scientific community over the coming months.
Alessandro Vespignani, of the Indiana University School of Informatics, said science projects such as the Hubble telescope and the Large Hadron Collider (LHC) had captured the public’s imagination, and ICT needed to emulate these successes. Henry Markram, of the Ecole Polytechnique Fédérale de Lausanne, noted that science will change tremendously in the 21st century, requiring massive interdisciplinarity. “We need a unified, coherent, integrated strategy”, Prof. Markram continued, “innovative but with good seeds so as to lower the risk”.

Wolfgang Boch of the European Commission emphasised that the Flagships proposal was the result of two years’ internal reflection by the Commission and the advisory group of the ICT programme. What had been put forward so far was the concept; the processes had yet to be defined. The present Framework Programme created boundaries in terms of research themes and project length. As a mini-programme, a Flagship presents an opportunity to break out of these constraints.

Discussion at the workshop focused on three key aspects. The first was the nature of the Flagships: their visions and goals, and whether they would, or should, be ‘top-down’ or ‘bottom-up’. A number of participants were concerned about the scale of the initiatives, given the problems often encountered with large programmes. Goals can change; multiple foci may develop as a programme matures; and actors can become territorial. Good communication will be essential in consolidating the vision and building a community with coherent goals. There is an inevitable tension between clear, structured objectives and having the flexibility to respond to results and changing circumstances as they arise. Pure science-based projects, such as the Large Hadron Collider at CERN, have very clear goals, so it is easy to see whether they have succeeded or not. In ICT, such discrete goals may not be possible. Hence, the Flagships should be thought
of as more of a framework. We will have to allow some ‘fuzziness’ and be prepared to adapt the goals along the way. Some elements are likely to be top-down and others bottom-up. Shared visions would be more important than goals, perhaps allowing for multiple goals. This, in turn, will determine the timeline.

This led to a second key concern: organisation. Should the Flagships be located in a particular institution? How should they relate to other initiatives such as the EIT’s Knowledge & Innovation Communities (KICs)? It was agreed that location is unlikely to be a significant factor. While there should be some aggregation points, the essence of the Flagships should be extensive networks within Europe and beyond. Unlike pure science projects, Flagships are unlikely to require large centralised facilities. Rather, we need convincing arguments for engaging stakeholders – researchers, member states, industry – and building communities around the chosen themes. The KICs could be one such vehicle, although as yet these are at a very early stage.

The final discussion area was the relationship to existing FET structures. Here it was emphasised that the Flagships would be an addition, not a replacement: the FET-Open and FET-Proactive schemes would remain and continue to evolve.

Closing the workshop, Mr Boch noted that this was an evolving process which was only just beginning. Full implementation could not be expected before the next Framework Programme, from 2014 onwards. In the meantime, preparatory actions could be started so that by 2013 two to three candidate initiatives were up and running, even if not yet in full gear.
MOVING BEYOND FICTION
How to push for visionary, high-risk research in ICT?
It’s clear Europe has a lot to gain from visionary, high-risk research in ICT. But how should we go about it? What should be the priorities? And how can we improve on what’s been achieved already? The European Commission has been thinking about these issues and has come forward with a wide-ranging plan to boost this kind of visionary research.

“Europe must be inventive and bold – especially in times of crisis”, said Viviane Reding, Commissioner for Information Society and Media, announcing the proposals at FET09. “Research seeds innovation, which is key for Europe's long-term global competitiveness. Scientific and revolutionary breakthroughs constitute enormous opportunities and we must bring the best brains together to make the most of them.”

A new era for Europe’s high-risk research

The Commission is calling on EU governments to catch up with the US, China and Japan by doubling their investment in high-risk research in ICT by 2015. It also wants to join up research efforts between national and European programmes more effectively. For its part, the Commission will increase its annual spending on research for future information technologies from €100 million in 2010 to €170 million by 2013.
One key element of the plan is a proposal to create new Flagship research initiatives that can drive a large and sustained effort of several hundred million euro. The Commission aims to launch at least two such Flagship initiatives by 2013, bringing together efforts across borders and scientific disciplines to achieve research breakthroughs – the development of biocomputers, for example. The Commission also proposes to help talented young researchers engage in high-risk research and to support research-intensive high-tech small and medium-sized enterprises (SMEs) that can turn early research results into new business opportunities.

Problems and pitfalls

This type of research isn’t all plain sailing, however, either for governments or researchers. Firstly, there’s what we could call ‘the interdisciplinary dilemma’. Science values quality and excellence, and researchers can find it difficult to achieve recognition in new and emerging fields. For young scientists especially, there is a risk of being left in an interdisciplinary limbo, where they are not taken seriously in any scientific discipline. Hence, opting for the new over the mainstream can carry considerable personal risks.

Secondly, for governments and funding agencies, the central question is how to manage this type of research activity. Interdisciplinarity inevitably
requires teams, often spread across many institutions and in some cases different countries or even continents. Such teams have to be managed in a way that meets overall objectives but also allows individual scientists the freedom to experiment and develop their own lines of enquiry.

Thirdly, there is the issue of power bases. There is a natural conservatism in science, which does not sit well with the ‘out of the box’ thinking necessary in emerging fields. The ‘established order’ generally pulls the purse strings, making it difficult for interdisciplinary teams to get funding.

What is needed is an environment where interdisciplinarity can thrive. We have to break down the barriers between disciplines, but we cannot and should not eliminate the disciplines themselves. It is a matter of creating ecosystems where the different disciplines can co-exist and interact. Industry, SMEs and users are an important part of such ecosystems, even if the applications concerned still lie far in the future. We also have to change cultures within science itself, so that ecosystems that span disciplines are respected and encouraged.

With more investment and cooperation in high-risk research on future information technologies, Europe can lead the way in turning bright research ideas into future technologies.

VISIONARY ICT RESEARCH IN JAPAN
Japan has often been a beacon for high-risk, visionary research, but can it sustain its support for such technologies? It is increasingly difficult, according to Hiroshi Nagano, Principal Fellow of the Japan Science and Technology Agency (JST). “In Japan, 70% of research is undertaken by industry, and they are gradually taking a rather short-term view”, Prof. Nagano explains. To overcome this, a programme was launched which will be co-funded over ten years, with industry’s contribution starting off small and then ramping up after three years, once results start to come through. “But in order to counter this seriously, the government’s next Science & Technology Plan will give greater emphasis to multi-disciplinary, transformative research.”

A set of societal challenges is at the heart of the Japanese approach. These include more sustainable development, improved healthcare, and security. Having defined these, scientists were consulted on the types of multi-disciplinary research necessary to address them. “It’s important to put the challenges first”, notes Prof. Nagano, “then everything else will follow.”

Attracting young scientists is another priority. “Young scientists can be wary of this type of research because they don’t know how it will be evaluated and whether they will get recognition later on”, says Prof. Nagano. The programmes have to take steps to ensure they attract the best researchers of the future.
Based on an interview with Hiroshi Nagano, Professor of Science and Technology Policy, National Graduate Institute for Policy Studies, Japan, and Principal Fellow, Japan Science and Technology Agency (JST)
PROGRAMME

Keynote
Henry Markram, Ecole Polytechnique Fédérale de Lausanne, Switzerland
"Shaping Science & Society of the 21st Century"

Welcome and Opening of the exhibition
Mirek Topolánek, Prime Minister of the Czech Republic
Viviane Reding, Commissioner for Information Society and Media
Parallel Sessions

The ultimate robot
Aaron Sloman - Owen Holland - Tomohiro Shibata - Tom Ziemke
Organisers: Chris Melhuish - Alois Knoll

Music and the brain
Philip Ball (organiser) - Antonio Camurri - Stefan Koelsch - Jason Warren

FET Flagships: big Goals, big Challenges, big Projects
Silvano Cincotti (organiser) - Herbert Dawid - Kaan Erkan - Mike Holcombe
Reflective computing
Nikola Serbedzija (organiser) - Joyce Westerink - Martin Wirsing - Stephen Fairclough

Single atom functionality in electronic devices
Thomas Ihn - Jan Van Ruitenbeek - Klaus Ensslin - Silvano De Franceschi - Marc Sanquer (organiser)

Toward agent-based technologies for innovative economic policy design: how and why
Ben Tatler - Miklós Kiss - Rolf Coulanges
Organiser: Erhardt Barth, University of Lübeck, Germany
Highlights of Future and Emerging Technologies
Moderation by Antti Peltomäki
Introduction by Wolfgang Boch & Aleš Fiala
Paolo Dario - Christoph Guger - David Lane

Panel discussion
The value of multidisciplinary transformative research for future Information and Communication Technologies
Torsten Wiesel - Ivan M. Havel - Hiroshi Nagano - Michael Oborne - Dieter Fellner - Khalil Rouhana
Moderator: Clive Cookson

Keynote
Anton Zeilinger, University of Vienna, Austria
"Quantum Information: The New Frontier"

Panel discussion
The way forward for strengthening multi-disciplinary research for future Information and Communication Technologies in Europe
Michel Cosnard - Qian Depei - Mário Campolargo - Jeannette Wing - Jiří Drahoš
Moderator: Wolfgang Wahlster
Keynote
Ehud Shapiro, Weizmann Institute of Science, Rehovot, Israel
"A Word-processor for DNA"
Quantum information technologies
Rainer Blatt - Philippe Grangier - Artur Ekert - Tommaso Calarco
Organisers: Vladimír Bužek - Elisabeth Giacobino

Embodied intelligence
Rolf Pfeifer - Paolo Dario - Kenji Suzuki - Eugenio Guglielmelli - Chiara Bartolozzi - Lijin Aryananda - Alin Albu-Schaeffer - Frédéric Boyer
Organiser: Cecilia Laschi

Complexity Perspectives on Innovation: Theory, Models, Policy
David Lane - Margherita Russo - Sander van der Leeuw - Denise Pumain

Bridging the gap between the brain and the machine: new challenges for art and science
Alexander Ya. Kaplan - Olga Jafarova - Janez Jansa - Anders Sandberg - Reinhold Scherer - Pavel Smetana (organiser)
Henrik Ehrsson, Karolinska Institute, Sweden
"Two legs, two arms, one head. Who am I?"
Visions: some key challenges for pervasive adaptation
Alois Ferscha - Michel Riguidel - Jeremy Pitt - Ben Paechter
Organiser: Jennifer Willies, Edinburgh Napier University, UK
Jeannette Wing, National Science Foundation, USA
"Computational Thinking and Thinking about Computing"
Modelling and guiding attention in an increasingly complex world
Dario Floreano
The frontiers of algorithmic complexity: classical vs quantum
Mario Rasetti (organiser) - Gregory Chaitin - Paul Vitányi

Striving for realism in virtual worlds: sensation, perception, technology and the auditory brain
Isabelle Viaud-Delmon (organiser) - Peter Brugger - Olivier Warusfel

Open source science
Maurizio Marchese - Gloria Origgi - Stefan Tai - Roberto Casati
Organiser: Fabio Casati, University of Trento, Italy

Collective social phenomena in techno-socio networks
Arvid Kappas - Mike Thelwall - Beatrice de Gelder - Paul Lukowicz - Janusz Holyst (organiser)

Collective Robotics: adaptivity, co-evolution, robot societies
Paul Levi - Dario Floreano - Alan Winfield - Serge Kernbach (organiser)

COST Science Café: What will your life be like in 2030?
COST, European Science Foundation, Brussels
Moderated by Professor Soulla Louca, University of Nicosia, Cyprus
Keynote
Alain Berthoz, Collège de France, Paris
"New frontiers between IST, robotics and Cognitive Neuroscience"

Albert-László Barabási, Center for Complex Network Research, Northeastern University, and Department of Medicine, Harvard Medical School
"From Networks to Human Mobility Patterns"

Parallel Sessions

Envisaging self-powered nanodevices. Help us make it happen.
Gabriel Abadal - Zachary Davis - Javier Alda - Francesc Moll
Organiser: Violeta Gràcia, Universitat Autònoma de Barcelona, Spain
Aesthetics as the heart of science
Paul Bourgine (organiser) - Jean Petitot - Louis Bec
Unconventional computing
Andrew Adamatzky - Milan Stojanovic
Bodily intelligent modular robots
Kasper Støy (organiser) - Peter Aerts - Andre Seyfarth - Rolf Pfeifer
Trust and security interrelationship
Yoram Ofek - Alessandro Zorat (organiser) - Amir Herzberg - Bart Preneel - Antonio Mana - Ahmad-Reza Sadeghi - Paolo Tonella

Presence: real actions in virtual environments
Giulio Ruffini - Martyn Bracewell - Maria Victoria Sanchez-Vives - Mel Slater - Paul Verschure
Organisers: Giulio Ruffini - Heinrich Bülthoff

Through the crystal ball: Europe's position in the digital revolution by 2030
Gian Mario Maggio (organiser) - Sophie Beaubron - Zuzana Vercinska - Afonso Ferreira - Soulla Louca - Mieczyslaw Muraszkiewicz - Imrich Chlamtac - Henry Markram - Giovanni Colombo - Maria Teresa Gatti - Elina Hiltunen
The body and the urban space
Michael Smyth - Ingi Helgason (organiser) - Rod McCall - John Waterworth
Language and the mind: fiction or reality?
Jan Hajič (organiser) - Roger Moore - Albert Kim

Challenges and visions for global computing
Martin Wirsing (organiser) - Ian Stark - Christos Kaklamanis

Neurofunctional materials: a new bottom-up approach to information processors
Victor Erokhin (organiser) - David N. Reinhoudt - Bernhard Schölkopf

Visual analytics – Mastering the information age
Gennady Andrienko - Daniel A. Keim - Jörn Kohlhammer (organiser) - Margit Pohl - Kai Puolamäki - Giuseppe Santucci
Closing address
Ondřej Liška, Minister of Education, Youth and Sports, Czech Republic
Rudolf Strohmeier, Head of Cabinet of the Commissioner for Information Society and Media

Closing Performance: Multimodal Brain Orchestra
SPECS, Synthetic, Perceptive, Emotive and Cognitive Systems group
Jonatas Manzolli, Music Composition
Behdad Rezazadeh, Video Art
g.tec, Brain-Computer Interface Technology
Programme Committee
Michel Cosnard - INRIA - Co-chair
Paolo Dario - Scuola Superiore Sant’Anna - Co-chair
Heinrich Bülthoff - Max Planck Institute for Biological Cybernetics
Jean-Claude Burgelman - European Commission
Vladimír Bužek - Slovak Academy of Sciences
Ignacio Campino - Deutsche Telekom
Daniel Donoval - Slovak University of Technology, Bratislava
Dieter Fellner - Fraunhofer IGD
Dario Floreano - EPFL Laboratory of Intelligent Systems
Elisabeth Giacobino - Ecole Supérieure d’Optique
Chris Hankin - Imperial College London
Janusz Holyst - Faculty of Physics, Warsaw University of Technology
Jan Hrušák - Academy of Sciences of the Czech Republic
Alois Knoll - Technical University of Munich
Vladimír Kučera - Masaryk Institute of Advanced Studies, Czech Republic
Lenka Lhotská - Czech Technical University in Prague
Vladimír Mařík - Czech Technical University in Prague
Chris Melhuish - University of Bristol
Corrado Priami - The Microsoft Research – University of Trento Centre for Computational and Systems Biology
Heinrich Stuckenschneider - Siemens Corporate Technologies
Miroslav Tůma - Academy of Sciences of the Czech Republic
Johan Van Helleputte - IMEC
Alessandro Vespignani - Indiana University School of Informatics
Adrian Stoica - NASA Jet Propulsion Laboratory, Biologically Inspired Technology and Systems Group
Christos Nikolaou - University of Crete

European Commission
Prabhat Agarwal
Jean-Marie Auger
Wolfgang Boch
Aymard de Touzalin
José Luis Fernández Villacañas
Aleš Fiala
Paul Hearn
Wide Hogenhout
Pekka Karp
Wesley Van Dessel
Walter Van de Velde
LEGAL NOTICE
By the Commission of the European Communities, Information Society and Media Directorate-General.
Neither the European Commission nor any person acting on its behalf is responsible for the use which might be made of the information contained in the present publication. The European Commission is not responsible for the external web sites referred to in the present publication. The views expressed in this publication are those of the authors and do not necessarily reflect the European Commission’s official view on the subject.
© European Communities, 2009
Reproduction is authorised provided the source is acknowledged.
For further information:
European Commission
Directorate-General Information Society and Media
Future and Emerging Technologies
B-1049 Brussels
Infodesk: infso-ictfet@ec.europa.eu
http://cordis.europa.eu/fet