April 2018 | volume: 40
MINING MALVERTISEMENT
How secure are you online?
QUANTUM COMPUTING
Do you know the principles and architecture?
BITCOIN
How much do you really know about the new currency?
table of contents

23 Mining Malvertisement
28 Bitcoin: With the recent price spikes to over €10,000 per Bitcoin at the end of 2017, a lot of attention has been drawn to the world’s largest cryptocurrency. BY MORITZ KNÜPPEL
41 Quantum Computing
59 Transhumanism
40th edition
INTRO 5
Editor’s Word
6
Chairperson’s Word
7
Meet The Team
8
EESTEC Infographic
10
Postcard From Academic Events
PROJECTS
34
Internet Of Things
14
Our Homes Are Getting Smart(Er)
16
Artificial Intelligence - The Possibilities In Narrow And Strong AI
36
A Brief Summary Of VR
20
Top Tech Gadgets Of The Moment
25
Where Do You See Your Startup In 10 Years?
26
Web Development
49
DEEP SPACE GATEWAY Lunar Space Station
51
Carbon Nanotube-Based Gas Sensors And Electronic Noses
EESTEC Soft Skills Academy
54
Apps On Phones
56
Fiber Optic Networks
57
Microfluidic Sensors
TECH LAB 38
17
Renewable Energy
EESTech Challenge
ON TECH TOP 12
48
A Short Introduction To Cryptography
CAREER SECTION 44
3D Printing
60
Best Universities
46
PLC - Brain Of Every Industry
62
Alumni Story
65
#PowerYourFuture
47
Autonomous Vehicles
EESTEC magazine
EDITOR’S WORD
Dear readers, It has been one year since the last issue of EESTEC Magazine. This year was full of challenges for all of us, just like any other. The same goes for EESTEC Magazine: in this edition we made an effort to include as many technical articles as possible, while keeping quality in mind. Therefore, I am delighted to say that it is an enormous joy to see this issue, with its many technical articles, published! We are stepping into a new era of EESTEC Magazine and hopefully you will find many interesting articles inside. Are you curious to find out what the top tech gadgets or phone applications of the moment are? Are you familiar with Bitcoin, space engineering, transhumanism or virtual reality? You will certainly find all of that here, and much, much more. If you are interested in how an experienced EESTECer deals with the challenges of the everyday business world, and how EESTEC contributed to that, then our Alumni story is what you just might have been waiting for. Moreover, on some of the pages you can find information about our projects and the network in general. Finally, I wish to receive many comments from all of you. Tell me what you loved, whether this new concept is something you want to see continue, or whether you wish for something to change. Also, I want to thank my team members for having so much to say in this issue, because that is what EESTEC is about: sharing ideas so we can all find our way to some kind of success one day!
Have a nice time reading and learning something new! Marija Milanović, Publications Team Leader, LC Belgrade
Printed by: WIRmachenDRUCK GmbH Mühlbachstr. 7 71522 Backnang Deutschland (Germany) Funded by: Techniker Krankenkasse Bramfelder Straße 140 22305 Hamburg APRIL 16th NO: 2018/1 VOLUME: 40 EESTEC INTERNATIONAL ELECTRICAL ENGINEERING STUDENTS’ EUROPEAN ASSOCIATION MEKELWEG 4, 2628 CD DELFT, THE NETHERLANDS, WEB: EESTEC.NET EMAIL: BOARD@EESTEC.NET Put together by the EESTEC International Design Team Main Designer: Alena Delkić
CHAIRPERSON’S WORD Dear EESTECers,
Another year has come to an end for our Association. It has been 32 years since the day our Association was born, 32 years since the idea of connecting all electrical engineering students across Europe took shape, with the vision of establishing new connections and friendships and enhancing the professional lives of all members. EESTEC develops every year through the development of its members. The new projects established in recent years and the activities being organized all aim to fulfil the vision of our Association. We can see a network growing bigger and stronger every year. So many generations have passed through this Association, and they all carry the best memories from their EESTEC journey. EESTEC is a big family, where someone can take on different challenges, make mistakes, learn something new and develop skills that will have a positive impact on their personal and professional life.
This Magazine serves the goal of enhancing the professional lives of our members, sharing our knowledge in our field and highlighting our uniqueness as a technically oriented Association. This year EESTEC Magazine has made major changes towards the technological topics which identify our Association. We hope that you will enjoy reading this year’s issue, and learn and get inspired by the variety of topics included in it. Some of you might get interested in 3D printing, some of you might start searching for bitcoins, some of you might become a part of EESTEC and, who knows, maybe some of you will become part of the next issue of our Magazine. Before we let you dive into the new issue of our Magazine, we would like to thank all the people who were around us during this journey. We believe that the future will be even brighter for EESTEC. Enjoy reading our Magazine for this year. On behalf of the Board, Natalia Saranti
MEET THE TEAM
Károly Gabányi
Hey, I am an IT engineer, a graduate of the Budapest University of Technology and Economics. I’m still a member of LC Budapest and I would like to help the organization with my work!
Alexandru Ene
Ana Pantelić
Hello! I’m Ana from LC Belgrade and I’ve been in EESTEC for 2 years and counting. This is a unique opportunity to work on various topics and, in the end, collect them in one place, in one issue. Enjoy it.
József Mák
If you spent more than 3 active years in EESTEC, then you know this is the right place for you to be in and EESTEC is your family. Hi, my name is Alex, a member of LC Bucharest, and for my 5th year in EESTEC, as an alumni member, I chose to be part of EESTEC Magazine and help the team achieve its objectives. There have been a couple of months in which we worked really hard on this issue, and we are glad our work has finally paid off. We hope you will enjoy it too! :)
Hi! I graduated as an Info-Bionics Engineer, M.Sc., at the Pázmány Péter Catholic University in Budapest, Hungary. At the moment I am working as a researcher in quantum computing, and I hope to become a Ph.D. student in the near future. In my free time I do wall climbing, hiking, learn languages and, of course, I do EESTEC stuff.
Moritz Knüppel
Simona Šutić
I hold a Bachelor of Engineering degree in Electrical and Computer Engineering and I work as an IT Consultant in Hamburg, Germany. During my studies, I served in various positions in EESTEC, most notably as its Treasurer in 2015/16. Having come across Bitcoin in 2012, I have been investing in it and developing with Bitcoin and other cryptocurrencies and technologies ever since.
Hi, my name is Simona, and I am a member of LC Belgrade. My first international team was Magazine Team, and it turned out to be the right choice for me! I had an amazing opportunity to meet a lot of new people and share our creative thoughts and ideas. We worked really hard to make this issue of EESTEC Magazine special for you, and I hope that you will enjoy reading it, as much as we did working on it.
Tarik Omerćehajić
Milica Stefanović
Hi, my name is Tarik. I’m a member of LC Tuzla and part of the EESTEC Magazine Team. I’ve been an EESTEC member for a couple of years and I finally decided to go international. Didn’t regret it. Hope you’ll enjoy reading the content we prepared for you ;)
Technically, an EESTEC alumna. Essentially, an EESTECer for life. “EESTECally” speaking, LC Belgrade’s fundraiser, international and local OC, and EESTech Challenge project team lead.
Anastasia Choluriara
Individually, we are one drop. Together, we are an ocean. I’m in love with the Magazine Team! It’s so nice working in such a multicultural way, discussing and creating something unique together.
Nikola Petrović
Hi, my name is Nikola Petrović. I am 22 years old. My hometown is Leskovac, Serbia. I am studying robotics at the School of Electrical Engineering, University of Belgrade, in Serbia. Also, I am a huge fan of astronomy and space engineering.
Mihajlo Džever
Hello! My name is Mihajlo and I am a member of LC Novi Sad. I’ve been in EESTEC for a year and a half, and during that time I have met great people and learnt stuff I never thought I would learn. All in all, it is a great experience and I can’t wait to see what the future will bring next. P.S. The guy below me is an asshole. Bye bye!
Milan Medić
Oops! Error: 404 nothing interesting to say was found
EESTEC Infographic
112 events in 2017
48 Training Team members in 15 countries
5000 EESTEC members, 1300 active
226 members active in EESTEC International
41 LCs, 6 Observers, 4 JLCs
[Bar chart: percentage breakdown across the network; category labels not recoverable.]
EESTEC Story “I was part of over 40 international events, including ECM and Congresses. I made so many mistakes, and learned from all of them, while having so much fun on the way.”
“I absolutely fell in love with it when I went to a workshop organized by LC Zurich in April 2005.”
“We all dreamed of having a great, big student organization, spread around all technical universities in Europe, with quality workshops, and bonding together.”
“In EESTEC we worked hard (and partied hard) and we built long-lasting friendships.”
“The knowledge and experiences from EESTEC are worth a lot, and they created a much better career path for me.”
“EESTEC offers personal growth – acquiring soft skills, and technical skills above the ones that the University is able to teach us”
Elena Mancheva Zajkovska EESTEC Alumni
Postcard from academic events
C.A.R.S: Come and Ride in the Smartest was held by LC Ankara in the period from 27.02. to 05.03. in 2017. Participants had an opportunity to learn everything about Smart Car Technologies and how to apply IoT technologies to cars.
Mixed Reality: Extended Vision was held by LC Istanbul in the period from 09.12. to 16.12. in 2017. Participants were introduced to a new notion in the industry, MR (Mixed Reality) or hybrid reality, which is a perfect combination of Virtual Reality and Augmented Reality.
T.O.O.L. Boostcamp was held by LC Athens in the period from 03.12. to 08.12. in 2017. The main goal of this workshop was to introduce participants to the various techniques and tools which will help them manage resources and lead their Commitments.
The Big Data Theory was held by LC Budapest in the period from 01.10. to 08.10. in 2017. Another great workshop. After learning about Big Data, participants immersed in the nightlife of Budapest.
3rd Design Sprint was held by LC Munich in the period from 18.06. to 25.06. in 2017. After hard work and a very long week, participants were able to learn some of the most fundamental skills in graphic design.
ASW - Medrock Week was held by LC Ljubljana in the period from 28.07. to 06.08. in 2017. The event was organized in partnership with Marand d.o.o. and the main goal of the event was to find solutions for front-line healthcare challenges in the fields of blockchain applications, remote patient monitoring, data analytics, machine learning, VR, doctor chatbots etc.
RE:PRINT3D was held by LC Cosenza in the period from 14.05. to 21.05. in 2017. Using 3D Printers participants were able to learn basics of the Open Source RepRap model based on the Arduino electronic system.
Reshaping Reality was held by LC Tampere in the period from 23.09. to 30.09. in 2017. After creating fantasy worlds, participants returned to reality and unwound in Finnish-style saunas.
The final round of EESTech Challenge was held in Zurich in the period from 08.05. to 12.05. in 2017. During the 24-hour hackathon, the participants’ task was to solve a common problem in Switzerland, cow recognition, using supervised machine learning.
Spring Congress was held by LC Ljubljana in the period from 22.04. to 28.04. in 2017. The Spring Congress is the biggest event of our Association. During the Spring Congress, we make the most important decisions that shape the very future of EESTEC.
On tech top
Internet of Things Author: Milan Medić
What is IoT? The Internet of Things, or IoT for short, is a global network of interconnected devices, ranging from small home appliances to vehicles and other items, equipped with electronics, sensors, and network connectivity that enable these devices to connect and exchange data with each other. Each “thing” is identifiable through its embedded system and is able to interoperate with the existing Internet infrastructure. These objects can be sensed and managed across the existing infrastructure, which improves efficiency and accuracy while also reducing human intervention.
The beginning of “IoT” The term “Internet of Things” was first used by Kevin Ashton of MIT’s Auto-ID Center in 1999. But the most interesting thing is that the first concept of network-connected devices originated at the start of the ’80s, or more precisely in 1982 at Carnegie Mellon University. The first device of this kind was none other than a Coca-Cola vending machine which was able to report its inventory, while also reporting on whether its newly loaded drinks were cold or not. Pretty cool (pun intended), right? The first glimpses of the future to come appeared in “The Computer of the 21st Century”, a paper published by Mark Weiser. In this paper, Weiser coined the term ubiquitous computing, a concept in computer science and software engineering where computing is made to appear anytime and everywhere. The underlying technologies that would support this concept were the Internet, advanced middleware (software glue that provides services to applications beyond those available from the OS), operating systems, sensors, microprocessors, and so on. Sounds familiar? If it does, then you read the first half of this article; if not, go back and read it. This paper, along with some others, produced the vision of IoT. Besides the aforementioned papers, there were other notable contributions that would shape IoT into what we know today.
In IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, IoT was described as “moving small packets of data to a large set of nodes, as to integrate and automate everything from home appliances to factories”. Big companies such as Microsoft proposed their own solutions for IoT. One of the most notable was Microsoft at Work. This solution would equip age-old common business machines like fax machines and photocopiers with a common protocol, which would allow control and status information to be shared with Microsoft Windows computers. In 1999 we are introduced to Kevin Ashton. This was the guy who coined the term Internet of Things, though he prefers “Internet for Things”. Identification was seen as a necessary part of the IoT structure, so he proposed using radio-frequency identification (RFID), but solutions such as barcodes and digital watermarking were proposed as well. This would allow for such things as ubiquitous inventory control, or would grant motion-picture publishers a way to enforce copyright restrictions and digital rights management. Finally, in 2004 we get a new model for future interconnection. This model describes a universe that consists of a physical world, a virtual world, and a mental world. The model also includes a multi-level architecture that consists of three levels: at the bottom, nature and the devices; above that, the Internet, sensor networks, and mobile networks; and at the top level, the human-machine communities.
Where is IoT being used now? In the age of information, we have become more dependent on the Internet than ever. Consumer-grade tech has evolved with the requirements of the common man, but the industry had to catch up to trends as well. How does this all come into play? The answer is IoT. Ranging from smart watches to smart cities, IoT has become an all-encompassing entity that surrounds us without being noticed much by the common man. As already mentioned, one of the more notable and common devices is the smartwatch.
These devices are equipped with sensors and software that collect data and information about their users. This information is later processed and used for various tasks, from heart monitoring and global positioning to multimedia playback. One of the requisites for these devices is high efficiency while maintaining ultra-low power demand and small size. Essentially, they are wearable computers. These devices mainly run Linux-based operating systems, such as Android, Sailfish or Tizen, but one notable exception is the watchOS system used by Apple on its smartwatch line of products. The grandfather of these devices is none other than the digital watch, which debuted in 1972. Called the Pulsar, it was manufactured by the Hamilton Watch Company. It was not until 2009 that the world saw the first true smartwatch. Developed by Burg Wearables, the Burg smartwatch became the first one to have its own SIM card, and it did not require tethering to a smartphone. Three years later, the world was introduced to Pebble, an innovative smartwatch that raised the most money on Kickstarter at the time, reaching a whopping $10.3 million between the 12th of April and the 18th of May, 2012. Its battery lasted up to 7 days, and it had Bluetooth 2.1 and Bluetooth 4.0 (Low Energy). The screen it used was an ultra-low-power LCD display. And as a finishing touch, it also sported a waterproof rating down to 40 meters, in both salt and fresh water. The first smartwatch able to offer the full spectrum of smartphone functionality also emerged from Kickstarter. The TrueSmart was produced by Omate and made its public debut in 2014. Since then, more and more companies have been getting involved in the smartwatch business, among them Apple, Google, Microsoft, Sony, and Qualcomm. Another notable representative of IoT is the fairly new concept of “Smart Cities”. As we all know, maybe a little bit too well, cities usually don’t function at peak efficiency.
This is where such concepts come in. A Smart City is an urban area which uses different sensors to supply information that is used, in conjunction with other physical devices connected to the network, to optimize its resources. The information collected includes data from various devices and citizens. It can also include data from transportation systems, power plants, waste management, law enforcement, schools, hospitals and others. Besides efficiency, this concept could be used for more effective engagement with the local populace in governance and decision making, improving the city’s institutions with a form of e-governance. One of the more famous smart cities is Amsterdam, which started its smart city platform in 2009. This platform constantly challenges businesses and residents to test innovative ideas and solutions for urban issues. One of those projects is City Alerts, which provides adequate information about emergency services during critical incidents. A smart city means a greener city, and one example of cleaner
garbage collection was presented by Ecube Labs. Their solution uses a solar-powered garbage compactor bin to compress more garbage than standard trash bins can hold. One of the less appealing aspects of a Smart City is also one of its most defining characteristics. Being connected to everything and able to manage all sorts of services means collecting all sorts of data, as stated previously. But what does that mean for my privacy? Surveillance is unavoidable in the reality of smart cities. Filled with cameras and sensors, Smart Cities have been touted as “safe cities”, in the sense that the authorities will be able to identify and capture criminals, or even stop crimes from happening in the first place. The truth is that a definitive answer is rather complicated, but some people have been making tools to help address these kinds of concerns, one of them being the visual guide “Shedding Light on Smart City Privacy”. This guide helps citizens, companies and communities better understand the technologies at the heart of smart city projects and the potential impact those technologies can have on their privacy. From smart watches to smart cities to smart toilets (yes, those do exist), IoT devices seem to be constantly on the rise. But how big is IoT right now? Well, in 2016 there were an estimated 3.9 billion connected IoT devices in the world, and future predictions have an even greater number in mind. By the year 2020, it is estimated that the world will have 21 billion connected devices! Those numbers will contribute to AI becoming a “thing”. From smart light bulbs and smart fridges to coffee makers and such, there is always data to be collected. That data will probably go into facilitating machine learning. With the knowledge it has gathered, an AI would be able to respond more easily to your preferences and needs. Armies, too, have become more and more interested in IoT.
Embedded and wearable systems will be able to turn a soldier into a node in the network, to enhance their performance and track their well-being. The industrial sector and manufacturing companies have also become more and more interested in connecting their devices. From predicting failures to improving efficiency, companies are slowly but surely seeing more and more opportunities in IoT. Airbus is one example. They use IoT systems alongside human operators to reduce the number of errors in production. Nowadays they put planes together mostly manually. They want to change that by implementing smart tools that will be able to see whether a rivet was properly placed or not. They plan on accomplishing that by tracking system performance locally, not in the cloud. The tools used must communicate locally, smart tools to smart wearables, such as glasses with a heads-up display. There is no way to be 100% certain about what the future holds for IoT; the only thing we can be sure of is that IoT is not going anywhere anytime soon.
Our homes are getting smart(er) Author: Simona Šutić
Nowadays, when the need for energy and the awareness of nature conservation are growing every day, the concept of a smart home is developing and improving. Measuring consumption and developing more intelligent control systems enables cost savings and increases comfort and security. The main goal of developing smart houses is energy efficiency, but also giving customers the best experience in maintaining their household. A smart house is building automation for a home. It involves the control and automation of lighting, heating, ventilation, air conditioning, and security, as well as home appliances such as washers, dryers, ovens and refrigerators. Home automation provides homeowners with security, comfort, convenience, and energy efficiency by allowing them to control their devices using a smart home app on their smartphone or other networked devices, often using WiFi for remote monitoring and control. As a part of the Internet of Things (IoT), smart home systems and devices often operate together, sharing consumer data among themselves and automating actions based on the homeowners’ preferences. Modern systems generally consist of switches and sensors connected to a central hub from which the system is controlled via a user interface: a wall-mounted terminal, mobile phone software or a web interface.
The year 1975 was crucial for smart homes; that is the year when the first communication protocol for home automation, X10, came to life. It sends 120 kHz signal bursts over the house’s existing electrical wiring, which carries the signals to programmable outlets and switches. X10 had its faults: it was not immune to noise on the line, and sometimes signals would be lost during transmission. One of the main problems with early devices was that they could send signals but could not receive them. Later, a newer, two-way version of X10 was developed.
At the beginning of the 21st century, a lot of new technologies were invented and developed, and along with them smart house technology reached new levels. Even though X10 was still in use, new wireless technologies were invented, such as Zigbee and Z-Wave. After 2010, many companies, such as Amazon, Google, Samsung, Apple and others, developed their own smart home products and platforms. By doing this, smart home technology has become more accessible to a wider audience and is no longer a thing of the distant future.
Newly built homes are often constructed with smart home infrastructure in place. Older homes, on the other hand, can be retrofitted with smart technologies by purchasing smart home kits that include all the devices and apps needed to implement smart home technology. While many smart home systems still run on X10, Bluetooth and WiFi have grown in popularity. A smart home is not composed of separate smart devices and appliances, but of ones that work together to create a remotely controllable network. All devices are controlled by a main automation controller, often called a smart home hub. The smart home hub is a hardware device that acts as the central point of the smart home system and is able to sense, process data and communicate wirelessly. It combines all the different apps into a single smart home app that homeowners can control remotely. Machine learning and artificial intelligence are becoming increasingly popular in smart home systems, allowing home automation applications to adapt to their environments. For example, some voice-activated systems contain virtual assistants that learn and personalize the smart home to the residents’ preferences and patterns. Some of the most common implementations of smart house technology, which we already have access to but might not be aware of, are automated security systems, motion sensors, lighting control systems and many more. Smart home technology can be of great use to the elderly and the disabled, too. When installing smart home systems, customers can face a couple of problems. It is important to choose components that are familiar and user-friendly. For these reasons, it may be easier to start with a very basic home network and expand as enhancements are needed. Smart homes also come with some security issues: if hackers gain access to your smart home, they can turn off the alarm and lighting systems, leaving the house vulnerable to a break-in.
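To make the hub-and-devices model described above a little more concrete, here is a minimal, hypothetical sketch in Python. It is not how any real hub product works internally (real systems speak protocols such as Zigbee, Z-Wave or WiFi); all the names and topics are invented for illustration. The idea is simply that devices publish readings to a central hub, and the hub routes them to whichever automations have subscribed.

```python
# A toy smart-home hub: devices publish readings to topics, and the hub
# routes each reading to the automations subscribed to that topic.
# Entirely illustrative -- names and topics are invented.

class SmartHomeHub:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callback functions

    def subscribe(self, topic, callback):
        """Register an automation to be notified about a topic."""
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        """Route a device reading to every automation watching this topic."""
        for callback in self.subscribers.get(topic, []):
            callback(payload)


hub = SmartHomeHub()
alerts = []

# An example automation: log an alert whenever the front door changes state.
hub.subscribe("door/front", lambda state: alerts.append(f"front door {state}"))

# A door sensor reports in; a topic nobody watches is silently ignored.
hub.publish("door/front", "open")
hub.publish("window/kitchen", "closed")

print(alerts)  # ['front door open']
```

The point of the sketch is the decoupling: the door sensor knows nothing about the alarm automation, which is what lets a hub combine many different devices into one remotely controllable network.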
Even though smart home technology has developed beyond measure, we are still far from having a fully functional and self-sustaining home. There is a long way ahead of us until we reach the point where a single device controls our whole house.
ARTIFICIAL INTELLIGENCE The Possibilities in Narrow and Strong AI Author: Anastasia Choluriara
From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.
“The techniques of artificial intelligence are to the mind what bureaucracy is to human social interaction.” Terry Winograd
Artificial intelligence today is properly known as narrow AI (or weak AI): AI that is designed to perform a narrow task (e.g. only facial recognition, only internet searches, or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or
solving equations, AGI would outperform humans at nearly every cognitive task. In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security, and control. An important question is: what happens if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks? A superintelligent AI is unlikely to exhibit human emotions like love or hate, so there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios are most likely:
1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties.

2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you expected but literally what you asked for. The best example of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest.
If we’re no longer the smartest, are we assured to remain in control?
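The airport example above, the car doing literally what you asked rather than what you meant, can be boiled down to a toy optimization problem. The following sketch is entirely hypothetical (the routes, numbers and field names are invented), but it shows how an objective that mentions only speed happily selects an option no human would accept:

```python
# A toy illustration of goal misalignment: an optimizer told only
# "minimize travel time" picks an option a human would never want.
# All data here is invented for illustration.

options = [
    {"route": "highway",      "minutes": 25, "passenger_comfort": 0.9},
    {"route": "police chase", "minutes": 10, "passenger_comfort": 0.0},
]

# The literal objective: the fastest trip, nothing else.
fastest = min(options, key=lambda o: o["minutes"])
print(fastest["route"])  # police chase

# What we actually meant: fast, but only among acceptable trips.
acceptable = [o for o in options if o["passenger_comfort"] > 0.5]
intended = min(acceptable, key=lambda o: o["minutes"])
print(intended["route"])  # highway
```

The hard part of alignment research is exactly the gap between those two objectives: writing down the constraints we silently assume, for every situation the system might encounter.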
A Brief Summary of Virtual Reality Author: Milan Medić
If you haven’t heard about VR, you probably live under a rock or died 30 years ago. When someone mentions VR, we always picture those big goggles and some sort of weird-looking controllers. Surely that’s one way of looking at VR, but there are other examples, like the one at the Tribeca Film Festival, where they have a virtual reality installation called Famous Deaths.
But when did this term originate? Well, the first known use of the term was in 1958, in the English translation of Antonin Artaud’s book “The Theater and Its Double”. Since then, the term has been used in many science fiction novels and movies. But the first real-life virtual reality tech came during the ’90s, or more specifically in 1994, with a headset called the Sega VR-1, a motion simulator and arcade attraction. This headset looked more or less like the ones we have today. But is this truly the first device that revolved around virtual reality? Well, yes and no. It is difficult to pinpoint the exact origins of virtual reality, because it is hard to define what VR really is. With that in mind, the first known occurrence of a virtual reality device was in 1962. This machine, called the Sensorama, was a mechanical device. Its VR experience had five short films to be displayed while engaging multiple senses: smell, touch, sound, and sight. Now we’ll see how virtual reality devices really work. These devices, at least the consumer ones, usually consist of a headset and a couple of handheld controllers. The headset projects an image, and the controllers are usually what give us some form of tactile feedback, usually vibrations, but also let us move around and interact with the virtual world. Besides this simpler form of feedback, the device can provide feedback in the form of sound or even smell. First off, let’s start with the image and why things appear in three dimensions. The image that you see is not the same for
every eye. The left eye gets one angle of the image, and the right eye gets another. This is how you get parallax, depth, and shading effects.

Since we’re still on the topic of image quality, we need to talk about frames per second, refresh rates, and latency. That’s all fine and dandy, but what do these terms mean? Let’s start with frames per second. Frames per second (FPS for short) essentially represents how many images your graphics card can output in one second. The refresh rate is something a little different, but the two are closely connected: how many images your display can show in one second is its refresh rate, usually measured in Hz. Consider a game being rendered at 120 FPS on a monitor with a 60 Hz refresh rate. That means we’re losing half of the frames our GPU is putting out. This is why “tearing” occurs, a phenomenon where objects on the screen are split into two different locations horizontally, giving a “tearing” effect. This problem is remedied by a feature called VSync, which limits the frame rate your GPU outputs to your display’s refresh rate.

There are two more things that go into making your VR experience believable. The first is latency, the time it takes a pixel to switch. In more technical terms, this is the delay before a transfer of data begins following an instruction for its transfer. The fact that a latency of just 20 ms can make or break a VR experience is pretty astounding. The second is the field of view (FOV), the extent of the visible world at any given time. A human’s FOV is around 180 degrees, and with eye movement this number increases to 270.

Sound is the next thing that needs to be tackled when designing a headset. The sound and the way we perceive it in the real world depend on many factors: for instance, how far the sound source is from us, and whether or not there are any obstacles that might dampen the soundwaves.
Turns out that our brains map things they can’t see using sound. That’s why we get that 3D sound effect. One way developers are trying to
mimic this in the virtual world is by using 3D engines that are able to tie certain sounds to their respective objects. This 3D sound system isn’t anything new; in fact, Dolby has been using it since the 70s. The challenge of implementing it is still there, but the manufacturers are hard at work trying to bring this tech into the virtual world. All of these technical details raise questions about health hazards and immersion. We can split immersion into two parts: movement and controls. First off, movement. Some virtual reality tech can monitor your movement around the room with the use of specific sensors, and even help you avoid obstacles, like that couch you’ve been hitting with your toes for 20 years now. However, this limits the size of the world you want to immerse yourself in. Consider a first-person
shooter game. One of the key aspects is the ability to run behind cover, jump over obstacles and avoid enemy fire. Doing something like that is pretty challenging in a room. Developers have found ways to negate this effect to some degree, like Doom VFR, where you move around the world via teleport. On one hand, it is a really innovative way of moving around and experiencing the world in its richness, but unfortunately it comes at the cost of immersion. This all relates to my next point: controls. To be able to teleport around, you need to use the controllers specific to your headset. The problem with these controllers is that they’re usually clunky and you can’t interact with the environment in the way you would want. The interaction is detached from the object you want to interact with
and is similar to your PC’s mouse. There are some controllers that try to mimic guns, for example the ones from HTC or Oculus, but other than that it’s usually a stick with controls. Some companies are pushing for the use of smart clothing like gloves, but these are not available yet: the technical complexity needed for these devices to work is still too great, and we’ll have to wait and see how things go in the next few years. Facebook has already shown a VR-enabled glove intended to control its Oculus VR headset. Health hazards are another major issue with these headsets. Our brains learn from the feedback we get from our senses. This is really important for human development, especially at an earlier age. The tactile feedback you get from picking an apple off the ground teaches the brain what this action feels like.
But if you pick up a virtual apple in a virtual world, you don’t get the same tactile feedback as in the real world. This surprises our brains, and it turns out that our brains are built to minimize surprises: to maximize accuracy, the brain has to adapt to that sort of interaction. This is especially hazardous for children, since their brains haven’t yet developed to cope with these kinds of “surprises”. Virtual Reality has gained a lot of popularity in recent years, but the question remains: is this the future? That depends on a lot of things, chief among them the price. For your VR headset to work, you need a powerful PC or console along with the headset itself, which usually doesn’t come cheap. The cheaper ones use phones and cost around 100 dollars, but they don’t give the level of immersiveness of the more expensive ones. The top-priced ones cost from 300 to 600 dollars. These are all consumer-grade products, and with such a big price tag and not much to do except play some video games, there’s not a lot of appeal for regular customers to catch on. Depending on which statistics you look at, around 3 million VR units have been sold altogether since they hit the market. This is a somewhat reasonable number considering that virtual reality became popular only a couple of years ago. Another reason these headsets aren’t as popular is augmented reality. If you remember Snapchat’s spinning hot dog, you know what I’m talking about. Essentially, AR has two different approaches: the Snapchat one and the HoloLens one. The first is what we see on Instagram and Facebook with those rainbow-puking filters. It was first developed by Snapchat, but other companies such as Google and Facebook quickly caught on and implemented their own versions. The other way of experiencing augmented reality is through a HoloLens. This is basically the same type of headset as the VR ones we talked about, but with one key difference: the screen is see-through. You can see the world around you as you would with a normal pair of glasses, but with images projected on top of real surfaces. This tech is mostly used for research purposes and is not available to the wider public, but it hints at all the possibilities that this device can bring. The consumer market is a big part of what propels a technology into the mainstream. Currently, AR is beating VR in that category, but does that really mean VR is just a passing trend? According to big companies like Google, Microsoft and Facebook, no, no it’s not. These companies are putting a lot of effort into developing new and interesting ways of using VR, constantly refining the existing principles while adding new and exciting things into the mix.
One day we may even be able to feel a virtual apple, but until that day comes, we just have to sit, wait, and use that godawful rainbow filter on Instagram.
Top Tech Gadgets of the moment

Author: Simona Šutić
Have you ever thought about what you would do without all those gadgets in your life? It would definitely be much harder. With gadgets, we have all the information and control we need only a click away. Just think of how easy it is to adjust the lighting in your home using only one device, or how fun it would be to make a 3D model of whatever you want in just a couple of minutes. All of these appliances can make your life much more fun, easy and carefree. Here is a list of some of the top tech gadgets of the moment:
Smart Watches
Producers of smart watches have developed them into a cool gadget to have. Besides the obvious use of telling the time, they act as a front end for remote systems, such as a smartphone, communicating with them over various wireless technologies. They run different operating systems, and by pairing one with your Android or iOS device you can display all your messages, emails and other important information. They are often used in sports: a smart watch can track the user’s heartbeat and location through GPS and save the data, which makes exercising even more fun and easy. You can also use it for scheduling, organizing and even as an alarm. One of the plus sides of this smart device is that it is wearable, so you don’t have to worry about it falling out of your pocket or getting lost in your bag, since it is always on your wrist.
GoPro Camera

Now here is the perfect device for all the adventure and extreme sports lovers out there. GoPro cameras are among the smallest, yet some of the best for capturing all the perfect moments of your life. The camera has good image stabilization, so it can capture fast motion without losing video quality. GoPro cameras outperform most similar devices and allow anyone to make high-quality videos
without being an expert in this field. They are great for making memories from your travels. You can capture anything from hiking, scuba diving, riding a bike, or even just chilling by the pool. The GoPro company also makes drones that are meant to be used with their cameras. This probably isn’t a necessity for every GoPro user, but it is an amazing gadget for those who professionally make videos because you can film from great heights and capture the widest angle image.
Virtual Reality Sets

Virtual Reality, VR for short, is a computer-generated simulation of a realistic experience. The goal is to give the customer a lifelike experience, whether grounded in reality or sci-fi. It is used in a lot of fields today, but here we will talk about the VR that any person can experience. There are a lot of affordable VR sets today, among them the Samsung Gear VR, Google Daydream View, Sony PlayStation VR, and one of the best on the market currently, the HTC Vive. The prices go from 90$ up to even 600$. There are two types of VR headsets, mobile and tethered. Mobile headsets are shells with lenses into which you place your smartphone. The lenses separate the screen into two images for your eyes, turning your smartphone into a VR device. Examples of this kind are the Samsung Gear VR and Google Daydream View, and they are very affordable, with prices around 100$. Tethered headsets like the Oculus Rift, HTC Vive, and PlayStation VR are physically connected to PCs (or in the case of the PS VR, a PlayStation 4). Even though they have cables all around, putting all of the actual video processing in a box you don’t need to strap directly to your face means your VR experience can be a lot more complex. The use of a dedicated display in the headset instead of your smartphone, as well as built-in motion sensors and an external camera tracker, drastically improves both image fidelity and head tracking. Tethered VR sets do come on the pricey side, with prices varying from 400$ to 600$. All in all, this is a great gadget for all enthusiasts.

Bluetooth speakers

Probably one of the most useful things on this list, Bluetooth speakers are a fast and easy way to get music playing throughout your home and even on the move. The best thing about them is that they are portable and you can connect them to any device, such as your laptop or your smartphone, so you can play your music wherever you are. They vary in size, but they are usually small enough to fit into any bag, and they come with batteries that can last for hours without recharging. They can’t produce sound of the same quality as a Hi-Fi setup, but they do offer more utility. The most important features to go by when choosing the right speaker are sound quality and battery life. Some of the best Bluetooth speakers right now are the UE Boom and UE Wonderboom, Bose SoundLink Revolve, JBL Pulse and many more.
3D Printers
Now you might be wondering how a 3D printer can be an affordable gadget, but it is. Some brands make affordable printers that are very easy to use, like XYZprinting, Monoprice, M3D, and others. Just imagine being able to make a model of anything you wish in a short period of time, all by yourself. The price range is 200-600$. If this option is too expensive for you, there is another solution: a 3D pen, a standalone device that can create 3D printed objects without the need for computers or computer-aided design software. You can create anything, anywhere.
Amazon’s Kindle

Over the last couple of years, people have turned to reading books on devices such as smartphones or tablets rather than physically flipping the pages of a real book. Amazon noticed this and designed a series of e-readers called the Kindle. Through the Kindle store, customers have access to nearly 6 million e-books, newspapers, magazines and other digital media. The Kindle operating system uses the Linux kernel with a Java app for reading e-books. Kindle devices support dictionary and Wikipedia look-up when you highlight a word in an e-book, and the font type, size, and margins can be customized. It is also possible to listen to an e-book, which is convenient for people with vision impairment or blindness. All in all, it is a good device, replacing a single book with a lightweight tablet that holds millions and millions of different books.
Google Home

Didn’t you always want one of those home assistants who know everything? Google has developed a brand of smart speakers called Google Home. These speakers let users speak voice commands to interact with services through Google’s intelligent personal assistant, Google Assistant. A large number of services, both in-house and third-party, are integrated, allowing users to listen to music, control playback of videos or photos, or receive news updates entirely by voice. Now you never have to feel alone: you can always talk to your Google Assistant. Google Home devices also have integrated support for home automation, letting users control smart home appliances with their voice. It is an all-in-one kind of gadget that could be very useful to have around.
Mining malvertisement
Author: Károly Gabányi
Hackers create malware to cause harm by spying on people, stealing, ransoming, or taking control of computers to use their resources. In some cases, the criminals can collect money through nothing but the browser. Installing a cryptocurrency miner on an ordinary computer is not worth it, because it cannot produce as much money as its power consumption costs. The trick is to use many other users’ power resources and thereby offset the costs, creating money for yourself. That is already profitable, at least until law enforcement finds you. Cybercriminals have succeeded in cryptocurrency mining by installing malware on users’ computers and using them to mine money. To use an ordinary user’s computer like this, the machine must be infected, or a vulnerability has to be exploited. But why would they bother with infection and bypassing security, if all they need is a browser with JavaScript enabled?
Cryptocurrency mining

In the case of bank money, the bank issues the currency and keeps the ledgers. In the case of cryptocurrency, an algorithm does both. Cryptocurrency is transferred between peers, and transactions are recorded on a digital public ledger called the blockchain. Both the transactions and the ledger are secured by cryptography.

Mining cryptocurrency requires high computational power

Users have their own private passwords to use as keys. Transactions are matched up by public codes which belong to those private passwords. All users have access to the blockchain. A transaction is sent to all the programs that try to validate transactions and blocks. The users who run special software to solve the cryptographic task are called miners. The main purpose of miners is to maintain the ledger, but the first one to solve the task gets a few coins as a reward. Miners can also pool their computing power and share the collected coins.

A browser with enabled JavaScript was the attacker’s target

In this case, the criminals mine less popular cryptocurrencies, because popular ones like Bitcoin require special hardware to make mining worthwhile.

Investigation

A group of criminals bought an advertisement service and used it to distribute malicious adverts that run JavaScript to use the users’ resources for mining cryptocurrency. The malicious script was named “JS/CoinMiner.A” by ESET, the company that detected the new method. The user might notice the performance load, but the attackers targeted video streaming and in-browser gaming websites, where users are more likely to accept higher resource consumption. ESET analyzed its telemetry and identified a partially distributed malvertising campaign. CPU-intensive tasks are prohibited by most advertising networks to protect the user experience, but not in this case! Some of the scripts are multithreaded and others use only one thread; a main JavaScript file launches the workers, and the workers mine different cryptocurrencies: Feathercoin, Litecoin, and Monero. Feathercoin and Litecoin were inspired by Bitcoin, but they use different hash algorithms to discourage custom hardware; mining them requires CPU power and lots of memory. Monero features stronger privacy than Bitcoin: its blockchain is not transparent, and its hash algorithm needs a lot of memory.

CPU consumption of an affected website [3]
The affected countries are Russia, Ukraine, Belarus, Kazakhstan, and Moldova; the websites injected with the malicious code shared a common language. The scripts can be executed through malvertising or through a hardcoded snippet of JavaScript. Malvertising means buying traffic from an ad network and distributing malicious JavaScript instead of an advertisement; the hardcoded variant is when the website itself embeds the malicious script.
Law

Mining cryptocurrency in browsers is not a new idea. In the past, several other services provided browser Bitcoin mining. These services were shut down because the mining process wasn’t efficient on regular CPUs and GPUs.
Your opinion? What do you think is happening in your own country? Are your lawmakers and law enforcement able to adapt and fight against ‘innovative’ criminality? Would they react in time and protect the users?
Bitcoin is the best-known cryptocurrency

In 2013, a company run by MIT students offered a web service to mine Bitcoin. Instead of displaying ads, websites could inject the mining script and mine Bitcoin. Because they used users’ computing power without their consent, the students had to face the New Jersey Attorney General, and in the end the web service was shut down. In May 2015, Attorney General John J. Hoffman said: “But innovations that affect consumers must operate in compliance with the law. No website should tap into a person’s computer processing power without clearly notifying the person and giving them the chance to opt out – for example, by staying away from that website.” Division of Consumer Affairs Acting Director Steve Lee added: “However, this potentially invasive software raised significant questions about user privacy and the ability to gain access to and potentially damage privately owned computers without the owners’ knowledge and consent. As privacy threats become more and more sophisticated, State law requires us to protect the interests and safety of New Jersey consumers.”
Sometimes cybercriminals have to face the law
Where do you see your startup in 10 years

Author: Milan Medić
We have all heard success stories from Silicon Valley: young, aspiring people following their dreams and sacrificing everything in order to make it big. We have seen what the wild wild West looks like on the screen with shows like Silicon Valley. But are those articles and shows deceiving, or is there truth to them?
When creating a startup, you need to start with a good “idea”: something that will generate a lot of users and attract investors. The show Silicon Valley portrays this step in a very truthful way. On the show, you can see a lot of people trying to pitch ideas to investors at competitions, the most famous of them being TechCrunch Disrupt. These entrepreneurs always claim that their product is something people will want and that it will revolutionize the way we interact with the world. Too bad most of those ideas are pretty rubbish. Some would say it’s easy to bash somebody else’s idea, and that’s true. But the bigger picture shows that a very big chunk of startup
ideas don’t have the world-changing implications their creators want them to. A great example of those “ideas” is the Washboard startup, which tried to make money by mailing people coins. The premise was that you’re always out of coins, so what would be a good way of getting them? Oh yeah, the company should mail you 20$ worth of coins every month. So, what’s the problem with this? Well, the cost of mailing you 20$ every month was 27$. Naturally, the company failed sooner than anticipated. After passing the “idea” step, it’s time to turn your idea into a product. You gather a good team, all experts in their respective fields, ready and waiting to start working on the “next big thing”. Fast forward a couple of months and all you’re left with are soaked tissues and a broken dream. What happened? It all comes down to “it’s not my job”. A lot of startups fail because nobody wants to get involved with business processes or business development. The CEO says he only needs to lead, the lead engineer only wants to design the product, and so on. Nobody is focused on the nitty-gritty details of actually managing the company as a growing business. The entrepreneurs who see that those details play a crucial role in their company’s development have a pretty good chance of succeeding and growing their company. Which leads me to the third important stepping stone: growth. This is how much your company is able to grow and generate funds. A great idea will skyrocket and generate enough funds to keep it going for a long, long time. A company that doesn’t grow enough won’t generate funds, and if it doesn’t generate funds it won’t be able to sustain its business for long. Even if your startup doesn’t generate enough funds, you should still be able to recover and get back on your feet. Being able to change your product, take a new marketing approach, shift the business or even start from the ground up will prove useful at some point.
You have to be able to recover from blows and work on the issues together as a team. Versatility isn’t just about the skillset, it’s also about the mindset (Forbes). Having a good and versatile team by your side may prove more valuable than originally anticipated. So, in light of all this, where do you see your startup in 10 years? It’s difficult to predict whether your startup will be successful or not. There are a lot of factors I haven’t covered that can and will impact your company in ways you won’t always be able to predict. But having a good sense of the aforementioned stepping stones will give you a feel for the most common ways you can fail. If you want to make a startup, be wary of these obstacles and keep your head up high. You never know, you might be the next Elon Musk!
Web development 101

Author: Milan Medić
Many people spend their days surfing the web without really knowing who built the webpage they’re currently browsing or how the online store they buy from really works. The short answer is: magic. If you’re not satisfied with that answer, here’s another one: web development. This answer looks simple enough, but looks can deceive. Web development actually encompasses a lot of things, from the way your website looks to its “behind-the-scenes” structure. Taking that into account, one would assume that those areas are handled by different people. And that certainly is true, from a certain point of view. When manufacturing was first introduced in the old days, work was split between people who possessed the skills required at different points in the manufacturing process. For example, say we are making a toy car: I would make the shell of the toy car, you would put on the wheels, and a third person would paint it. This speeds up the manufacturing process but requires more workers. Smaller shops had just one or two workers who did everything a five-man team would do, but of course more slowly. Just like the Force, your website is split into two distinct parts: the light side and the dark, the Jedi and the Sith. But what about the middle, the Bendu? You see, most of the time web development is split between the frontend, the backend, and a small(er) number of developers called full-stack developers. So what does each of these guys do? First and foremost we’ll start from the front, because that’s the first thing you see when you go to a webpage. Frontend developers are usually the ones who design the look and feel of the webpage. The responsibilities of a frontend developer may vary from job to job. For example, if developers are tasked just with designing the look of the webpage without touching any code, they are called web designers. In other jobs, they may assume the position of a UI/UX designer.
These people are tasked with creating the User Interface, or the look of your website, while at the same time studying and researching how people use the site and making the necessary changes through a lot of testing. People working as frontend developers usually need to know HTML, CSS and, depending on the job requirements, JavaScript. Hyper Text Markup Language (or HTML
for short) is a markup language invented in the early 90s by a guy named Tim Berners-Lee. HTML represents the bones of your webpage: the text, the paragraphs, the images and so on. The sad days of gray, unappealing web pages are long gone thanks to Håkon Wium Lie, who introduced us to Cascading Style Sheets (or CSS for short). CSS is responsible for making your webpage look pretty and fashionable, changing fonts, colors and even the layout of the text or images on your webpage. Last but not least comes the ugly duckling of programming languages, JavaScript. Invented by Brendan Eich in 1995, it was developed under the name Mocha, but at launch the name was changed to LiveScript, which in turn later became JavaScript. This is where the age-old question comes into play: are JavaScript and Java related? No, they are not. The name JavaScript was chosen because it would make the language more popular, since Java had gained huge popularity in the programming community at the time. Introductions aside, what does JavaScript (or JS for short) do? JS is responsible for making your webpage interactive. For example, if you want your
text to appear out of nowhere when you click a button, you would use JS. Things get a little more complicated from here. You see, a lot of developers use these “things” called frameworks. Frameworks exist to make a developer’s job easier by providing certain features that, for example, require less code from the developer. JS represents a kind of framework hell, because it spawned a lot of frameworks that made JS easier to use. The language itself isn’t all that confusing, it just has a lot of strange quirks that some people find annoying, completely unnecessary or downright awful. So, to combat these quirks, developers started making frameworks. Some of the most well known are Angular.js, React, Vue, Ember and so on. As a result, many frontend job listings clearly ask for a specific framework: you might find an offer, but only for people who know React or Angular. Taking this into account, some people say it’s best to focus on just learning the frameworks, since there are not many job offerings based on plain JS. That has some merit, but in recent years there has been a resurgence of plain JS
developers, since many of the quirks that plagued the language have been fixed. Also, JS has learned from its “children” and has adopted some much-needed features. So you could always argue that you should learn plain JS first and then jump onto the framework train. Either way, there is no right or wrong way. This is all fine and dandy if you want to create a static webpage with no other functionality. But what about those times when you need to create a website with an online shopping feature? This is where the backend developers come in. Backend developers are responsible for the “behind-the-scenes” workings of a website. The database that stores all the articles in a webshop was probably made and maintained by a backend developer. They are also the ones responsible for safekeeping your credentials and confirming your identity when you purchase something. That “backend” runs on a server, which is why backend development is sometimes called server-side development. So, which technologies do these people use? While usually knowing the basics such as HTML and CSS, they are also quite skilled with
a server-side language such as Python, Ruby, C# or PHP. They also utilize databases such as PostgreSQL, MySQL, Oracle SQL and others. With all these technologies to choose from, there must be some encompassing term for all the technologies used in a certain project, and that term is the stack. Two of the more popular ones are LAMP and MEAN. LAMP consists of Linux, Apache (the server), MySQL (the database) and PHP (the programming language). MEAN in turn consists of MongoDB (the database), ExpressJS and AngularJS (frameworks) and Node.js (the server-side JavaScript runtime). These are two of the more common stacks, although there are many others. A stack is chosen depending on the size of the project or the needs it has to meet, so it’s more than likely that a backend programmer knows more than one database or programming language. There are times when both frontend and backend are done by the same person, and that person is a full-stack developer. So, if it can all be done by the same person, why split it? First of all, it’s really difficult for one person
to know it all, and it takes a lot of time and effort. And as we all know, time is money, and with the ever-changing landscape that is programming, it’s really hard to stay up to date with the latest technologies. This is one reason, but there are others I would like to mention. The first is ease of maintainability. It’s easier to maintain a webpage when many people have worked on the project and each has a good understanding of how their part works, as opposed to having one person do everything and try to keep it all together. Also, when a worker quits, it’s easier to find a replacement, since that person was tasked with doing just one part of the webpage and you still have 90% of the team that was working on the rest of it. If the one person who built the webpage by himself leaves, the company has a hard time finding a suitable replacement who would know how to maintain said website. Lastly, you may wind up in a situation where many frontend apps are linked to one backend app, and vice versa. Having said all of this, it would be a good idea to sum up what a project looks like when split between frontend and backend developers. First we have Mark, a web designer, then Susan, a frontend developer, and Igor, a backend developer. The three of them are making a registration page, for example. The look of the web page starts with Mark. He uses Photoshop or InDesign to design the look of the website: choosing fonts, colors, input fields, buttons, their layout and so on. When Mark finishes his design, he sends it to Susan with the specification of the project. Now we come to coding. Susan’s job is to convert Mark’s idea into a website, so she starts coding, utilizing HTML, CSS, and JavaScript to achieve the desired effect. After she’s done with her work, we have the webpage with all its buttons and fields, ready for use.
You start typing the required data into the fields and submitting them, but nothing happens. Why? Well, Igor was watching cat videos and forgot to do his work. So, it’s time for Igor to do his magic. He starts by writing validation for errors like an invalid email, password or username. If your data is entered correctly, it moves on to the database Igor made, and you, the user, are able to register. Now that web page needs to be maintained and scaled. Again, these three will do what’s required of them and scale the app. But what if Susan quits her job? No problem: here comes Angela, a new frontend developer in your company. She has the required skill set, so she only needs to get acquainted with the code Susan wrote and she’s good to go. The backend is still maintained by Igor, and a new design is being brewed up by Mark.
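The kind of server-side validation Igor writes might look something like this minimal sketch. The specific rules (the simple email pattern, the eight-character password minimum) are illustrative assumptions, not a standard:

```javascript
// A minimal sketch of server-side registration validation.
// Rules here are illustrative assumptions, not any official standard.
function validateRegistration({ username, email, password }) {
  const errors = [];
  if (!username || username.trim().length === 0) {
    errors.push('username is required');
  }
  // A deliberately simple email shape check, not a full RFC 5322 parser.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email || '')) {
    errors.push('invalid email');
  }
  if (!password || password.length < 8) {
    errors.push('password must be at least 8 characters');
  }
  return { ok: errors.length === 0, errors };
}

console.log(validateRegistration({
  username: 'ada',
  email: 'ada@example.com',
  password: 'hunter2hunter2',
})); // → { ok: true, errors: [] }
```

Only after these checks pass would the data be written to the database and the user registered.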
Focus
The dollar of cryptocurrency
Author: Moritz Knüppel
With the recent price spikes to over €10000 per Bitcoin at the end of 2017, a lot of attention has been drawn to the world's largest cryptocurrency. Major news networks all across the world started reporting on both the technological and economic implications of this supposedly new technology, and new users signed up to cryptocurrency exchanges in such numbers that many of them were forced to close their registrations because they lacked the server capacity to handle the sudden influx.
Today, however, we want to take a step back from the hype and look at the history of Bitcoin, the underlying technology, some of the recent controversy, as well as future technological innovations.
In 2008, a person or group under the pseudonym of Satoshi Nakamoto published a paper entitled "Bitcoin: A Peer-to-Peer Electronic Cash System", laying out the concept of a virtual currency that does not require a central controlling entity such as a bank or PayPal, but instead functions on a pure peer-to-peer (P2P) basis. Previously, the idea of pure P2P currencies had failed on the problem of double spending, that is, paying with the same money in two different places, since electronic files can simply be copied, unlike gold, which cannot be replicated, or paper currency, which is difficult to reproduce. Satoshi's proposed solution was described as an "ongoing chain of hash-based proof-of-work, forming a record that cannot be changed without redoing the proof-of-work", a concept that we know today as the blockchain.
Blockchain 101
To understand the concept of the blockchain in the context of cryptocurrencies, we need to be familiar with the following terminology:
Bitcoin address: a Bitcoin address is simply a public key from a public-private key pair. The public key is used to receive funds and can, for example, be published on a website, while the private key is used to authorize transactions and must be kept private.
Transaction: a transaction describes the transfer of Bitcoin from one address to another. In order for this transaction to be recognized as valid by the network, it is submitted for verification into the mempool.
Mempool: a list of all unverified transactions that have been sent to the network. Using P2P methods, this list is kept in sync across the network. This pool is filled with new transactions being broadcasted
and is emptied by successful verification of transactions by miners.
Bitcoin client: a program used to generate new addresses, broadcast transactions and verify that mined blocks do not violate the rules of the Bitcoin network.
The functionality of the blockchain in Bitcoin can be explained by the following scenario: Alice has one Bitcoin and wants to send it to Bob. Bob sends Alice his Bitcoin address, and Alice uses her private key to authorize a transaction from her own Bitcoin address to Bob's. This transaction is broadcasted into the Bitcoin network, which is a purely decentralized P2P network not controlled by any institution or government, and added to the mempool. Carol is running a Bitcoin miner, a program that is looking for the solution to what can be described, in a simplified manner, as follows:
hash_function(hash_of_previous_block, transactions_to_be_verified[], nonce) < threshold
[Figure: visualization example of a blockchain]
The hash value of the previous block is static for each new block to be mined; the list of transactions is chosen by the miner from the mempool; the hash function is hardcoded into the Bitcoin clients; and the threshold is automatically adjusted by the network so that, on average, a new block is found every 10 minutes. In order to generate a result of the hash function that is smaller than the threshold, the miner adds a few bytes, called the nonce, to the hashed data. In a brute-force manner, the miner now attempts to find a nonce that solves the inequation above. Depending on how small the threshold is at the time of mining, this may take a few million tries or, as of today, around 10^22 attempts. At some point Carol, or another miner, will have found a nonce that solves the puzzle. She publishes this information, which we now call a block, to the network. Because the only way to solve the problem is through brute force, Carol has proven that she has used a lot of computational
power (usually called mining power), and hence hardware cost and electricity, to find this block. As a reward, she is allowed to send herself a certain amount of Bitcoin "out of nowhere". The new block is checked for validity by hashing the data transmitted by Carol and checking whether it fulfills the inequation above; the transactions included in the block are removed from the mempool and all miners begin to attempt mining the next block. Since a block contains the hash value of the previous block, the two are linked together. It is not possible to change anything in the previous block without changing the resulting hash value, which is stored in the next block. The blocks thereby form a linked chain, hence the term
"blockchain". Those who have taken classes in C may feel they have seen something similar before: a linked list. Through these two cryptographic means, Alice signing the transaction with her key and Carol committing massive hashing power, the network can now be sure that this transfer is valid. From now on, Bob (or rather, his Bitcoin address) holds the Bitcoin and can send it on again.
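The mining loop and the hash linking described above can be sketched in a few lines of Python. This is a toy model, not the Bitcoin protocol: real Bitcoin hashes a binary block header twice with SHA-256, and the difficulty here is deliberately tiny so the example runs instantly.

```python
import hashlib
import json

def mine_block(prev_hash, transactions, difficulty_bits=16):
    """Brute-force a nonce until the block hash falls below a toy threshold."""
    threshold = 2 ** (256 - difficulty_bits)  # real Bitcoin adjusts this automatically
    nonce = 0
    while True:
        payload = json.dumps([prev_hash, transactions, nonce]).encode()
        digest = hashlib.sha256(payload).hexdigest()
        if int(digest, 16) < threshold:
            return {"prev_hash": prev_hash, "transactions": transactions,
                    "nonce": nonce, "hash": digest}
        nonce += 1

genesis = mine_block("0" * 64, ["coinbase -> Carol: 12.5 BTC"])
block1 = mine_block(genesis["hash"], ["Alice -> Bob: 1 BTC"])
# Each block stores the previous block's hash, which is what links the chain:
assert block1["prev_hash"] == genesis["hash"]
```

Changing anything in `genesis` would change its hash and thereby invalidate `block1`, which is exactly the tamper-evidence property described above.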
Implications, problems and limitations
The blockchain, as employed in Bitcoin, is a long list of all transactions that ever took place on the network since its beginning, packaged into blocks of at most 1MB. This leads to two important implications:
Bitcoin is absolutely NOT anonymous. Handing cash from person to person can be anonymous if there is no record of it, such as receipts or photographic evidence. Sending Bitcoin is based on the idea of a tamper-proof ledger, a long list of all transactions that cannot be changed. If Alice sends Bob a Bitcoin, that transaction will be publicly available as long as Bitcoin exists. Bitcoin is instead pseudonymous. As Bitcoin addresses can be generated at will, there is no required link between an address and its user. It is visible that an address received a Bitcoin, but not who controls this address, i.e. who has the private key, unlike with, for example, regular bank accounts or PayPal accounts, which require verification to be set up.
The size of the blockchain is ever-expanding. At the time of this writing, the blockchain is about 150GB in size. There are now more transactions broadcasted in a given time than there is space for in the blocks. At the time of this writing, the mempool regularly holds 130MB of open transactions, meaning that if no new transactions were broadcasted, it would take 130 blocks, almost a full day, to process all of them. This size issue makes it impractical to run a full Bitcoin client on, say, your phone. Luckily, there are solutions, called lightweight clients, that do not require the entire blockchain to be downloaded; instead, they receive information about address balances from, and broadcast transactions via, a server that does run a full client.
The Scaling Debate and Possible Solutions
The biggest issue with Bitcoin at this moment is its scalability. With an average Bitcoin transaction being around 250 bytes in size and a block, mined every 10 minutes, being up to 1MB in size, we get the following calculation, where TPS is the (maximum) number of transactions per second:
TPS ≤ (1MB / 250 bytes) / 10 minutes = 4000 transactions / 10 minutes = 400 per minute ≈ 6.67 per second
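The back-of-the-envelope bound above is easy to check in code (the figures are the article's simplified averages, not exact protocol constants):

```python
# Sanity-checking the throughput bound: how many average-sized transactions
# fit in one block, and how many blocks per second the network produces.
block_size = 1_000_000   # bytes, the 1MB block size limit
tx_size = 250            # bytes, average transaction size
block_time = 10 * 60     # seconds between blocks

tx_per_block = block_size // tx_size   # transactions that fit in one block
tps = tx_per_block / block_time        # maximum transactions per second

print(tx_per_block, round(tps, 2))  # 4000 6.67
```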
This means the Bitcoin network is not capable of handling more than 7 TPS. Comparing this to the credit card network VISA, with its average of 2000 TPS (4000+ TPS at peak times), we realize that Bitcoin in its current form cannot compete as the global payment system it was originally thought up to be. While various positions on the scaling debate exist, the following main opinions can be distinguished:
Reinterpret the meaning of Bitcoin. While originally envisioned as a P2P cash system, some argue that Bitcoin, with its high divisibility, limited supply and ability to be safely stored, is closer to an investment asset like gold than to a currency. Followers of this train of thought argue that Bitcoin does not need a high TPS, because Bitcoin transactions should be rare in the same sense that physical gold rarely changes location. Proposed solution: do nothing.
Modify the parameters of the TPS calculation. In order to increase the TPS, all three factors on the right could potentially be altered. Two distinct categories can be identified based on their outcome:
Alter the (effective) size of transactions: proponents of the so-called Segregated Witness (SegWit) solution have suggested splitting each transaction into its two components: the pure transactional data, identifying which address sent how much to whom, and the so-called witness data, consisting of digital signatures. In the SegWit solution, a block is filled with up to 1MB of transactional data, with the corresponding witness data attached at the end. While technically this creates a block that is bigger than 1MB (effectively a little under 4MB), clients have the option to download only the transactional data, keeping the block size at effectively 1MB for them. Proposed solution: SegWit.
Increase the blockchain size: beyond or as an alternative to SegWit, it has been proposed to either alter the block time from 10 minutes to e.g. 5 minutes, or to increase the block size from 1MB to e.g. 8MB and beyond whenever it becomes necessary. These solutions would have the consequence that the blockchain would grow in size much more rapidly, making it nearly impossible to run a full Bitcoin client yourself. The consequences of this have - literally - split the Bitcoin community, with one side arguing that this would lead to unacceptable centralization and the other arguing that having only Bitcoin miners and large companies run dedicated full clients is fine. Proposed solution: Bitcoin Cash.
Add scalability in a second layer. Bitcoin can be seen as a network protocol, comparable to the IP protocol that acts as the basis for our current internet. In the same manner that TCP and UDP build on top
of IP, it is theoretically possible to build a protocol layer on top of Bitcoin that would utilize the Bitcoin network's functionality but reach much higher scalability. Proposed solution: Lightning Network.
Bitcoin, Core, Cash, Legacy
While those seeing Bitcoin as a gold-like asset are fine with maintaining the status quo, those wanting a working digital currency are effectively divided between altering the block size and scaling on a second layer. This division led to a so-called hard fork in the Bitcoin network. On August 1st 2017, a group of modified Bitcoin clients, which until that point had been part of the regular Bitcoin network, started to reject blocks coming from unmodified Bitcoin clients, effectively creating a separate network. The second crucial alteration was that these clients would support blocks with a size of up to 8MB. As blocks of this size are incompatible with the traditional clients, they are rejected by them. Effectively, two mutually incompatible blockchains now exist, one with a block limit of 1MB and one with 8MB, effectively
creating a separate cryptocurrency. The terminology for these depends on which side of the debate is asked. Proponents of the 1MB blockchain argue that their chain is the "real" Bitcoin and hence should be called that, while the other should be called either Bitcoin Cash or - to distinguish it further from Bitcoin - Bcash. On the other hand, proponents of the 8MB chain argue that their chain makes Bitcoin usable as a currency again, reinstating the vision of Satoshi Nakamoto, and hence that they are the "real" Bitcoin, while the 1MB chain should be called either Bitcoin Core or Bitcoin Legacy. Depending on the source, the definitions vary, but the consensus among third parties at this point appears to be that the 1MB chain is called Bitcoin and the 8MB chain Bitcoin Cash. For the sake of consistency, this is the terminology used in the following paragraphs.
[Figure: a strongly abstracted visualization of Lightning payments via multiple relay nodes, made to show the analogy to IP routing]
Lightning Network
The second layer scaling solution proposed for Bitcoin is called the Lightning Network (LN). While LN itself is incredibly complex, here is a strongly simplified attempt at an explanation: imagine two parties conducting transactions with each other, e.g. you and your local coffee shop, where you buy a coffee every morning. You don't want to pay with cash every morning. A possible solution could be the following: you buy a box that can be locked with three locks and that requires at least two of the keys to open. The coffee shop gets one of the keys, you get one, and a mutually trusted third party gets the third. You place €100 into the box and all parties lock it. Now every morning, when buying a coffee, you and the coffee shop both keep a list of how much money you
already spent. This goes on, let's say, 30 times, after which you have spent your €100 or you want to get the leftover money out. Now you and the coffee shop open the box together and each party gets their share. Instead of doing 30 cash transactions, you effectively did two: placing the money into the box and taking the money out of the box. If either party is unwilling to open the box, the third party helps to open it. This is essentially the approach of the Lightning Network. Both parties place an amount of Bitcoin into an address (called a "payment channel") for which two out of three keys are required to retrieve the funds. You have one, the other party has one, and the third is not held by an individual but is a so-called time-locked signature, meaning that only after a predefined amount of time is it possible to retrieve the funds using only your or the other party's key. All of this is recorded on the blockchain. Using cryptographically signed transactions, you and your coffee shop now send "rights" to that money back and forth. At each point in time, you have a digitally signed balance statement of who owns how much of the Bitcoin in the payment channel. This goes on until both parties agree to release the money, or until the predefined time is up and one party wants to release the funds. At that point, one of you publishes the latest balance. After 30 minutes, each party is credited with their respective amount. What if somebody published an old balance to steal Bitcoins? The other party can publish the more current balance and - as a penalty to the cheater - receives all the Bitcoins in the payment channel. The network part of LN begins when we link these payment channels together. You have a payment channel with your coffee shop, the coffee shop with Dave, and Dave with Eve. If you want to send a Bitcoin to Eve, you can either open a new payment channel with Eve or give the coffee shop one Bitcoin, which then gives one Bitcoin to Dave through their common
payment channel, after which Dave sends one Bitcoin to Eve through their common payment channel. All of this is ensured cryptographically. The more payment channels exist, the more likely it is that you will find a path from yourself to the person you are trying to send money to. The routing through different intermediary persons can be compared to the way IP packets are transmitted through multiple intermediary nodes on their way to their destination.
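Finding such a chain of payment channels is essentially path-finding in a graph. A minimal sketch, assuming a hypothetical toy channel graph and ignoring fees, channel capacities and onion routing, could look like this:

```python
from collections import deque

# Hypothetical channel graph: an edge means an open payment channel.
channels = {
    "You":        ["CoffeeShop"],
    "CoffeeShop": ["You", "Dave"],
    "Dave":       ["CoffeeShop", "Eve"],
    "Eve":        ["Dave"],
}

def find_route(graph, src, dst):
    """Breadth-first search for a chain of payment channels from src to dst."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for peer in graph[path[-1]]:
            if peer not in visited:
                visited.add(peer)
                queue.append(path + [peer])
    return None  # no route: a new channel would have to be opened

print(find_route(channels, "You", "Eve"))
# ['You', 'CoffeeShop', 'Dave', 'Eve']
```

The real Lightning Network layers cryptographic guarantees on top of each hop, but the routing intuition is the same as in this sketch.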
Future Outlook
Which technology will end up prevailing cannot be said. With the crypto market becoming a multi-billion-dollar and probably soon multi-trillion-dollar economy, the race for the best technology for every problem is on. Will it be a second-layer off-chain scaling solution such as the Lightning Network? Will it be increases of the block size into potential terabytes? Or will it be something completely different, like cryptocurrencies such as IOTA or RaiBlocks, which stopped using blockchains altogether in favor of other concepts?
Only time will tell.
Projects
EESTech Challenge
The second year of EESTech Challenge gets underway! EESTech Challenge is the technological competition organized by EESTEC that aims to create opportunities for European students to gain knowledge in the field of EECS and develop a professional network. Across Europe, 24 different cities and universities that are part of the EESTEC network are involved in the competition. The competition consists of two levels: Local Rounds and the Final Round. Local Rounds will be conducted in 24 cities: Ankara, Antwerp, Athens, Aveiro, Belgrade, Catania, Chemnitz, Craiova, Delft, Dublin, East Sarajevo, Gliwice, Krakow, Ljubljana, Milano, Novi Sad, Patras, Sarajevo, Tampere, Tirana, Trieste, Tuzla, Xanthi and Zurich. Teams will consist of three members, who should demonstrate not only their knowledge of Big Data analysis but also motivation, dedication, and team spirit. Those skills should guide the teams towards victory! The winning team of each Local Round will pass to the Final Round, where they will compete with their fellow students from other Commitments. The Final Round will be held in May 2018 in Novi Sad. Last year was the initial year of EESTech Challenge. The EC team worked very hard to set the basis for future generations. With 16 Local Rounds and the Final Round held in Zurich, 320 students got an opportunity to gain knowledge and compete in the field of supervised machine learning. EESTech Challenge managed to make it count from its first year: a unique brand was created, something legendary was initiated. Thanks to the successful first year, we were able to expand the competition to 24 Local Rounds. This year we aim to reach the project's full potential.
With the right vision, we are aiming to create new opportunities for students and spread the importance of technology in a real-life environment. EESTech Challenge is shaping students' futures by giving them knowledge and necessary experience. By focusing on an important topic such as Big Data analysis, we are connecting three very important groups: companies, universities, and students. EESTech Challenge is something bigger: all our Commitments united together, striving to achieve a recognizable project and spread the EESTEC spirit all over Europe. This project can and will influence each active EESTECer's life; it will have an impact on Commitments, it will shape EESTEC internationally, it will affect Europe. Taking into consideration crucial factors such as trends in the technological industry, input from university professors and researchers, and a survey among EESTECers, Big Data analysis is the best topic for this year. Big Data became popular after 2010, when companies realized that they needed to store huge amounts of information. After that, everyone was interested in Big Data, but no one was able to give an exact definition of it. Why do we need Big Data?
Cost reduction
New products and services
Faster and better decision making
Big Data is data that cannot fit on a single machine (desktop, laptop, server etc.). It is not defined only by volume, as it may seem; it also needs to be heterogeneous and generated at high speed. Big Data is expanding on three fronts: Volume, Velocity, and Variety. Volume refers to the size of the data. It is increasing all the time, allowing us to store as much information as possible. Volume is important both for private users (storing movies, music etc.) and for companies. Velocity is another factor. The flow of data is massive and continuous. Big Data systems need to handle the speed at which the data comes in and possibly automatically apply different techniques to investigate and analyze it.
It is very important to have information in real time and to have it as quickly as possible. Variety enables us to store different types of information. You can have a range of data (sensor data, video, audio, text etc.) in the same place. In order to create as objective a competition as possible, the EESTech Challenge Online Seminars team organizes several Online Seminars (webinars), employed as a free educational tool. The Online Seminars are delivered through an online platform, both for EESTECers and non-EESTECers, to provide them with the chance to get to know the field of Big Data and gain basic knowledge before the Rounds. Since EESTEC emphasizes education and self-improvement, this time too we would like all of our colleagues to benefit from this project by expanding their knowledge. The first Online Seminar was held on the 13th of December. The topic was "Introduction to Big Data Analytics" and it was conducted by Dr. Konstantinos Pelechrinis, an associate professor at the School of Computing and Information at the University of Pittsburgh. According to YouTube statistics, we had 426 real-time viewers and have so far reached 649 views in total. Compared to last year's statistics, this is a rise of 63%. There will be two more seminars, one in January and one in February. Each seminar will have a more advanced topic than the previous one and a different lecturer. The goal is to provide participants with adequate knowledge in the field of Big Data. With the aim of gathering European students around a technological topic and providing both an educational and a competitive atmosphere, EESTech Challenge expresses its full potential.
ARE YOU READY FOR THIS CHALLENGE?
EESTEC Soft Skills Academy
From 2010 until January 2018, 65 Local SSAs were organized
First SSAs were organized in 2010 in Belgrade and Krakow
19 cooperating Commitments in the academic year 2017/2018
Around 850 people in the academic year 2016/2017 attended Local SSAs in Europe
Approximately 72 trainers in local SSAs from the first round in the academic year 2017/2018
Around 340 hours of training sessions conducted in local SSAs from the first round in academic year 2017/2018
[Timeline of SSA events: Belgrade 2016 (October-November), Athens 2016 (December-January), Tuzla 2017 (March-April), Xanthi 2017 and Thessaloniki 2017 (May-June)]
EESTEC Soft Skills Academy (SSA) is an international project organized by the Electrical Engineering STudents' European assoCiation (EESTEC). The project is divided into two parts: the international part and the local part. The local part consists of EESTEC Commitments organizing events named SSA in different European cities. The international part is a team responsible for making sure the events follow the standards and for facilitating collaboration and communication between the local events. SSA provides a unique opportunity for the further development of ambitious students through non-formal education, giving them skills that will be expected of them in their future workplace. Every local event provides training sessions on certain soft skills topics (personal and social skills). These sessions are held by certified EESTEC trainers, HR representatives of companies and other experts in the field of soft skills. At the end, participants are provided with CV supplements. There is no participation fee and every SSA is open to all students. This academic year, local SSAs will be held in 19 cities and more than 1000 students will have a chance to work on their self-development.
Tech lab
A Short Introduction to Cryptography
Author: Károly Gabányi
In this article, I am going to present the basics of cryptography in a short, understandable way. Cryptography is the science of secret communication. It has influenced the history of mankind for more than two thousand years and was mostly used by governments and the military.
Caesar Cipher
In antiquity, during the Gallic Wars, Julius Caesar used the Caesar cipher, the essence of which was to shift each letter by 3 places forward in the alphabet. The last 3 letters of the alphabet were represented by its first 3 letters. If you knew the rule, it was easy to encipher or decipher a text. Of course, you can use any number to shift the letters. But remember, once the number of letters in the alphabet is exhausted, the shifts repeat. For example, in the English alphabet there are 26 letters, so there is no difference between shifting by 2 or by 28. Decoding an encoded text is, in this case, pretty easy. It is possible to do an exhaustive key search, because there are only 26 keys (0 is also an option). Take an encrypted word and try to decode it with every possible shift. If the result is a meaningful word, you have probably found the key.
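A minimal implementation of the Caesar cipher might look like this (uppercase English alphabet only; non-letters pass through unchanged):

```python
def caesar(text, shift):
    """Shift each letter of an uppercase message by `shift` places, wrapping around."""
    result = []
    for ch in text:
        if ch.isalpha():
            result.append(chr((ord(ch.upper()) - ord("A") + shift) % 26 + ord("A")))
        else:
            result.append(ch)
    return "".join(result)

print(caesar("ATTACK AT DAWN", 3))    # DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))   # ATTACK AT DAWN
```

The exhaustive key search described above is then a 26-iteration loop: `for shift in range(26): print(caesar(ciphertext, -shift))`, after which you pick the line that reads as meaningful text.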
Simple Substitution Ciphers
Having learned from the Caesar cipher's weaknesses, let's reorder the letters of the alphabet randomly and pair each of them with the letter originally in its place. Now each letter is shifted by its own unique amount. You can use the same key for encryption and decryption; you just have to know
which letter maps to which. True, the key is long and difficult to memorize. To do a key search in this case, you have to try all possible orderings of the letters. With 26 letters, this means 26! keys, which is approximately 4·10^26.
But there is another way to decrypt an encrypted message in this case, too: use the statistics of the language. Each letter has an expected relative frequency in a particular language. To break the cipher, you compare the statistics of the encrypted text with the statistics of the language of the original text. About 200 characters of encrypted text are almost sufficient to build its statistics. Of course, there is no guarantee that the text statistics agree precisely with the statistics of the language. A few popular letters will probably dominate the original text, so their ciphered equivalents can easily be identified. The non-dominating letters can then be found by means of exhaustive key search or deduction. For an example, see the tables above: knowing that the original text is in English, we can deduce that the letter X probably decrypts to A, and the letter S probably decrypts to C.
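The first step of such a statistical attack, counting relative letter frequencies in the ciphertext, can be sketched as follows (pairing the result with known English frequencies is illustrative, not a complete solver):

```python
from collections import Counter

# The most frequent letters in English, in rough order; the top ciphertext
# letters become candidates for these plaintext letters.
ENGLISH_TOP = ["E", "T", "A", "O", "I", "N"]

def frequency_profile(ciphertext):
    """Return the most frequent ciphertext letters with their relative frequency in %."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return [(ch, round(100 * n / total, 1)) for ch, n in counts.most_common(6)]

# The top entries here would tentatively be matched against ENGLISH_TOP.
print(frequency_profile("XQQXSN XQ GXVT"))
```

The rest of the attack is the deduction described above: fix the most likely pairings, then resolve the rarer letters by trial and context.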
The bigram cipher
The bigram cipher is a better option for preventing a deciphering attack through the statistics of the language. Simple substitution is used on pairs of consecutive letters, so-called bigrams. Now the key is very long, and the space containing all the keys (the keyspace) becomes very large, yet the encrypted text can still be attacked statistically: it is highly likely that long messages are dominated by a few bigrams. Thus, bigrams are somewhat better, but the method retains the same weakness.
The one-time pad
Based on the previous historical examples, we can now see what kind of cipher we need. All of the ciphers above, which provide only practical security, can be broken with an exhaustive key search. The one-time pad is a classic example of a perfectly secure cipher system: even if you try all of the elements of the keyspace, you cannot break it. This is because, in the one-time pad, every key is used only once. A one-time pad uses a large, non-repeating set of truly random keys. The coder uses each key on one original text to encrypt it, and after that the key is never used again. Using a key only once is, however, not very practical.
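The one-time pad is easy to express with XOR (a sketch only; the security argument holds only if the key is truly random, as long as the message, and never reused):

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR the message with a key of the same length."""
    assert len(key) == len(plaintext), "the pad must be as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"MEET AT NOON"
key = secrets.token_bytes(len(message))  # truly random, and must never be reused
ciphertext = otp_encrypt(message, key)

# XOR is its own inverse, so decryption is the same operation with the same key:
assert otp_encrypt(ciphertext, key) == message
```

Reusing the key even once breaks the scheme: XORing two ciphertexts encrypted under the same pad cancels the key and leaks the XOR of the two plaintexts.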
Symmetric cryptography
If two parties want to communicate securely, they will encrypt their communication. The parties agree on a cryptosystem and on a key. Before dispatching a message, they encrypt it using the agreed algorithm and key. The receiver - the other party - decrypts the encrypted text with the same algorithm and key to read the original text. It is important to keep the algorithm and the key secret for as long as the message must remain secret. The two basic types of symmetric algorithms are block ciphers and stream ciphers: block ciphers operate on blocks of data, while stream ciphers operate on streams of data.
Stream Ciphers
Stream ciphers encrypt plaintext to ciphertext one bit at a time. A keystream generator produces a stream of bits, and this keystream is added (XORed) to the stream of plaintext to produce the stream of ciphertext. Encryption security depends on the keystream generator. If it produces a stream of 0s, the encoded text will be the same as the plaintext. If the keystream has a pattern, the algorithm does not provide proper security. If the keystream generator produces a stream of truly random bits, then we get the one-time pad and perfect security. A good keystream generator generates a stream of bits that looks random.
Block Ciphers
By using a key, a block cipher transforms an input block into an encrypted output block. In order to reproduce the original message from the encrypted message, the transformation must be invertible. Because the number of elements in a keyspace is finite, a block cipher can be attacked using an exhaustive key search; on average, the attacker only has to try about half of the possible keys. To protect against an exhaustive key search, we should use a longer key. There are other options for an algorithmic attacker, such as exploiting the structure of the block cipher, and we should think about this during the block cipher design process. For a block cipher there are 3 important requirements:
Completeness: all bits of the output should depend on all bits of the input and on all bits of the key.
Statistical independence: there must not be any statistical relationship between the original blocks and the encrypted blocks.
Avalanche effect: in case of a 1-bit change in the input, the likelihood of each output bit changing is 50%.
Block ciphers operate on given sizes of blocks of data; they are more frequently used and easier to implement in software. Stream ciphers can be implemented very efficiently in silicon, so instead of implementing them in software, the logic is easier to build in hardware.
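The avalanche effect can be illustrated with SHA-256 (a hash function rather than a block cipher, but designed to the same criterion): flipping a single input bit changes roughly half of the 256 output bits.

```python
import hashlib

def bits_changed(a: bytes, b: bytes) -> int:
    """Count how many bits differ between the SHA-256 digests of two inputs."""
    da = hashlib.sha256(a).digest()
    db = hashlib.sha256(b).digest()
    return sum(bin(x ^ y).count("1") for x, y in zip(da, db))

msg = b"block cipher input"
flipped = bytes([msg[0] ^ 1]) + msg[1:]  # flip the lowest bit of the first byte

# A 1-bit input change flips on the order of 128 of the 256 output bits.
print(bits_changed(msg, flipped), "of 256 bits changed")
```

If the count were consistently far from 128, an attacker could learn something about the input from the output, which is exactly what the avalanche requirement rules out.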
Public-Key Cryptography (Asymmetric cryptography)
The weakness of a symmetric algorithm is the key. If someone knows it, he/she can decrypt the ciphertext or modify the message. If the communicating parties are far away from each other, they have to share their keys, and a third party can eavesdrop on this sharing process, after which the whole communication is no longer secure. In public-key cryptography, we use two different (but related) keys - one public and one private. Anyone with the public key can encrypt a message, but in order to decrypt one, you need the corresponding private key. Because the keys are paired together but are not identical, we call it asymmetric cryptography. It is computationally hard to deduce the private key from the public key.
So if you have a key pair, you make the public key available and keep the private key secret; others can then send you an encrypted message. The sender just has to encrypt the message with your public key and send the encrypted data to you. If anyone else has access to the channel and can see the encrypted data, they cannot decrypt it. You, with your private key, are the only one who can decrypt the message.
RSA
The Rivest-Shamir-Adleman algorithm is a cryptosystem for public-key encryption. It uses two different but mathematically linked keys. In RSA, either the public or the private key can be used to encrypt a message; to decrypt it, you need the opposite key from the one used to encrypt. RSA's security is based on the difficulty of factoring large integers that are the product of two large prime numbers. Multiplying these numbers is easy, but identifying the original prime numbers from the product is considered computationally unfeasible. The strength of the encryption depends on the key size. Doubling the key length leads to an exponential increase in strength but decreases performance. RSA keys used nowadays are 1024 or 2048 bits long. With today's computers, you
cannot compute the private key within a reasonable time. RSA has become the most widely used asymmetric algorithm; many protocols use it, including SSH, OpenPGP, and SSL/TLS, and it is also used by software such as web browsers. In quantum computing, the so-called Shor algorithm could recover the private key of RSA, but quantum computers are not yet powerful enough. Should powerful quantum computers become commonplace, there are alternatives to RSA: cryptosystems which remain secure even if the attacker has a quantum computer are called post-quantum cryptosystems.
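As a concrete (and deliberately insecure) illustration, here is a toy RSA round trip with the classic textbook primes 61 and 53 — numbers far too small for real use, chosen only to make the arithmetic visible:

```python
from math import gcd

# Toy parameters -- real RSA uses primes hundreds of digits long.
p, q = 61, 53
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)    # c = m^e mod n

def decrypt(c: int) -> int:
    return pow(c, d, n)    # m = c^d mod n

message = 65
cipher = encrypt(message)
print(cipher, decrypt(cipher))
```

Factoring n = 3233 back into 61 × 53 is trivial here, which is exactly why real keys are 1024 or 2048 bits long.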
Hybrid cryptosystems Symmetric algorithms are fast to execute but require a shared secret key; public-key algorithms avoid the key-sharing problem but are much slower to execute. Because of this, hybrid systems have been devised: a public-key algorithm is used to encrypt the symmetric key, and the encrypted key is then sent to the other parties, who decrypt it with their private key. With such key distribution the symmetric key remains secret, and the parties use it to communicate securely and efficiently.
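The structure of such a hybrid scheme can be sketched in a few lines. This is an illustration only: the "symmetric cipher" here is a hash-based XOR keystream standing in for a real cipher like AES, the RSA parameters are the toy ones above, and all names are hypothetical:

```python
import hashlib
from math import gcd

# --- Toy RSA key pair (hopelessly small, illustration only) ---
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
assert gcd(e, phi) == 1
d = pow(e, -1, phi)

# --- Stand-in symmetric cipher: XOR with a SHA-256-derived keystream ---
def xor_stream(key: bytes, data: bytes) -> bytes:
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Sender: pick a session key, RSA-encrypt it, symmetrically encrypt the message.
session_key = 42                      # toy key; must fit under the modulus n
wrapped_key = pow(session_key, e, n)  # only the private key can unwrap this
ciphertext = xor_stream(session_key.to_bytes(2, "big"), b"meet at noon")

# Receiver: unwrap the session key with the private key, then decrypt.
recovered_key = pow(wrapped_key, d, n)
plaintext = xor_stream(recovered_key.to_bytes(2, "big"), ciphertext)
print(plaintext)
```

The slow public-key operation is performed only once, on the short session key; all bulk data goes through the fast symmetric cipher.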
The encryption systems described above are just the tip of the iceberg, a teaser for this field of science. There are many other protocols, standards, systems, and implementations, and the topic is continuously evolving: something that seems secure now may later be found to be weak, and there are always new ideas for breaking or protecting security. This was a short summary of the basics for those who are not yet familiar with the domain of cryptography.
REFERENCES [1] Buttyán Levente & Vajda István: Kriptográfia és alkalmazásai [2] Fred Piper & Sean Murphy: Cryptography: A Very Short Introduction [3] Bruce Schneier: Applied Cryptography [4] http://searchsecurity.techtarget.com/definition/RSA [5] Daniel J. Bernstein & Tanja Lange: Post-quantum cryptography, https://www.nature.com/articles/nature23461 [6] http://searchsecurity.techtarget.com/definition/asymmetric-cryptography
40th edition
Quantum Computing Principle and Architectures
Authors: Károly Gabányi & József Mák
The basic unit of quantum information is the qubit, which is stored and processed by a quantum computer.
Qubit
To indicate the state of qubits, the Dirac bra-ket notation is used. A qubit is a two-state quantum system whose states we denote by |0⟩ and |1⟩; these are what we call the “standard” basis vectors. A qubit can be in a superposition of its basis states, which in informal language is often phrased as “the qubit can take the value one and zero at the same time”. Mathematically, it means that linear combinations of the basis states can describe the state of a single qubit. If the qubit is in the state Ψ = a|0⟩ + b|1⟩, with complex coefficients a and b, then upon observing the qubit we find it either in state |0⟩ or |1⟩, with probability |a|² and |b|², respectively. (This weird property of the atomic world, namely that we can only talk about the physical behavior of systems in terms of probabilities, is closely connected to the renowned Schrödinger’s cat paradox, and has puzzled physicists and philosophers alike.)
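A back-of-the-envelope way to see what the probabilities |a|² and |b|² mean is to simulate repeated measurements of a single qubit state. This is a plain-Python sketch with made-up amplitudes, not a real quantum simulation:

```python
import random

# State Ψ = a|0⟩ + b|1⟩ with |a|² + |b|² = 1 (real amplitudes for simplicity).
a, b = 0.6, 0.8
assert abs(a**2 + b**2 - 1.0) < 1e-12

def measure() -> int:
    """Collapse the state: returns 0 with probability |a|², else 1."""
    return 0 if random.random() < abs(a) ** 2 else 1

random.seed(0)
shots = 10_000
ones = sum(measure() for _ in range(shots))
# The observed frequency of outcome |1⟩ approaches |b|² = 0.64.
print(ones / shots)
```

Each individual measurement gives only 0 or 1; the amplitudes reveal themselves only in the statistics over many shots.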
Fundamental concepts
Superposition and entanglement are the two fundamental concepts in quantum computing.
superposition: The weighted sum of two or more states is called a superposition. As described above, it mathematically corresponds to linear combinations of quantum states, a|0⟩+b|1⟩, where the absolute squares |a|² and |b|² of the complex coefficients give the probability of finding the system in state |0⟩ and |1⟩, respectively.
entanglement: In an entangled state, the whole system can be described definitely even though its parts cannot; there is a correlation between the qubits. If you measure the state of one of two entangled qubits, the other qubit’s state is also immediately known to the observer, even if it is thousands of kilometers away. This weird, non-classical feature of quantum mechanics plays a key role in making the quantum computer so powerful.
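The perfect correlation can be mimicked for intuition. The toy sketch below reproduces only the measurement statistics of the Bell state (|00⟩ + |11⟩)/√2 in one fixed basis — it does not capture genuine entanglement, which also shows correlations in other measurement bases:

```python
import random

def measure_bell_pair() -> tuple:
    """Measuring the Bell state (|00⟩ + |11⟩)/√2 in the standard basis:
    outcomes 00 and 11 each occur with probability 1/2, never 01 or 10."""
    outcome = random.randint(0, 1)
    return outcome, outcome  # the two qubits always agree

random.seed(1)
pairs = [measure_bell_pair() for _ in range(1000)]
print(all(q1 == q2 for q1, q2 in pairs))
```

Both outcomes are random on their own, yet the two results always agree — the correlation described in the text.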
Because the absolute squares of the numbers a and b have to add up to one, and only their relative complex phase matters, we can represent the state on the surface of a sphere. (Two complex numbers have four degrees of freedom, and these two conditions imposed on them leave only two degrees of freedom free.) This so-called Bloch sphere is a very useful visualization tool for qubit state manipulation.
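In formulas, the two remaining degrees of freedom are the polar and azimuthal angles θ and φ of the Bloch sphere, in terms of which any single-qubit state can be written (a standard parametrization, stated here for completeness):

```latex
\left|\Psi\right\rangle \;=\; \cos\frac{\theta}{2}\,\left|0\right\rangle
  \;+\; e^{i\varphi}\,\sin\frac{\theta}{2}\,\left|1\right\rangle ,
\qquad 0 \le \theta \le \pi,\quad 0 \le \varphi < 2\pi .
```

The north pole (θ = 0) corresponds to |0⟩, the south pole (θ = π) to |1⟩, and every other point on the surface is a superposition.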
Algorithms Just as classical computers perform manipulations on bits, quantum computers perform manipulations on qubits. Below we introduce some of the basic algorithms that quantum computers use.
REFERENCES [1] “IBM Q Experience beginner user’s guide,” https://quantumexperience.ng.bluemix.net/proxy/tutorial/beginners-guide/introduction.html, accessed: 2018-02-18. [2] “IBM Q Experience beginner user’s guide,” https://quantumexperience.ng.bluemix.net/qx/tutorial?sectionId=full-user-guide&page=introduction, accessed: 2018-02-18. [3] D. Stebila and M. Mosca, “Post-quantum key exchange for the internet and the open quantum safe project,” in International Conference on Selected Areas in Cryptography. Springer, 2016, pp. 14–37. [4] G. Kurizki, P. Bertet, Y. Kubo, K. Mølmer, D. Petrosyan, P. Rabl, and J. Schmiedmayer, “Quantum technologies with hybrid systems,” Proceedings of the National Academy of Sciences, vol. 112, no. 13, pp. 3866–3873, 2015.
Grover’s Algorithm
It is also called Grover’s search algorithm. A quantum computer has a speed advantage over a classical computer when searching databases: Grover’s algorithm speeds up an unstructured search problem quadratically, finding a marked item among N entries in roughly √N queries instead of N.
Deutsch-Jozsa Algorithm
Let’s take a function f(x) which takes an n-bit input and returns 0 or 1. The function can be constant, meaning that it gives the same value on all inputs, or balanced, meaning that it gives 0 on exactly half of the inputs and 1 on the other half. The goal is to decide whether f(x) is constant or balanced. Classically, this requires 2^(n-1) + 1 function evaluations in the worst case, but in the quantum world, using the Deutsch-Jozsa algorithm, the question can be answered after a single evaluation.
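The classical worst case is easy to see in code. This sketch (with hypothetical helper names) decides constant-vs-balanced for a function promised to be one of the two, counting how many evaluations it needs:

```python
from itertools import product

def classify(f, n: int) -> tuple:
    """Classically decide whether f (promised constant or balanced) is
    constant or balanced, counting evaluations. Worst case: 2^(n-1) + 1."""
    evals = 0
    first = None
    for bits in product([0, 1], repeat=n):
        value = f(bits)
        evals += 1
        if first is None:
            first = value
        elif value != first:
            return "balanced", evals       # two different outputs seen
        if evals == 2 ** (n - 1) + 1:
            return "constant", evals       # half + 1 agreed: must be constant
    return "constant", evals

n = 4
constant_f = lambda bits: 1
balanced_f = lambda bits: bits[0]          # 0 on half the inputs, 1 on the rest

print(classify(constant_f, n))   # needs the full 2^(n-1)+1 = 9 evaluations
print(classify(balanced_f, n))
```

A constant function forces the classical checker through all 2^(n-1) + 1 evaluations, while Deutsch-Jozsa settles the question with one quantum query.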
Shor’s algorithm
The security of our world relies on a mathematical conjecture: that finding the prime factors of a large integer is a hard problem, currently not solvable in polynomial time. This conjecture is exploited by the RSA algorithm, one of the most popular algorithms in security. However, Shor’s algorithm can solve prime factorization in polynomial time on a quantum computer, which would render RSA insecure. Designing cryptosystems that resist attacks by quantum computers is a current topic of cryptography, and protocols for quantum resistance already exist.
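The number-theoretic core of Shor's algorithm can be followed classically on a tiny example. The quantum computer's only job is the period finding, done here by brute force; everything else is classical post-processing (function names are illustrative):

```python
from math import gcd

def order(a: int, N: int) -> int:
    """Smallest r > 0 with a^r ≡ 1 (mod N) -- the step a quantum
    computer performs efficiently; here found by brute force."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N: int, a: int) -> tuple:
    assert gcd(a, N) == 1          # otherwise gcd(a, N) is already a factor
    r = order(a, N)
    assert r % 2 == 0              # this choice of a must give an even order
    y = pow(a, r // 2, N)
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_factor(15, 7))   # order of 7 mod 15 is 4; yields factors 3 and 5
```

For N = 15 and a = 7 the order is r = 4, so y = 7² mod 15 = 4, and gcd(3, 15) = 3 and gcd(5, 15) = 5 recover the prime factors. Classically, the order-finding step blows up exponentially with the size of N; Shor's quantum period finding is what makes it polynomial.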
ARCHITECTURES FOR QUANTUM COMPUTING
In classical computing devices, largely different principles are used for long-term and short-term information storage, for information processing, and for information forwarding. In a quantum computer this should be no different, because qubits that are able to store information for a long time are at the same time hard to manipulate, whereas qubits that are easily manipulable quickly lose information to the environment. Below you can read about some interesting qubit implementations; for a thorough introduction to the topic, see ref. [4], which discusses how different physical implementations can serve as different parts of quantum computing architectures.
Fig. 1: Schematic of a fullerene-based spin quantum computer. Paramagnetic atoms are trapped inside fullerene molecules, and thus their electron spin is better protected from environmental decoherence.
Molecular qubits
The first successful manipulations of quantum bits were performed by researchers at IBM on the spins of nuclei in molecules, using liquid-state nuclear magnetic resonance (NMR) [5], [6] (the latter article being the implementation of Shor’s algorithm to decompose 15 into its prime factors 3 and 5), but it quickly became apparent that liquid-state NMR is not an appropriate platform because of its lack of scalability and small signal-to-noise ratio. [7] Nothing stops solid-state solutions from working, however, and as a proof of concept, in [8] electronic spins in special molecules were coupled to a superconducting electric circuit and were shown to function as qubits. More exotic solutions include protecting the spin state of an electron or nucleus from environmental noise by placing it inside a fullerene cage. [9] (See fig. (1) for an illustration.)
Solid-state qubits
Kane [12] proposed to encode quantum information into the spins of donor atoms in a silicon crystal, and since then the idea has come to realization. One group has presented a quantum memory with a 39-minute lifetime [13] (extremely long compared to the few-microsecond lifetimes of the presently best working circuit and cavity QED solutions, see later), whereas others have experimentally demonstrated all-electric control (i.e. without external magnetic fields) of the nuclear spin of a phosphorus dopant, using the donor-bound electron as a transducer. [14] It is noteworthy that silicon-based qubits have the possible advantage that they may be fabricated with existing integrated-circuit technology. In other experiments, qubits made of superconductors (more on them later in this article) were coupled to electronic spins enclosed in diamond. [15] Whereas information processing on superconducting qubits is fast, they lose information to the environment quickly (a process known as decoherence). At the same time, spins at so-called nitrogen-vacancy centers in diamond (in the crystal lattice an atom is missing, and a carbon atom next to the vacancy is substituted with nitrogen) are well protected from the environment; thus they are well suited as a memory, but they are hard to manipulate. Combining the advantages of the two technologies, efforts are being made to use superconducting qubits as processors and well-protected spins as memory. (See fig. (2) for an illustration.)
Quantum computing with ultra-cold atoms and ions
One of the recent highlights of this field is the 51-qubit quantum simulator implemented in a joint collaboration of Russian and American scientists at Harvard. The machine uses rubidium atoms trapped with optical tweezers, which are subsequently coupled together with lasers and, after a time of free evolution, probed via atomic fluorescence. [10] This architecture is possibly capable of investigating phenomena in many-body quantum physics that are out of reach for current classical computers, and it has been used to simulate the dynamics of a simple quantum mechanical model system. (The idea to simulate quantum systems with other quantum systems came, among others, from the Nobel laureate R. P. Feynman. [11])
Fig. 2: Sketch of the experimental apparatus used to couple electronic spins at nitrogen-vacancy centers in diamond to superconducting qubits. The superconducting qubits serve as information processing elements, whereas the electron spins in the diamond are used as memory.
[5] I. L. Chuang, L. M. Vandersypen, X. Zhou, D. W. Leung, and S. Lloyd, “Experimental realization of a quantum algorithm,” Nature, vol. 393, no. 6681, p. 143, 1998. [6] L. M. Vandersypen, M. Steffen, G. Breyta, C. S. Yannoni, M. H. Sherwood, and I. L. Chuang, “Experimental realization of Shor’s quantum factoring algorithm using nuclear magnetic resonance,” Nature, vol. 414, no. 6866, p. 883, 2001. [7] W. S. Warren, “The usefulness of NMR quantum computing,” Science, vol. 277, no. 5332, pp. 688–1690, 1997. [8] C. Bonizzoni, A. Ghirri, M. Atzori, L. Sorace, R. Sessoli, and M. Affronte, “Coherent coupling between vanadyl phthalocyanine spin ensemble and microwave photons: towards integration of molecular spin qubits into quantum circuits,” Scientific Reports, vol. 7, no. 1, p. 13096, 2017. [9] W. Harneit, “Spin quantum computing with endohedral fullerenes,” in Endohedral Fullerenes: Electron Transfer and Spin. Springer, 2017, pp. 297–324. [10] H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner et al.,
Circuit Quantum Electrodynamics
Quantum computing with ultra-cold atoms and ions is often done in superconducting electromagnetic resonator cavities, where coupling between a single photon and a single atom in the cavity can be achieved. This technology bears the fancy name cavity quantum electrodynamics (QED) [16], and for a long time it was the most successful platform for performing qubit operations. [17] At the beginning of the 2000s, however, two groups of scientists from ETH Zürich and Yale University presented circuit quantum electrodynamics [18], [19], in which they used superconducting microwave integrated circuits to manufacture qubits and couple them. (See fig. (3) for an illustration.) Since its advent, circuit QED has had a sparkling career, best marked perhaps by Google, IBM, and Intel trying to build their quantum computers using this architecture.
Fig. 3: Photograph of a 45-qubit circuit quantum electrodynamics chip.
All three companies [20], [21], [22] have promised or have demonstrated a 49-qubit chip, but the desired quantum supremacy (building a quantum computer that outperforms the best classical supercomputers) is yet to come, due to e.g. hardships with entangling all 49 qubits together and controlling their states. It is also noteworthy that IBM has provided free cloud-based access to programming its 5-qubit chip.
Photonic qubits
Not surprisingly, using photons propagating in air or vacuum, instead of the electromagnetic modes of a microwave cavity, is best suited to transmitting quantum information. Although photonic chips have also been made, and Shor’s algorithm has been realized on one [23], the recent news that shook the world was Chinese scientists’ experiment creating entangled photon pairs over 1200 km. [24] Creating and forwarding entangled photons (non-classical correlations between the quantum mechanical states of photons) over large distances is key to many techniques that are able to encrypt a message securely. Using this technology, the same scientists made a video call secure from eavesdropping between Vienna and Beijing. [25]
SUMMARY The examples in this article clearly show that we are living in a very exciting age, where big changes in computation and telecommunication technology are coming, and we have to be prepared for them. The timeliness of these topics is also indicated by the 1-billion-euro flagship program the European Union has launched to develop quantum technologies. Will there be true quantum computers in the next few years? Will we be able to communicate securely through quantum links in our lifetimes? Only time will tell.
“Probing many-body dynamics on a 51-atom quantum simulator,” Nature, vol. 551, no. 7682, p. 579, 2017. [11] R. P. Feynman, “Simulating physics with computers,” International Journal of Theoretical Physics, vol. 21, no. 6-7, pp. 467–488, 1982. [12] B. E. Kane, “A silicon-based nuclear spin quantum computer,” Nature, vol. 393, no. 6681, p. 133, 1998. [13] K. Saeedi, S. Simmons, J. Z. Salvail, P. Dluhy, H. Riemann, N. V. Abrosimov, P. Becker, H.-J. Pohl, J. J. Morton, and M. L. Thewalt, “Room-temperature quantum bit storage exceeding 39 minutes using ionized donors in silicon-28,” Science, vol. 342, no. 6160, pp. 830–833, 2013. [14] A. J. Sigillito, A. M. Tyryshkin, T. Schenkel, A. A. Houck, and S. A. Lyon, “All-electric control of donor nuclear spin qubits in silicon,” Nature Nanotechnology, vol. 12, no. 10, p. 958, 2017. [15] C. Grèzes, Y. Kubo, B. Julsgaard, T. Umeda, J. Isoya, H. Sumiya, H. Abe, S. Onoda, T. Ohshima, K. Nakamura et al., “Towards a spin-ensemble quantum memory for superconducting qubits,” Comptes Rendus Physique, vol. 17, no. 7, pp. 693–704, 2016. [16] S. M. Dutra, Cavity Quantum Electrodynamics: The Strange Theory of Light in a Box. John Wiley & Sons, 2005. [17] “Advanced information on the Nobel Prize in Physics given in 2012,” https://www.nobelprize.org/nobel prizes/ physics/laureates/2012/ advanced-physicsprize2012 02.pdf, accessed: 2018-02-18. [18] D. Vion, A. Aassime, A. Cottet, P. Joyez, H. Pothier, C. Urbina, D. Esteve, and M. H. Devoret, “Manipulating the quantum state of an electrical circuit,” Science, vol. 296, no. 5569, pp. 886–889, 2002. [19] A. Blais, R.-S. Huang, A. Wallraff, S. M. Girvin, and R. J. Schoelkopf, “Cavity quantum electrodynamics for superconducting electrical circuits: An architecture for quantum computation,” Physical Review A, vol. 69, no. 6, p. 062320, 2004.
[20] “Google reveals blueprint for quantum supremacy,” https://www.technologyreview.com/s/609035/google-reveals-blueprint-for-quantum-supremacy/, accessed: 2018-02-18. [21] “Quantum computing: Breaking through the 49 qubit simulation barrier,” https://www.ibm.com/blogs/research/2017/10/quantum-computing-barrier/, accessed: 2018-02-18. [22] “Intel reveals 49-qubit quantum computing chip,” https://www.top500.org/news/intel-reveals-49-qubit-quantum-computing-chip/, accessed: 2018-02-18. [23] A. Politi, J. C. Matthews, and J. L. O’Brien, “Shor’s quantum factoring algorithm on a photonic chip,” Science, vol. 325, no. 5945, pp. 1221–1221, 2009. [24] J. Yin, Y. Cao, Y.-H. Li, S.-K. Liao, L. Zhang, J.-G. Ren, W.-Q. Cai, W.-Y. Liu, B. Li, H. Dai et al., “Satellite-based entanglement distribution over 1200 kilometers,” Science, vol. 356, no. 6343, pp. 1140–1144, 2017. [25] “Beijing and Vienna have a quantum conversation,” http://physicsworld.com/cws/article/news/2017/ sep/29/beijing-and-vienna-have-a-quantum-conversation, accessed: 2018-02-18.
3D Printing
Author: Marta Błaszczyk
Why should we take a closer look at 3D printing? How will 3D printing affect our lives? Have you ever imagined printing your own clothes? Have you ever thought about printing your own house? Or that printing parts of the human body will be possible?
I know that it is hard to imagine, but 3D printing will have a significant impact on our society and on the way we produce tools, clothes, and even body parts. What is 3D printing and how does it actually work? 3D printing, or additive manufacturing, is a process of making three-dimensional solid objects from a digital file by adding material layer by layer. (wikipedia.org) It all starts with the creation of a 3D model on your computer. This digital design is, for example, a CAD (Computer-Aided Design) file. A 3D model is either created from the ground up with 3D modeling software or based on data generated with a 3D scanner; with a 3D scanner, you are able to create a digital copy of an object. (3dprinting.com) Now you have the basics of what 3D printing is and how you can actually start doing it. The questions are: why do we need it? How will 3D printing change the way we live?
1. Make it how YOU WANT it to be
While buying things, do you ever have the feeling that you would change something? That maybe you would like something bigger, smaller, or just in a different color? Companies care about their clients for a simple reason: clients are what they earn money from. If clients know that they can order something exactly as they envision it, that company gains an edge over its competitors. That is why customization will be a huge part of business in the future, and it will be achieved thanks to 3D printing.
2. Design and architecture
We all live somewhere, whether in a flat, a house, or a dorm, and up until now these were all built by architects and engineers. Imagine that this work could be done by one thing: 3D printers. Last year a Chinese construction company 3D printed an entire two-story house on-site in 45 days! (mashable.com) In the future, we can expect more advanced projects to be built using 3D printers. For instance, the Dubai-based construction firm Cazza Technologies has announced plans to build the world’s first 3D-printed skyscraper. (independent.co.uk) What else can we expect? Maybe you will print your own house?
3. Medicine
Printing organs? Is it possible? Is it even safe? ETH Zurich researchers have made a soft artificial heart, cast in silicone and modeled on a real human heart. Many artificial hearts have been created before; however, this one is the very first to be 3D printed! (3dprint.com) In several tests, this silicone heart was shown to behave like a real human heart. Unfortunately, due to some remaining problems, it cannot yet be implanted in a real human body. Nicholas Cohrs, the doctoral student who developed the silicone artificial heart, explains: “...Our goal was not to present a heart ready for implantation, but to think about a new direction for the development of artificial hearts.” (3dprint.com) Imagine that in the future this becomes possible: printing organs for people who suffer from different diseases will save their lives. 3D printing will definitely change the entire medical industry for the better. Does this mean that even a whole human body could be printed? This technology will affect many fields: from business, through architecture, design, and fashion, to medicine. So far it is seen as a huge opportunity for mankind to grow and improve. Thanks to it, we will be able to build our houses more quickly, create and customize our own products, and, if our organs stop working, print replacements for what is damaged.
We will all be affected by this technology. Our lives will be changed by 3D printing. Is that a good thing?
Marta Błaszczyk
I have been the EESTech Challenge PR Coordinator and currently I am the VC-EA on the Board of EESTEC. I am an Architecture student, and since I joined EESTEC I have been fascinated with technology, which led me to 3D printing, which is also part of my studies. I am inspired by how this field can evolve in the near future.
PLC - Brain of every Industry Author: Tarik Omerćehajić
PLC programming - the birth of ladder logic
Again with the abbreviations? Well, it’s a common thing in the tech industry now. So, what does PLC stand for and where can I use it? To get the answer to this question, we need to go all the way back to New Year’s Day 1968. On this day, the Programmable Logic Controller (or PLC for short, as you guessed) was born. The main reason for creating this kind of device was to replace complicated relay-based machine control systems. One of the people who worked on the first PLC was Dick Morley, who is considered the “father” of the PLC.
Where and why should I use a PLC?
PLCs are mostly used in the car industry, but they can be found in many industries and in many machines where a high level of automation is needed. They are designed to withstand the harshest environments. A single PLC processor is capable of controlling anything from one process to a whole manufacturing empire, even internationally.
Design of modern PLC device
How to control one of these devices?
PLC programming is based on the relay schematic layout, more popularly called Ladder Logic these days. Back when the PLC was invented, its creators needed a way to make the new product easily understandable to the technicians and maintenance electricians who would use it. Since they were already using relay schematic layouts, this kind of layout was used for developing the first PLC programming software. Ladder Logic is still one of the most popular ways of programming PLCs.
Starting with the basics, there are two types of contacts in PLCs: a normally open contact and a normally closed contact, where one closes the electrical circuit when activated and the other opens it. These contacts represent real inputs such as sensors, switches, and buttons. An output in Ladder Logic is represented by a circle-shaped object, almost always on the right side of the diagram, and each output can be used only once in the program. Outputs are usually motors, pumps, lights, timers, and so on; they can also be used as inputs to other rungs in the ladder diagram. Most basic programs can be completed using only inputs and outputs. But sometimes we want to turn something on after a delay, or count the number of times a switch is hit. For these tasks, the PLC industry uses Timers and Counters.
The main task of a timer is to keep an output ON for some specific time. The three types of timers commonly used are Delay-OFF, Delay-ON, and Delay-ON-Retentive; timers can also be used to determine whether an event fails to occur. Counters are primarily used, as their name says, for counting items, such as counting the number of passing items on a line. There are three types of counters: Up counters, Down counters, and Up/Down counters.
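To make the Delay-ON behavior concrete, here is a small simulation in Python. Real PLCs are programmed in ladder logic or other IEC 61131-3 languages, not Python; this sketch, with hypothetical names, only mimics the timer's semantics across scan cycles:

```python
class DelayOnTimer:
    """Delay-ON semantics: the output turns ON only after the input has
    been continuously ON for `preset` consecutive scan cycles."""
    def __init__(self, preset: int):
        self.preset = preset
        self.elapsed = 0

    def scan(self, input_on: bool) -> bool:
        if input_on:
            self.elapsed = min(self.elapsed + 1, self.preset)
        else:
            self.elapsed = 0          # input dropped: the timer resets
        return self.elapsed >= self.preset

timer = DelayOnTimer(preset=3)
inputs = [True, True, False, True, True, True, True]
outputs = [timer.scan(x) for x in inputs]
print(outputs)
```

Note how the brief drop in the input resets the accumulated time, so the output only comes ON after three uninterrupted ON scans.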
To conclude, PLCs are well adapted to a wide range of automation tasks. You will find them in virtually all new, and most existing, manufacturing, processing, printing, and packaging machinery. Ladder Logic keeps improving, and with modern PLCs you can use almost any kind of instruction to create your program.
Autonomous Vehicles – Is the Future Already Here? Author: Mihajlo Džever
First attempts to make a vehicle fully autonomous were made in the 1920s, but it was only in the 1980s that the first truly autonomous prototype was developed, and it still needed a person behind the steering wheel to fix its mistakes. Since then, many breakthroughs have been made, and we seem to be as close as it gets to developing fully autonomous cars, ones which don’t need a human behind the steering wheel. An autonomous vehicle is one that can guide itself without human conduction. To achieve this, it uses an array of sensors, cameras, radars, real-time 3D maps, and gigabytes of specialized software to “see” the road in front of it, behind it, and around every corner. It takes a constant stream of data coming from all corners of the vehicle and translates it into the motions of driving on freeways, city streets, and even suburban school zones. By doing this, a self-driving vehicle is able to navigate almost any terrain in any weather condition.
The two biggest players in developing and building the cars themselves are Google and Tesla, who both own fleets of driver-based cars modified to become self-driving. For instance, in 2009 Google successfully tested its autonomous driving systems on several Toyota Prius vehicles over ten uninterrupted 150-kilometer routes. Since then, Google has increased the level of autonomy to the point that there was no steering wheel in the car, and it has moved its tests into populated areas.
As the speed of development increased and many different kinds of autonomous systems were being researched, the need to classify autonomy arose, and today five levels of autonomy are described. The classification was done by SAE International, and the first to adopt it was the U.S. Department of Transportation. First-level vehicles have an assistance system for either steering or acceleration/deceleration, using information about the driving environment, with the expectation that the human driver performs all remaining aspects of the dynamic driving task. The second level adds both steering and acceleration/deceleration systems, while the driver must perform everything else. Third-level autonomous vehicles only need the driver to respond appropriately to a request to intervene. The fourth level is completely autonomous, but the driver can still take over and drive. Finally, the last level takes the driver completely out of the equation: he is neither needed nor can he take over control. But how will autonomous driving affect the everyday lives of ordinary citizens? According to its developers, it will make our lives much easier. It will allow elderly and young people, who can’t or are legally prohibited to drive, to stop wasting time finding transportation. It will give people the pleasure of doing something other than driving during long trips, and it will decrease the number of accidents and human casualties by excluding the human factor from driving. But what about people whose work is related to driving, such as taxi, bus, and truck drivers? What happens to them when a new technology arrives in a big way? Will there be a place for them in the new world, or will they be victims of this technology, as many were before? Nobody knows; the only thing left to do is to wait and see.
Renewable energy Author: Anastasia Choluriara
Just as there are many natural sources of energy, there are many renewable energy technologies. Solar is one of the best known, wind power is one of the most widespread, and hydropower is one of the oldest. Other renewable technologies harness geothermal energy, bioenergy, or ocean energy to produce heat or electricity. Renewable energy is produced using natural resources that are constantly replenished and never run out. Equally exciting are the new enabling technologies that help to manage renewable energy so it can be supplied day and night while strengthening the electricity grid. These enabling technologies include battery storage, supply prediction, and smart grid technologies.
Solar energy
Solar energy is energy generated from the sun’s heat or light. Solar power is energy captured from the sun and converted into electricity, or used to heat air, water, or other fluids.
Ocean energy
Ocean energy is a term used to describe all forms of renewable energy derived from the sea. There are two types of ocean energy: mechanical energy from tides and waves, and thermal energy from the sun’s heat.
Hydropower
Hydropower uses the force or energy of moving water to generate power. This power is called ‘hydroelectricity’.
Bioenergy Bioenergy is derived from biomass to generate electricity and heat, or to produce liquid fuels for transport. Biomass is any organic matter of recently living plant or animal origin, such as agricultural products, forestry products, municipal and other waste.
Geothermal Geothermal energy is stored as heat in the earth. The heat is generated by the natural decay over millions of years of radiogenic elements including uranium, thorium and potassium.
Hybrid/enabling technologies A hybrid technology is one that integrates a renewable energy generation technology with other energy generation systems, such as solar with gas or with wind.
Wind energy Wind energy is generated by converting wind currents into other forms of energy using wind turbines. Wind turbines convert the force of the wind into a torque (rotational force), which propels an electric generator to create electricity.
DEEP SPACE GATEWAY Lunar Space Station Author: Nikola Petrović
Although it was science fiction in the late ’60s, deep-space exploration has become pure reality. It all started with the first attempts at reaching Earth’s orbit, and not long afterwards humans built the first space station and sent it into orbit around our home planet. The International Space Station (ISS) was launched in 1998 and has been working successfully ever since. However, as humans tend to aim larger and larger, many countries around the globe have agreed to build a new space station that will be one of the first phases in a large space exploration project.
The Deep Space Gateway (DSG) project will provide the human race with one of the first space stations working under complete deep-space conditions. The ISS is protected from cosmic rays and the solar wind by the Earth’s magnetic field; the DSG, by contrast, will orbit in the lunar vicinity, thus being fully exposed to cosmic radiation, which is very important for space exploration research and further deep-space missions. It will help us understand how to fight cosmic radiation more effectively. Studying the effects of microgravity on the human body and ways to reduce cosmic radiation exposure will provide the scientific data and information crucial for the next phase of the Deep Space project: a journey to Mars. The reasoning behind building a space station that orbits the Moon comes down to energy efficiency and the harsh deep-space conditions mentioned before. As the name of the project says, it will be like a gate to other space missions. It will be used as a service center, as well as a starting point for deep-space transport of spacecraft. It will sit at the edge of the Earth’s strong gravity field, so launching spacecraft from it will need less energy. It could also act as a receiving facility for initial examinations after deep-space missions or lunar exploration: for example, samples taken on Mars or the Moon would be brought to the lunar space station for analysis, further study, and examination. One of the difficulties that humans have been encountering on space flights has been
the effects of microgravity. Microgravity is a condition in which people or objects appear to be weightless; its effects can be seen when astronauts and objects float in space. The prefix “micro” means “very small”, so the term “zero gravity” can be misleading. Microgravity affects humans by weakening their bones, muscles, cardiovascular system and more. Because we are used to Earth’s gravity, our bodies evolved to withstand its “pull” and rely on it for many functions, so living in an almost gravity-free environment is not natural for us. Hence, due to the weakening of the body on a deep-space mission, along with the cosmic radiation, a spaceship without dedicated protection quickly becomes a lethal environment to spend time in.

Artist’s impression of the Deep Space Gateway. Credits: NASA

The space station will be divided into three parts, each with its unique purpose.
The first will be a power and propulsion bus, the second a habitat module, and the third a logistics module. The power bus will generate electricity, providing the station with the power to change its orbit around the Moon when necessary. The propulsion bus will generate the high-power electric propulsion needed for maneuvers, which will also be used to keep the orbit around the Moon stable. The habitat module will be made for humans: it will have airlocks with an Environmental Control and Life Support System providing astronauts with natural living conditions, so they will not need to wear heavy space suits. The airlock will be a separate module which can be detached and sent on to a subsequent mission, and will also be used for spacewalks. The last part of the station will be the logistics module, which will contain essential equipment for experiments and scientific research. On the ISS, the logistics module is used for delivering cargo to Earth or to the station itself; in addition to this, the DSG will probably have a robotic arm to secure incoming cargo and make rendezvous maneuvers successful.
Tech lab
The ISS is constantly inhabited. However, unlike the ISS, under current plans the DSG would be visited once a year for 30 to 60 days by a four-member crew. It will be built to operate even when no astronauts are present, conducting various experiments, especially when it is closer to the Moon. What is interesting about the DSG is that it will be placed at special points in space where the gravitational attraction of the Moon and the Earth balance out. This will allow it to follow a “near rectilinear halo orbit” (NRHO), which would let the DSG move quickly when near the Moon and repeatedly sweep low over one pole. These orbits are only quasi-stable, so maintaining them requires adjustments, such as extra propulsion at critical moments. Solar sails have been suggested as the best solution, but the final decision awaits the agreement of all project participants. This will be quite a demanding mission. The technology used to build such a gateway is considerably more advanced than that used for the ISS. Making the suits lighter and more resistant to cosmic rays will be one of the most difficult tasks. New radiation-shielding materials for the spacecraft that will carry the crew and parts of the lunar space station to the Moon’s orbit are also being tested, and some will be used in the construction of both the spacecraft and the station. Decades of research and experiments have provided us with knowledge essential to taking the giant leap towards space exploration. Now that the leap is made, we will try to continue exploring deep space, and hold the gate open.
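The balance point the article describes can be estimated with the classic Hill-sphere approximation, which gives the rough distance of the Earth–Moon L1/L2 points from the Moon. Below is a minimal sketch in Python; the masses and the mean Earth–Moon distance are rounded textbook values, and the actual NRHO is of course designed with far more detailed three-body models:

```python
# Estimate the distance from the Moon to the Earth-Moon L1/L2 points
# using the Hill-sphere approximation: r ~ R * (m_moon / (3 * m_earth))**(1/3)
M_EARTH = 5.972e24      # kg
M_MOON = 7.342e22       # kg
R_EARTH_MOON = 384_400  # km, mean Earth-Moon distance

r_hill = R_EARTH_MOON * (M_MOON / (3 * M_EARTH)) ** (1 / 3)
print(f"L1/L2 lie roughly {r_hill:,.0f} km from the Moon")
```

The ~60,000 km figure is only an order-of-magnitude check; mission designers use full three-body dynamics to pick the exact halo orbit.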
Robotic arm on the ISS secures the incoming SpaceX Dragon capsule delivering supplies. Credits: NASA
The Deep Space Gateway holds high hopes as a successful phase of deep-space exploration; only time, and the unity of the people behind it, will tell.
International Space Station. Credits: ESA
40th edition
Carbon Nanotube-Based Gas Sensors and Electronic Noses
Author: József Mák
CARBON NANOTUBES
Carbon nanotubes (CNTs) are sheets of graphene rolled up into a cylinder. The two main types are single-walled (SW) CNTs, formed by the two sides of a graphene sheet bonding together into a cylinder, and multi-walled (MW) CNTs, which are essentially two or more SWCNTs stacked inside each other (for an illustration see Fig. 1). CNTs have many interesting properties, such as excellent thermal and electrical conductivity [2] and extreme mechanical strength, [3] that make them useful not only for the electronics industry as a conducting material with low thermal dissipation but also as components of highly durable composite materials. [4] Another interesting feature is that the angle at which the graphene sheet rolls up determines the bandgap of a tube, which can range from 0 to 2 eV, meaning that a CNT can be metallic or semiconducting. [5] Although CNT production was initially rather expensive, the cost has come down to $2 per gram, [6] allowing for possible future commercialization of CNT-based products. However, current production processes yield a statistical mixture of semiconducting and metallic CNTs, and their separation remains a challenge that considerably increases their price. Because of this, a lot of effort has been put into scalable and simple separation techniques, but industry-grade separation is still an unsolved problem and the main bottleneck for the commercialization of CNT-based electronics, including most CNT-based sensors. [7], [8], [9]
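The chirality dependence mentioned above follows a well-known rule of thumb from tight-binding theory: a nanotube with chiral indices (n, m) is metallic (near-zero bandgap) when n − m is divisible by 3, and semiconducting otherwise. A quick sketch:

```python
def is_metallic(n, m):
    """Rule of thumb: an (n, m) CNT is metallic when (n - m) % 3 == 0."""
    return (n - m) % 3 == 0

for n, m in [(10, 10), (9, 0), (10, 0), (8, 4)]:
    kind = "metallic" if is_metallic(n, m) else "semiconducting"
    print(f"({n},{m}) nanotube: {kind}")
```

Since roughly one third of all (n, m) pairs satisfy the condition, as-grown batches contain about one metallic tube per two semiconducting ones, which is exactly the statistical mixture and separation bottleneck discussed above. (Small curvature-induced gaps in some nominally metallic tubes are ignored in this rule of thumb.)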
CARBON NANOTUBE-BASED GAS SENSORS
CNTs have a surface area as high as 1600 m² g⁻¹ [10] and are thus good adsorbers of gases and vapors from the environment. In a semiconducting CNT, adsorbing an electron donor (e.g. NH3) or an acceptor (e.g. O2) shifts the Fermi level up or down and, depending on whether the CNT is n- or p-type, either decreases or increases its conductivity. The two most common setups are the chemiresistor and the CHEMFET, where usually a random CNT network is solution-deposited by spin, dip or spray coating, drop-casting or inkjet/screen printing [11] onto an interdigitated electrode structure (see Fig. 2) that was either thermally evaporated onto a substrate or chemically etched into a PCB, in both cases using standard photolithography. This also makes the technology potentially cheap: it is enough to disperse only a few μg of the highly adsorbing CNTs in a solvent, [12] drop it onto a surface and let the solvent evaporate, which eliminates the need for high temperatures and vacuum, in contrast to traditional silicon technology. Because of the high surface-to-mass ratio of the CNTs, it is possible to produce highly sensitive sensors with an effective sensing area on the order of a few cm².
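As a rough illustration of how such a chemiresistor responds, the fractional resistance change is often described with a Langmuir-type adsorption isotherm: the response grows with gas concentration and saturates once the adsorption sites fill up. The sketch below uses invented parameter values, purely to show the characteristic shape:

```python
# Toy Langmuir-isotherm model of a CNT chemiresistor: the fractional
# resistance change saturates as the adsorption sites fill up.
def delta_r_over_r0(concentration_ppm, k=0.05, max_response=0.30):
    """Fractional resistance change at a given gas concentration.
    k (1/ppm) and max_response are hypothetical fit parameters."""
    theta = k * concentration_ppm / (1 + k * concentration_ppm)  # coverage
    return max_response * theta

for c in (1, 10, 100, 1000):
    print(f"{c:>5} ppm -> dR/R0 = {delta_r_over_r0(c):.3f}")
```

Real sensors also show drift, hysteresis and temperature dependence, so calibration curves are fitted per device; the isotherm is only the leading-order picture.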
CNT-based sensors are often extremely sensitive, down to the ppm or even ppb level, [13], [14], [15] but they face the problem
that they usually react to a wide range of gases; that is, they are not selective. [16] This can be circumvented by a manifold of functionalization methods, including coating the surface of the CNTs [17] or mixing metal-oxide nanowires [18] or metallic nanoparticles [19] into the CNT network, among others. Since such sensors still react to several substances, the idea arose to mimic the mammalian olfactory system, which contains many olfactory cells, each reacting to a range of stimuli, with the brain differentiating substances based on the pattern of activated cells. This led to the electronic nose, [20], [21] where an array of differently functionalized chemiresistors or CHEMFETs is used in one sensor, and PCA or some machine-learning algorithm recognizes different substances from the generated response pattern. Existing realizations include, but are not limited to, an inkjet-printed solution for volatile organic compounds (VOCs) that are markers of possibly pathological biological function in humans [22] or that are important in industry, [23] a sensor array for CO detection, a sensor for explosive materials [24] and an e-nose that differentiates between CO, CO2, NH3 and ethanol. [25] To close the main discussion of this article, two devices are presented in slightly more detail that represent well the state of the art in CNT-based gas sensors. Both sensors discussed here are relatively small, with a surface area on the order of a few cm².
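The electronic-nose principle — classify by the response pattern of the whole array rather than by any single sensor — can be caricatured in a few lines. The three-sensor "signatures" below are entirely made up, and a simple nearest-centroid rule stands in for the PCA or machine-learning step:

```python
import random

# Toy electronic nose: three differently functionalized sensors; each gas
# produces a characteristic (invented) response pattern, and classification
# mimics the brain reading the "pattern of activated cells".
random.seed(0)

SIGNATURES = {          # hypothetical mean dR/R0 per sensor
    "NH3":     (0.30, 0.05, 0.10),
    "CO":      (0.05, 0.25, 0.08),
    "ethanol": (0.12, 0.10, 0.28),
}

def measure(gas, noise=0.02):
    """Simulate one noisy reading of the 3-sensor array."""
    return [mu + random.gauss(0, noise) for mu in SIGNATURES[gas]]

def classify(pattern):
    """Nearest-centroid rule over the known signatures."""
    def dist(sig):
        return sum((p - s) ** 2 for p, s in zip(pattern, sig))
    return min(SIGNATURES, key=lambda g: dist(SIGNATURES[g]))

correct = sum(classify(measure(g)) == g for g in SIGNATURES for _ in range(100))
print(f"{correct}/300 readings classified correctly")
```

Real e-noses must cope with drift, humidity and overlapping signatures, which is why PCA and more robust classifiers are used in practice.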
REFERENCES [1] Image downloaded in Dec. 2017. [Online]. Available: https://upload.wikimedia.org/wikipedia/commons/b/bc/Multi-walled Carbon Nanotube.png [2] C. N. R. Rao, B. Satishkumar, A. Govindaraj, and M. Nath, “Nanotubes,” ChemPhysChem, vol. 2, no. 2, pp. 78–105, 2001. [3] L. Liang, C. Gao, G. Chen, and C.-Y. Guo, “Large-area, stretchable, super flexible and mechanically stable thermoelectric films of polymer/carbon nanotube composites,” Journal of Materials Chemistry C, vol. 4, no. 3, pp. 526–532, 2016. [4] J. N. Coleman, U. Khan, W. J. Blau, and Y. K. Gun’ko, “Small but strong: a review of the mechanical properties of carbon nanotube–polymer composites,” Carbon, vol. 44, no. 9, pp. 1624–1652, 2006. [5] J. W. Wilder, L. C. Venema, A. G. Rinzler, R. E. Smalley, and C. Dekker, “Electronic structure of atomically resolved carbon nanotubes,” Nature, vol. 391, no. 6662, pp. 59–62, 1998. [6] Website of a CNT manufacturer; last visited Dec. 2017. [Online]. Available: https://ocsial.com/en/material-solutions/tuball/ [7] W. G. Reis, R. T. Weitz, M. Kettner, A. Kraus, M. G. Schwab, Ž. Tomović, R. Krupke, and J. Mikhael, “Highly efficient and scalable separation of semiconducting carbon nanotubes via weak field centrifugation,” Scientific reports, vol. 6, p. 26259, 2016. [8] M. S. Tang, E.-P. Ng, J. C. Juan, C. W. Ooi, T. C. Ling, K. L. Woon, and P. L. Show, “Metallic and semiconducting carbon nanotubes separation using an aqueous two-phase separation technique: a review,” Nanotechnology, vol. 27, no. 33, p. 332002, 2016. [9] J.-D. R. Rocha, R. F. Ashour, L. M. Breindel, R. C. Capasse, and B. Zeghum, “Single-walled carbon nanotube separations using simple metal ionic
In [26], a Cl2 sensor was created with sensitivity down to the ppb level. Cl2 is a possible environmental pollutant extensively used in industry that can cause intoxication at a leakage concentration of 500 ppb, so it is vital to monitor its presence. The group in this study non-covalently bound hexadecafluorinated copper phthalocyanine to p-type SWCNTs. Their X-ray photoelectron spectroscopy studies revealed that the strongly oxidizing electron acceptor Cl2 withdrew electrons from the SWCNT through the metal ions in the covering functional layer, thus increasing hole concentration and, along with it, conductivity. The functionalized CNTs were deposited by drop-casting onto a glass substrate with gold electrodes, and the resulting sensors showed a well-detectable response to a 100 ppb Cl2 concentration. (Commercial chlorine sensors have a detection limit of the same order of magnitude; see e.g. [27].) Additionally, they showed a highly reduced response towards the other oxidizing agents NO and NO2 and the reducing agent NH3, demonstrating reasonable selectivity. However, optimal performance and reversible operation were measured at 150°C, making a built-in heater necessary in a future commercial product.
VOCs such as acetone, cyclohexane and ethanol are important biomarkers of human health, secreted through the skin or into feces, urine, saliva, breath, etc. In [28] a group built an electronic nose from an array of three sensors, each using MWCNTs with a different functionalization layer sensitive to different substances. An interesting finding was that one of the sensors could detect acetone down to 6 ppm, whereas another could detect cyclohexane down to 1.5 ppm. Acetone is present in the breath of untreated diabetic patients, whereas cyclohexane is a marker of certain types of cancer. This detection limit competes with photoionization detectors for VOCs, but at a reduced size, lower energy consumption and possibly much lower manufacturing cost. [29] The array of three sensors was able to discriminate between acetone, methanol, butanol, ethanol, propanol, toluene, pentane and cyclohexane, and could preserve its selectivity in the presence of 50% water vapor, which is to be expected in a human volatome, although the sensitivity decreased, as water also competes to be adsorbed onto the sensor surface. The sensing mechanism can be explained, among other factors, by the capability of the functional groups of the molecules attached to the CNTs to bind target molecules. For a detailed discussion of the sensing mechanism, see the reference article.
OUTLOOK
In this article, the potential of CNT-based technology for producing highly sensitive, lightweight and small-sized gas sensors was presented; such sensors are competitive with or even outperform their commercial counterparts. Their production protocol also allows for potentially cheap manufacturing by simply dissolving the CNTs in a detergent and using spray coating or some other solution-coating technique, whereas the price of the purified, high-quality raw material can be compensated for by the few μg of nanotubes needed per sensor. Taking all this into account, despite the challenges CNT gas sensors still face, such as selectivity, they remain highly promising candidates for the easily manufacturable, high-sensitivity and high-selectivity gas detectors of the future. For more information about CNT-based gas sensors and electronic noses, see the extensive reviews in [11], [12]. CNTs also enable other long-envisioned technological advances such as flexible and/or wearable electronics, thanks to their advantageous mechanical properties; see e.g. [30] and Fig. 3 for an illustration. Another promising field of application is transparent electrodes, e.g. for organic solar cells [31] or touch screens, [32] to replace the rigid and relatively expensive indium tin oxide electrodes currently in use. These possible advancements should also raise the interest of the electrical engineering community in the development of CNT electronics, and the electrical engineering students reading this article are encouraged to delve into this research area for their B.Sc./M.Sc./Ph.D. theses.
Fig. 3. A CNT-based sensor for detecting explosives and nerve gases, fabricated on a flexible substrate.
Fig. 1. An MWCNT consisting of three SWCNTs stacked inside each other. Image taken from [1].
salt additives in gel-based chromatography,” ECS Journal of Solid State Science and Technology, vol. 6, no. 6, pp. M3148–M3154, 2017. [10] M. Cinke, J. Li, B. Chen, A. Cassell, L. Delzeit, J. Han, and M. Meyyappan, “Pore structure of raw and purified HiPco single-walled carbon nanotubes,” Chemical Physics Letters, vol. 365, no. 1, pp. 69–74, 2002. [11] K. Chen, W. Gao, S. Emaminejad, D. Kiriya, H. Ota, H. Y. Y. Nyein, K. Takei, and A. Javey, “Printed carbon nanotube electronics and sensor systems,” Advanced Materials, vol. 28, no. 22, pp. 4397–4414, 2016. [12] M. Meyyappan, “Carbon nanotube-based chemical sensors,” Small, vol. 12, no. 16, pp. 2118–2129, 2016. [13] J. Li, Y. Lu, Q. Ye, M. Cinke, J. Han, and M. Meyyappan, “Carbon nanotube sensors for gas and organic vapor detection,” Nano letters, vol. 3, no. 7, pp. 929–933, 2003. [14] P. G. Collins, K. Bradley, M. Ishigami, and A. Zettl, “Extreme oxygen sensitivity of electronic properties of carbon nanotubes,” Science, vol. 287, no. 5459, pp. 1801–1804, 2000. [15] L. Valentini, I. Armentano, J. Kenny, C. Cantalini, L. Lozzi, and S. Santucci, “Sensors for sub-ppm NO2 gas detection based on carbon nanotube thin films,” Applied Physics Letters, vol. 82, no. 6, pp. 961–963, 2003. [16] D. R. Kauffman and A. Star, “Carbon nanotube gas and vapor sensors,” Angewandte Chemie International Edition, vol. 47, no. 35, pp. 6550–6570, 2008.
Fig. 2. A CNT-based sensor. A Cr/Au interdigitated electrode structure was evaporated onto an alumina substrate, followed by the evaporation of a CNT-SnO2 sensing layer. The 300 nm thick functional layer on top is highly transparent and barely visible, only giving a darker shade to the substrate.
[17] P. Qi, O. Vermesh, M. Grecu, A. Javey, Q. Wang, H. Dai, S. Peng, and K. Cho, “Toward large arrays of multiplex functionalized carbon nanotube sensors for highly sensitive and selective molecular detection,” Nano letters, vol. 3, no. 3, pp. 347–351, 2003. [18] O. Lupan, F. Schütt, V. Postica, D. Smazna, Y. K. Mishra, and R. Adelung, “Sensing performances of pure and hybridized carbon nanotubes-ZnO nanowire networks: A detailed study,” Scientific Reports, vol. 7, no. 1, p. 14715, 2017. [19] M. M. Rana, D. S. Ibrahim, M. Mohd Asyraf, S. Jarin, and A. Tomal, “A review on recent advances of CNTs as gas sensors,” Sensor Review, vol. 37, no. 2, pp. 127–136, 2017. [20] P. Pelosi and K. Persaud, “Gas sensors: towards an artificial nose,” in Sensors and sensory systems for advanced robots. Springer, 1988, pp. 361–381. [21] K. C. Persaud, “Towards bionic noses,” Sensor Review, vol. 37, no. 2, pp. 165–171, 2017. [22] P. Lorwongtragool, E. Sowade, N. Watthanawisuth, R. R. Baumann, and T. Kerdcharoen, “A novel wearable electronic nose for healthcare based on flexible printed chemical sensor array,” Sensors, vol. 14, no. 10, pp. 19700–19712, 2014. [23] S. F. Liu, L. C. Moh, and T. M. Swager, “Single-walled carbon nanotube–metalloporphyrin chemiresistive gas sensor arrays for volatile organic compounds,” Chemistry of Materials, vol. 27, no. 10, pp. 3560–3563, 2015. [24] S. Kaur, A. Kumar, J. K. Rajput, P. Arora, and H. Singh, “SnO2–glycine functionalized carbon nanotubes based electronic nose for detection of explosive materials,” Sensor Letters, vol. 14, no. 7, pp. 733–739, 2016. [25] A. Abdelhalim, M. Winkler, F. Loghin, C. Zeiser, P. Lugli, and A. Abdellah, “Highly sensitive and selective carbon nanotube-based gas sensor arrays functionalized with different metallic nanoparticles,” Sensors and Actuators B: Chemical, vol. 220, pp. 1288–1296, 2015. [26] A. K. Sharma, A. Mahajan, R. Saini, R.
Bedi, S. Kumar, A. Debnath, and D. Aswal, “Reversible and fast responding ppb level Cl2 sensor based on noncovalent modified carbon nanotubes with hexadecafluorinated copper phthalocyanine,” Sensors and Actuators B: Chemical, vol. 255, pp. 87–99, 2018. [27] Online documentation of a commercial Cl2 sensor; website visited in Dec. 2017. [Online]. Available: https://www.draeger.com/Products/Content/x-am-5600-pi-9041101-en-us.pdf [28] S. Nag, A. Sachan, M. Castro, V. Choudhary, and J. Feller, “Spray layer-by-layer assembly of POSS functionalized CNT quantum chemo-resistive sensors with tuneable selectivity and ppm resolution to VOC biomarkers,” Sensors and Actuators B: Chemical, vol. 222, pp. 362–373, 2016. [29] B. Szulczyński and J. Gebicki, “Currently commercially available chemical sensors employed for detection of volatile organic compounds in outdoor and indoor air,” Environments, vol. 4, no. 1, p. 21, 2017. [30] W. Jayathilaka, A. Chinnappan, and S. Ramakrishna, “A review of properties influencing the conductivity of CNT/Cu composites and their applications in wearable/flexible electronics,” Journal of Materials Chemistry C, vol. 5, no. 36, pp. 9209–9237, 2017. [31] I. Jeon, C. Delacou, A. Kaskela, E. I. Kauppinen, S. Maruyama, and Y. Matsuo, “Metal-electrode-free window-like organic solar cells with p-doped carbon nanotube thin-film electrodes,” Scientific reports, vol. 6, p. 31348, 2016. [32] H.-C. Chu, Y.-C. Chang, Y. Lin, S.-H. Chang, W.-C. Chang, G.-A. Li, and H.-Y. Tuan, “Spray-deposited large-area copper nanowire transparent conductive electrodes and their uses for touch screen applications,” ACS applied materials & interfaces, vol. 8, no. 20, pp. 13009–13017, 2016.
Top 10 Applications for Smartphones
Author: Tarik Omerćehajić
1. 1Password 1Password is a capable password manager for storing your passwords and making sign-in easier across multiple devices. The app also comes with a password generator for adding an extra layer of security. If you have trouble remembering all your passwords, this is the perfect app for you.
2. Mint If you are one of those people who use their credit card too often and lose track of their spending, you just might love this app. You can track all your finances in one place from multiple sources, and get bill reminders and saving tips. Perfect for your wallet, and it’s free :)
3. Waze Navigation If you live in a large city and struggle with traffic, we might have a solution for you. Waze is the perfect way to find the optimal route home and avoid traffic problems. You can also become part of a community that reports traffic problems and helps other people get home faster and more safely.
4. Snapseed Need to retouch a photo on the go, with no laptop near you? Don’t worry: Snapseed is a powerful app with plenty of options for making your photos perfect. You can crop images, add various filters, adjust exposure and tweak many other settings for the best result.
5. Evernote One of the best productivity apps you can download. Add tasks, organize your day, take notes and share them with your friends, and much more, all with one app on multiple devices. Evernote is available on every platform, from your smartwatch to your laptop, and will definitely help you stay on track.
6. Tiny Scanner Turn your mobile phone into a powerful scanner and digitize your documents in no time. Take a photo, wait for the app to process it, check it and save it. Simple as that. You’ll get your file as an image or PDF, ready for further use.
7. Google Keep If you are a Google fan, then you have probably heard about Google Keep. It is the perfect way to store all your text, photo or audio notes, make checklists, sketch quick drawings and set reminders, all within one app. And the best part is, you can access all these files on any other web-connected device.
8. Medium If you enjoy reading blogs, Medium is a must-have app for you. Explore the latest tech posts, inspirational stories and productivity tips from the best writers around the globe. You can also submit your own story and share it with more than a million active users.
9. Duolingo Duolingo is a perfect app for learning new languages on the go. Its ad-free interface and rich content will quickly help you learn new words and phrases in Spanish, French, German and other languages.
10. TED TED Talks offers videos on various topics within the research and practice of science and culture, often through storytelling. Using the TED app on your mobile devices, you can watch more than 2000 different talks, listen to the TED podcast, or download video or audio for offline playback.
The world of fiber optic networks
Author: Alexandru Ene
The story of fiber-optic networks began in 1880, when Alexander Graham Bell and one of his assistants created the Photophone, which transmitted speech on a modulated beam of light. Modern fiber-optic communication, developed a few decades ago, was built to carry huge amounts of data traffic within a country, and the concept soon revolutionized the whole telecommunications industry into the form we see today. Consider landline telephones, which use a wired cable to carry voice into a socket, from which the signal is sent by another cable to the local telephone exchange, or cell phones, which work by sending and receiving information over invisible radio waves. Fiber-optic cables work in another manner: they send information coded in a beam of light down a glass or plastic pipe. Within each cable there is at least one strand of glass. The center of each strand is called the core, and each core is surrounded by a layer of glass called the cladding. Data are transmitted in the form of light particles, or photons. The glass core and the cladding have different refractive indices, so incoming light is bent at a certain angle. The light signals reflect off the boundary between the core and the cladding, bouncing back and forth in a process called total internal reflection.
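Total internal reflection happens for rays hitting the core-cladding boundary at more than the critical angle, θc = arcsin(n_cladding / n_core). A quick check in Python with typical silica values (the exact indices below are illustrative):

```python
import math

# Critical angle for total internal reflection at the core/cladding boundary.
n_core = 1.48      # germania-doped silica core (illustrative value)
n_cladding = 1.46  # pure silica cladding (illustrative value)

theta_c = math.degrees(math.asin(n_cladding / n_core))
print(f"critical angle = {theta_c:.1f} degrees")
```

Rays striking the boundary at an angle larger than this (measured from the normal) stay confined to the core, which is exactly what keeps the light in the fiber over long distances.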
Let’s mention some interesting aspects of the types of fiber-optic cables. The first cabling design, also the simplest, is called single-mode optical fiber. Its core is only 5–10 microns in diameter. Within a single-mode fiber, all signals travel down the middle without bouncing off the edges. Telephone, Internet and cable-TV signals are the best-known traffic carried over this type of optical fiber. Many single-mode fibers are enclosed in a large bundle, which can send information over 60 miles. The second type of optical fiber is called multi-mode, and its main use is to link computer networks together. Although the core of a multi-mode fiber is around ten times wider than that of a single-mode fiber, it has a big disadvantage: it can send information over short distances only. The light beams can travel through the core in multiple different modes. Next, let us look at the advantages of fiber-optic cables. First of all, optical fibers have high capacity: their bandwidth easily surpasses that of a copper cable of similar thickness, with standard fiber-optic links carrying around 10–100 Gbps. Then, signals need far less amplification, since light can travel long distances within a fiber cable without losing much of its strength. The last significant advantage is that fiber is less affected by interference. Electromagnetic interference perturbs network cables, which are therefore obliged to use shielding, and even that is not enough when multiple cables are bundled tightly together in close proximity. These issues are mostly avoided thanks to the physical properties of glass and fiber cables. Even though some people think this technology does not have many applications, fiber optics is extremely useful, since it allows us to deliver different types of information using light beams. We rarely notice how common optical fiber networks are, because they work far beneath us or within the walls. As mentioned before, computer networks use multi-mode optical fiber for its ability to transmit data at high bandwidth. Broadcasting and electronics also use fiber optics to gain better connections and performance. Optical fibers are implemented in military equipment and the space industry for communication, signal transfer and temperature sensing. They are also used in various medical instruments, for biomedical sensors, surgical microscopy and light therapy. All these technologies use optical fibers in such a way that they remain invisible to us.
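The single-mode/multi-mode distinction described earlier can be made quantitative with the normalized frequency V = (2πa/λ)·NA, where a is the core radius and NA = √(n_core² − n_clad²) is the numerical aperture; a step-index fiber carries only one mode when V < 2.405. A sketch with illustrative values (the indices and dimensions are typical textbook numbers, not a specific product):

```python
import math

def v_number(core_radius_um, wavelength_um, n_core, n_clad):
    """Normalized frequency; single-mode operation requires V < 2.405."""
    na = math.sqrt(n_core**2 - n_clad**2)  # numerical aperture
    return 2 * math.pi * core_radius_um / wavelength_um * na

# Illustrative: 9 um core telecom fiber at 1550 nm vs. a 50 um multimode core
v_single = v_number(4.5, 1.55, 1.448, 1.444)
v_multi = v_number(25.0, 1.55, 1.48, 1.46)
print(f"single-mode fiber: V = {v_single:.2f} (below 2.405)")
print(f"multi-mode fiber:  V = {v_multi:.1f} (many modes)")
```

The large V of the wide core is why multi-mode fibers suffer modal dispersion and are limited to short links, while the thin single-mode core carries signals over tens of miles.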
Microfluidic Sensing Strategies for Biological and Chemical Point-of-Care Testing
Author: József Mák
MOTIVATION: THE NEED FOR INTEGRATED BIOLOGICAL AND CHEMICAL SENSORS
According to present practice, biological and chemical testing is performed in fixed-location laboratories, which allows for high-precision measurements. Performing these lengthy and expensive tests far from the field, however, might be infeasible in developing countries, e.g. due to a lack of sufficient funds or large distances. Even in highly developed locations, it can be desirable in acute medical cases to quickly and reliably test biological samples at the patient’s bedside. [1], [2] Such testing is best performed by highly reliable and cheap throw-away sensor chips, such as a device combining fluorescent technology with smartphones for household prognosis of heart failure, [3] and a patent for an integrated microfluidic lab-on-a-chip device for cell analysis, immunological response and glucose monitoring has recently been granted. [4] Not only medicine would benefit from cheap point-of-care sensors; they could also be useful in other areas, including food testing, [5] environmental monitoring, [6] biomedical research, [7] and other areas of biochemical process monitoring, [8] such as winemaking. [9]
Although other solutions have also been proposed, [10], [11] microfluidic platforms are possibly 3D-printable, use mainly cheap materials, and allow many analysis types to be integrated on a single chip; [12], [13] these devices thus present a true advance towards cheap throw-away sensors. Developing the often electricity-based sensing and information-processing parts of these devices might be of interest to the electrical engineering community, and this article hopes to raise that interest by giving a peek into this exciting technological field.
SENSING STRATEGIES FOR BIOLOGICAL AND CHEMICAL POINT-OF-CARE TESTING
Next, we are going to look at microfluidic sensors through three specific examples; if you want more, see the excellent review [13]. In [14], an electrolytic cell was separated with an anion-exchange membrane (AEM), and the membrane was covered with single-stranded DNA (ssDNA) to which ssDNA from pathogens can specifically bind. This changes the way current flows through the cell, and in this way the platform was capable of specific and rapid detection of oral cancer, Dengue virus, Brucella bacteria and E. coli markers. The ion-exchange membranes used are industry standards in electrodialysis-based purification processes and as parts of electrochemical fuel cells, [15], [16] are relatively cheap, and are accessible in large quantities. Next up are organic semiconductors (semiconducting plastic materials), which have already been used in organic solar cells, [17] light-emitting diodes, [18] thin-film transistors (TFTs) for wearable electronics, [19] and flexible displays. [20] Organic semiconducting devices are easily made, e.g. by loading the dissolved material into a simple printer that prints the desired electronic component (so-called inkjet printing), whereas their mechanical flexibility and durability [21] make them appropriate materials for use in medical sensors. In [22], a group fabricated an organic TFT enclosed in a microfluidic channel, covered it with collagen so that living cells were able to adhere to the transistor, and monitored cell growth and vitality by passing the transistor-controlling field through the cell layer in between. This platform enables e.g. real-time in vitro testing of the effect of drugs on cell cultures, which is important in medical research. Because only transparent materials were used for device fabrication, the cell culture can also be monitored with a microscope. Finally, let us take a look at nanochannels and nanopores, which are nanometer-wide openings in solid structures. One of their most interesting applications is as new-generation sequencing devices for DNA and proteins, as they allow for rapid, label-free, amplification-free and possibly
REFERENCES [1] R. Sista, Z. Hua, P. Thwar, A. Sudarsan, V. Srinivasan, A. Eckhardt, M. Pollack, and V. Pamula, Lab on a Chip 8, 2091 (2008). [2] A. K. Yetisen, M. S. Akram, and C. R. Lowe, Lab on a Chip 13, 2210 (2013). [3] M. You, M. Lin, Y. Gong, S. Wang, A. Li, L. Ji, H. Zhao, K. Ling, T. Wen, Y. Huang, et al., ACS nano (2017). [4] N. Patel, “Dynamic lab on a chip based point-of-care device for analysis of pluripotent stem cells, tumor cells, drug metabolites, immunological response, glucose monitoring, hospital based infectious diseases, and drone delivery point-of-care systems,” (2016), US Patent App. 15/009,793. [5] Y. Yun, M. Pan, G. Fang, Y. Yang, T. Guo, J. Deng, B. Liu, and S. Wang, Sensors and Actuators B: Chemical 238, 32 (2017). [6] N. A. Meredith, C. Quinn, D. M. Cate, T. H. Reilly, J. Volckens, and C. S. Henry, Analyst 141, 1874 (2016). [7] E. K. Sackmann, A. L. Fulton, and D. J. Beebe, Nature 507, 181 (2014). [8] S. L. Okon and N. J. Ronkainen, in Electrochemical Sensors Technology (InTech, 2017). [9] D. Albanese, C. Liguori, V. Paciello, and A. Pietrosanto, IEEE Transactions on Instrumentation and Measurement 60, 1909 (2011). [10] F. Gorjikhah, S. Davaran, R. Salehi, M. Bakhtiari, A. Hasanzadeh, Y. Panahi, M. Emamverdy, and A. Akbarzadeh, Artificial cells, nanomedicine, and biotechnology 44, 1609 (2016).
low-cost determination of the order of amino acids in a protein or of nucleotides in DNA [23], [24]. The Oxford MinION nanopore sequencer [25], [26], [27] is a commercial device that uses an artificial insulating membrane penetrated by a protein nanopore (a protein molecule with a channel running through it) that also has a motor part. The motor protein unwinds the DNA strands and pushes one of them through the pore, and as the bases of the DNA translocate one by one, they block an ionic current induced by an external electric field. From the changes of the ionic current through the pore, the sequence of DNA residues can be determined. Since its appearance, the MinION sequencer has been tested on the International Space Station [28] and has been used to sequence the human genome [29]. Although more accurate platforms exist, the Oxford MinION achieves what it can at a price accessible to basically any laboratory in the world, in a portable device the size of a mobile phone that connects to a laptop over USB [27]. For other devices in nanopore-based DNA sequencing, see [30], and for recent advances in nanopore-based protein sequencing, see e.g. [31], [32].
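The current-blockade principle can be caricatured in a few lines of Python: assume each base blocks the pore current by its own characteristic amount and decode a noisy trace by nearest-level lookup. The reference levels below are invented for the illustration; real basecallers model k-mer-dependent currents with hidden Markov models or neural networks.

```python
# Toy nanopore base calling: each base is assumed to block the pore current
# by a characteristic amount, and a trace is decoded by picking the nearest
# reference level for each segment. The levels (in picoamperes) are made up.

REFERENCE_LEVELS = {"A": 80.0, "C": 95.0, "G": 70.0, "T": 105.0}  # hypothetical

def call_bases(segment_means):
    """Map each segment's mean current (pA) to the closest reference base."""
    bases = []
    for level in segment_means:
        base = min(REFERENCE_LEVELS, key=lambda b: abs(REFERENCE_LEVELS[b] - level))
        bases.append(base)
    return "".join(bases)

# A noisy trace whose segments were produced by the sequence "GATC":
trace = [71.2, 79.1, 103.8, 96.4]
print(call_bases(trace))  # -> GATC
```

Even this toy version shows why the method is label-free: the identity of the base is read directly from the electrical signal, with no fluorescent tags or amplification step.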
A microfluidic circuit
OUTLOOK
In the previous paragraphs, a selection of micro- and nanofluidic devices has been presented to show that this technology can potentially satisfy the need for portable, rapid and low-cost chemical and biological sensors, because such devices can be fabricated from cheap, possibly solution-processable or 3D-printable materials. These examples are only a few highlights of the field; many more, substantially different solutions exist. The interested reader is kindly invited to explore the reviews cited throughout the text.
[11] A. Tereshchenko, M. Bechelany, R. Viter, V. Khranovskyy, V. Smyntyna, N. Starodub, and R. Yakimova, Sensors and Actuators B: Chemical 229, 664 (2016). [12] A. A. Yazdi, A. Popma, W. Wong, T. Nguyen, Y. Pan, and J. Xu, Microfluidics and Nanofluidics 20, 50 (2016). [13] J. P. Lafleur, A. Jónsson, S. Senkbeil, and J. P. Kutter, Biosensors and Bioelectronics 76, 213 (2016). [14] S. Senapati, Z. Slouka, S. S. Shah, S. K. Behura, Z. Shi, M. S. Stack, D. W. Severson, and H.-C. Chang, Biosensors and Bioelectronics 60, 92 (2014). [15] Y. Tanaka, Ion Exchange Membranes: Fundamentals and Applications (Elsevier Ltd, Amsterdam, Netherlands, 2015). [16] H. Strathmann, Ion-Exchange Membrane Separation Processes, Vol. 9 (Elsevier, 2004). [17] Z. Zheng, O. M. Awartani, B. Gautam, D. Liu, Y. Qin, W. Li, A. Bataller, K. Gundogdu, H. Ade, and J. Hou, Advanced Materials 29 (2017). [18] T.-A. Lin, T. Chatterjee, W.-L. Tsai, W.-K. Lee, M.-J. Wu, M. Jiao, K.-C. Pan, C.-L. Yi, C.-L. Chung, K.-T. Wong, et al., Advanced Materials 28, 6976 (2016). [19] Y. Takeda, K. Hayasaka, R. Shiwaku, K. Yokosawa, T. Shiba, M. Mamada, D. Kumaki, K. Fukuda, and S. Tokito, Scientific Reports 6, 25714 (2016). [20] M. Mizukami, S.-I. Cho, K. Watanabe, M. Abiko, Y. Suzuri, S. Tokito, and J. Kido, IEEE Electron Device Letters 39, 39 (2018). [21] T. Someya, S. Bauer, and M. Kaltenbrunner, MRS Bulletin 42, 124 (2017). [22] V. F. Curto, B. Marchiori, A. Hama, A.-M. Pappa, M. P. Ferro, M. Braendlein, J. Rivnay, M. Fiocchi, G. G. Malliaras, M. Ramuz, et al., Microsystems & Nanoengineering 3, 17028 (2017). [23] W. Chen, G.-C. Liu, J. Ouyang, M.-J. Gao, B. Liu, and Y.-D. Zhao, Science China Chemistry, 1 (2017). [24] J. Zhou, Y. Wang, L. D. Menard, S. Panyukov, M. Rubinstein, and J. M. Ramsey, Nature Communications 8, 807 (2017). [25] Oxford MinION official website, https://nanoporetech.com/, accessed 2018-02-04. [26] M. Jain, H. E. Olsen, B. Paten, and M. Akeson, Genome Biology 17, 239 (2016). [27] R. M. Leggett and M. D. Clark, Journal of Experimental Botany 68, 5419 (2017). [28] S. L. Castro-Wallace, C. Y. Chiu, K. K. John, S. E. Stahl, K. H. Rubins, A. B. McIntyre, J. P. Dworkin, M. L. Lupisella, D. J. Smith, D. J. Botkin, et al., Scientific Reports 7, 18022 (2017). [29] M. Jain, S. Koren, K. H. Miga, J. Quick, A. C. Rand, T. A. Sasani, J. R. Tyson, A. D. Beggs, A. T. Dilthey, I. T. Fiddes, et al., Nature Biotechnology (2018). [30] S. Howorka and Z. Siwy, ACS Nano 10, 9768 (2016). [31] G. Huang, K. Willems, M. Soskine, C. Wloka, and G. Maglia, Nature Communications 8, 935 (2017). [32] Z. Dong, E. Kennedy, M. Hokmabadi, and G. Timp, ACS Nano 11, 5440 (2017).
TRANSHUMANISM - Threat and Aid Author: Károly Gabányi
Transhumanism is a way of thinking, a trend or a movement which, in essence, concerns itself with how the human body and mind can be developed with the help of science and technology. A possible future form of human being whose basic capacities so radically exceed ours that it can no longer obviously be identified as human is called a posthuman. Many transhumanists would like to become posthuman. They want to reach new intellectual heights, be resistant to disease and avoid aging. They want to control their bodies and feelings perfectly and enter new states of consciousness. [1] Natasha Vita-More, the chair of the global transhumanist organization, says transhumanism has become a worldview that represents currents already present in global society. Such lines about transhumanism sound like the beginning of a sci-fi novel: turning people into machines, into hybrids that look not just unnatural but strange. Personally, I don't think these posthumans are what we should become. But there is a handful of positive examples, so let's look at some concrete ones.
Organ replacement Organ shortage is a big problem. 3D printing is an experimental strategy for building bio-artificial body parts and organs; earlobes, nose parts, miniature kidneys, and skin have already been printed. Besides printing, regeneration is also a subject of ongoing research: researchers try to boost healing by injecting stem cells, or by implanting tissues or organs that are bio-engineered in the lab. Implants have been tried successfully in humans, including knee cartilage, skin, blood vessels, urethras, windpipes, and bladders. Researchers say transplanting artificially made complex solid organs such as the heart, liver, lungs, and pancreas may become possible within five years, but this remains a big challenge for regenerative medicine.
Smart skin When a limb is lost and replaced with a prosthesis, or when the skin is damaged and its receptors can no longer transmit information from the environment, smart skin can help. The original idea was to attach smart skin to prosthetic limbs to restore the sense of touch. Smart skin is also soft and elastic and can detect humidity. Because our sense of touch varies across the body, the sensitivity of smart skin varies accordingly, and with a temperature profile it can deliver location-specific tactile and thermal data as well.
Brain implant Scientists have developed a brain implant that works like a pacemaker: it sends electrical pulses to aid the brain when it has trouble storing information, but remains quiet when the brain is functioning well. The implant is still at the experimental stage and its broad applicability is unknown, though it could help treat dementia, traumatic brain injuries and other conditions involving memory damage. Similar implants have already been used to block abnormal bursts of activity in the brain, for example in Parkinson's disease and epilepsy.
There are many people struggling with injuries and illnesses whose lives could be saved or improved by devices like these. Although this technology would mean huge advancements, people should be more conscious about transhumanism and implants. You should think about the vulnerabilities these devices carry, and about the fact that you can become directly dependent on a machine, a vendor, or an organization that can collect data from you and affect you.
Career section
Best Universities
Author: Anastasia Choluriara
With a long history of pioneering higher education, Europe is home to many of the world's oldest and most prestigious universities, as well as some of its most exciting and attractive student cities. Most graduate students at top engineering schools will tell you that they know plenty of people in their program who came from lower-ranked engineering schools, which shows how varied and multifaceted European education is. Also, most professors will tell you that admission committees look at your research, your letters of recommendation, your coursework and your statement of purpose, and that the last thing on their minds is the ranking of your school. Here are five of the best engineering universities in Europe that also host a Local Committee of EESTEC:
Technical University of Munich The Technical University of Munich is one of the most research-focused universities in Germany and Europe. This claim is supported by relevant rankings, such as the DFG-Förderranking (DFG Funding Rankings) or the research rankings of the Centrum für Hochschulentwicklung (CHE – Center for Higher Education Development). TUM was one of the three universities which were successful in obtaining funding in all three funding lines of the Excellence Initiative in 2006.
Delft University of Technology With the project 'Gentle Driving of Piles', TU Delft professor Andrei Metrikine is going to help contractors make the pile installation process as efficient as possible by testing a novel installation method. The method is called 'Gentle Driving of Piles' for its envisaged capability to reduce the driving loads and the emitted (underwater) installation noise, which is harmful to the environment.
ETH Zurich Swiss Federal Institute of Technology People like Einstein and John von Neumann chose to do their studies at ETH. Switzerland has taken the top spot in the Global Innovation Index for several years running, and the world looks towards Switzerland in fields such as AI and nanotechnology. The research programs at Swiss universities are the main reason.
Politecnico di Milano Its Master of Science programme in mechanical engineering trains professionals with solid engineering foundations, a good scientific approach and a broad range of technical and applied skills. The first year broadens the students' knowledge of advanced analysis methods, which in the second year are applied in specialisation subjects and a thesis. The first year is offered at the Milano Bovisa and Lecco campuses with the same study plan. The specialisation tracks cover a wide range of traditional and innovative mechanical engineering subjects. Milano Bovisa campus: Production Systems; Mechatronics and Robotics; Virtual Prototyping; Internal Combustion Engines and Turbomachinery; Advanced Mechanical Design; Advanced Materials and Technology; Ground Vehicles. Lecco campus: Mechanical Systems Design; Industrial Production. Piacenza campus: Machine Tools and Manufacturing Systems.
RWTH Aachen University The European Research Council (ERC) promotes outstanding scientists in order to encourage excellent basic research and visionary projects, and to open up new interdisciplinary fields of knowledge. ERC Grants are considered the European benchmark for top research, and RWTH Aachen hosts a long list of ERC grant holders and their projects.
Alumni story
Author: Milica Stefanović
“Tell me more about your teamwork experience. Do you have any, considering the fact that this is your first employment?”, the HR girl asked. I stopped for a moment and thought: how am I going to explain to her that my whole student life was one huge teamwork story?
“Ok, what about project management, do you have some skills there?” Hah. “Yeap, sure, I do. In fact, I have a lot.” And from that moment on, I started realizing: dear EESTEC, you brought me more than I could have ever imagined. All those daily activities in the corporate environment feel so familiar to me. I know and use things that no one expects me to - “Hey, you are an electrical engineer, how do you know about SWOT analysis?!” And my English language skills? After only four months, they gave me the opportunity to lead a project in which my contact point was from Finland. The guy enters my office, my boss starts stuttering, and there you see a cloud above my head: thank you, EESTEC, once again, for making me this fluent and self-confident. Sometimes I wonder how it would go if I were not an EESTECer. I observe young people from my walk of life and I see them facing things that I got over a long time ago. I see them lacking some basic skills and failing to keep up with the tempo. They miss opportunities because their presentation skills are weak, they usually take more time than I do to deliver analyses from different points of view and, most importantly, they do not feel the team's needs. So then I understand: during all these years I was practicing for real life. The combination of my academic background and non-formal education made me extraordinary. It is not just me, don't get me wrong: EESTEC makes extraordinary people shine. EESTEC takes all of your sweat and tears and turns them into something golden and precious. And once you are done with your student life, you step into something new, yet not completely unknown. You realize you possess everything that is required and you apply it in every sphere of your life. So whenever someone asks me what the most precious thing I gained from EESTEC is, I say: life. I say: the appreciation for non-material things, the values I will never lose.
Whenever someone asks me “was it worth it?”, I say: look at me, standing in the spotlight, keeping the EESTEC spirit deep inside me and hoping it will never fade away.
The Nobel Prizes In 2017 Author: József Mák
Since 1901, the Nobel Prize award ceremony has taken place every year on the 10th of December, the anniversary of the death of Alfred Nobel, the founder of the prize. The prizes for Physics, Chemistry, Medicine, Literature and Economics (the latter existing since 1968, founded by Sveriges Riksbank in memory of Alfred Nobel) are given out in Stockholm, Sweden, whereas the Peace Prize is awarded in Norway. The advancements for which Nobel Prizes are awarded are not just scientific curiosities. In many cases they are given out directly for engineering solutions, and even when they reward fundamental discoveries, by the time the prizes are handed out the results have often reached engineering application. Nobel-recognized inventions include wireless telegraphy (1909), spectroscopic methods (1924, 1944, 1981, 1994, 2005), particle detection and acceleration methods (1939, 1947, 1960, 1992), microscopic methods (1953, 1986), the transistor (1956), the laser (1964), holography (1971), radio astronomy (1974), experimental methods that allow the manipulation of single quantum systems composed of photons and ions, and methods that led there (1989, 1997, 2012), the integrated circuit (2000), optical fibers and the CCD sensor (2009), the blue light-emitting diode (2014) and, in 2017, a measurement instrument: the LIGO detector, used in the detection of gravitational waves. In light of the above, let us see what scientific contributions were found by the Nobel Committee to be “for the greatest benefit to mankind” [1] this year.
THE NOBEL PRIZE IN PHYSICS [2] The Nobel Prize in Physics was awarded to Rainer Weiss, Barry C. Barish and Kip S. Thorne “for decisive contributions to the LIGO detector and the observation of gravitational waves”. The field equations of general relativity [3] (GR) connect the curvature of spacetime with the mass that resides in it. Around 100 years ago, Einstein already noticed that the curvature of spacetime can propagate in the form of a wave, and that the sources of such waves are accelerating masses whose motion is not spherically symmetric; a simple example would be a dumbbell rotating around an axis orthogonal to its axis of symmetry. The effect of these waves is extremely small, but this didn't stop many scientists from trying to build apparatuses for their detection, among them Rainer Weiss and Kip S. Thorne, whose groups both designed interferometers for gravitational-wave (GW) detection. After they joined forces in a project financed by the government of the United States, the Laser Interferometer Gravitational-Wave Observatory (LIGO) was founded, with Barry C. Barish as its first director. LIGO consists of two observatories 3000 km from each other, each operating with two split beams of light that interfere with each other so that, upon a distortion of spacetime caused by a GW, the interference pattern changes [4]. After continuous improvements of its sensitivity, on September 14, 2015 LIGO detected a gravitational wave for the first time in human history. By fitting
thousands of pre-calculated waveforms from numerical simulations of GR, it was established that the observed GW most likely came from two colliding and merging black holes 1.3 billion light-years from Earth [5]. Although the effect of GWs is very small, the power radiated by, e.g., a black hole merger is still much higher than all the electromagnetic power radiated by the stars of the visible universe, and ripples in spacetime travel through regions that are opaque to photons and other particles, on which current observational methods rely. This means that observing GWs may provide fundamentally new information about the universe, about parts that have never been observed so far; GWs also provide new means to test the limits of GR, which might make it possible to unify gravity with quantum theory.
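The template-fitting step can be illustrated with a toy matched filter: bury a chirp-like waveform in noise and recover its position by sliding the template along the data and recording the correlation. The waveform shape, noise level and offsets below are all invented for illustration; real LIGO pipelines whiten the data and correlate it against banks of physically modelled waveforms.

```python
import numpy as np

# Toy matched-filter search, loosely analogous to comparing detector data
# against pre-computed waveform templates. A "chirp" (rising frequency and
# amplitude, like an inspiral) is hidden in white noise and located by
# cross-correlation. All numbers are invented for the example.

rng = np.random.default_rng(0)

t = np.linspace(0, 1, 200)
template = np.sin(2 * np.pi * (5 + 20 * t) * t) * t  # frequency and amplitude rise

data = rng.normal(0, 0.5, 2000)                       # toy "detector noise"
true_offset = 1200
data[true_offset:true_offset + len(template)] += template  # buried signal

# Slide the template over the data; the correlation peaks where they align.
scores = np.array([
    np.dot(data[i:i + len(template)], template)
    for i in range(len(data) - len(template))
])
best = int(np.argmax(scores))
print(best)  # close to 1200
```

The same idea scales up: because the merger waveform is known from simulations of GR, correlating against it pulls a signal out of noise that would be invisible by eye.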
THE NOBEL PRIZE IN CHEMISTRY [6] The 2017 Nobel Prize in Chemistry was awarded to Jacques Dubochet, Joachim Frank and Richard Henderson “for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution”. Although the electron microscope has been around since the 1930s, imaging living material has been a problem. For example, to maintain the native form of biological molecules, they need to be surrounded by and contain water, which is forced to evaporate by the vacuum used in electron microscopy. On top of that, for a high-resolution picture, the sensitive molecules have to be bombarded by high-energy electrons that may easily destroy them. By gradually solving these difficulties, Henderson and colleagues demonstrated in 1990 that at cryogenic temperatures, by spreading the electron dose over many copies of the same object and averaging over the snapshots, it is possible to generate a high-resolution image of a protein molecule without seriously damaging it. This method only worked when the protein was crystallized, which is in general very hard to achieve. Joachim Frank and his colleagues generalized the method to imaging non-crystalline, possibly impurity-containing structures, among other things by developing many of the mathematical image processing methods required to generate a 3D image from 2D snapshots [7]. Jacques Dubochet and colleagues' contribution was the development of a method that allows freezing a one-biological-molecule-thick layer of sample in water without forming ice crystals, which would strongly diffract the electrons used for imaging. This way, the biological material does not dry out in the microscope's vacuum and keeps its native form. Cryo-electron microscopy is able to generate images of biological molecules in their native state at atomic resolution. Molecules may even be frozen in motion, for example while an enzyme catalyzes a chemical reaction, so that molecular function can be studied, providing fundamentally new insights into the building blocks of living organisms.

REFERENCES [1] From Alfred Nobel's will. [2] For more information that is still easy to understand, see the advanced information page www.nobelprize.org/nobel prizes/physics/laureates/2017/advanced-physicsprize2017.pdf [3] For a light introduction that only requires basic calculus, check out www.youtube.com/watch?v=foRPKAKZWx8 [4] Rainer Weiss explains how LIGO works in this video: www.youtube.com/watch?v=k2sBxrySg0o [5] ABBOTT, Benjamin P., et al. The basic physics of the binary black hole merger GW150914. Annalen der Physik, 2017, 529.1-2 is a mathematically light introduction for the public to black hole mergers. [6] Advanced information: www.nobelprize.org/nobelprizes/chemistry/laureates/2017/advanced-chemistryprize2017.pdf [7] The image processing methods are available to the public in the SPIDER package at spider.wadsworth.org/spider doc/spider/docs/spider.html [8] Advanced information: www.nobelprize.org/nobelprizes/medicine/laureates/2017/advanced-medicineprize2017.pdf [9] In eukaryotic organisms, the cell nucleus contains the DNA, which is first transcribed to mRNA; the mRNA translocates through the nuclear membrane into the cytoplasm, where it is translated into proteins. For a short video explaining gene expression, see www.youtube.com/watch?v=gG7uCskUO [10] A gene is denoted by lowercase italics, whereas the protein product is written in capital letters. [11] For a video explaining the feedback loop, check out www.youtube.com/watch?v=M-Tdvu3N8dA
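The dose-spreading and averaging idea behind single-particle cryo-EM can be demonstrated with a toy numerical experiment: each simulated low-dose image is dominated by noise, yet the average of many aligned copies reveals the object. The "molecule", noise level and image count are made up for the example; real reconstructions must additionally estimate each particle's unknown orientation.

```python
import numpy as np

# Toy illustration of single-particle averaging: individual low-dose images
# are noise-dominated, but the pixel-wise mean of many aligned copies of the
# same object converges to the underlying structure (error shrinks roughly
# as 1/sqrt(N)). All values are invented for the demonstration.

rng = np.random.default_rng(1)

truth = np.zeros((16, 16))
truth[4:12, 4:12] = 1.0                      # the "molecule"

def noisy_copy():
    return truth + rng.normal(0, 5.0, truth.shape)   # SNR well below 1

single = noisy_copy()
average = np.mean([noisy_copy() for _ in range(2000)], axis=0)

single_err = np.abs(single - truth).mean()
avg_err = np.abs(average - truth).mean()
print(round(float(single_err), 3))   # large: one image is pure noise to the eye
print(round(float(avg_err), 3))      # far smaller after averaging
```

This is only the statistical core of the method; the laureates' contributions were precisely what makes such averaging possible on real, fragile, randomly oriented biomolecules.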
THE NOBEL PRIZE IN PHYSIOLOGY OR MEDICINE [8] The 2017 Nobel Prize in Physiology or Medicine was awarded to Jeffrey C. Hall, Michael Rosbash, and Michael W. Young “for their discoveries of molecular mechanisms controlling the circadian rhythm”. The main question tackled by the laureates was “what makes us tick?”, namely, what mechanism is responsible for our circadian rhythm, the day-and-night cycle of the body. Earlier studies had shown that in the fruit fly the mutation of a certain gene, later named period or per, destroys this cycle. Jeffrey Hall and Michael Rosbash found that the concentrations of the mRNA and the protein product of this gene [9] both oscillate with a 24 h period in the fruit fly brain. Michael Young found that the levels of the products of another gene, timeless or tim, also oscillate with a 24 h period.
As it turned out, PER [10] negatively influences its own expression, but can only get back into the nucleus with the help of TIM. A third gene, also found by Young and his colleagues, was doubletime, whose product degrades PER in the cytoplasm, thereby slowing down its accumulation and its negative self-regulation. This so-called transcription-translation feedback loop is closed by two other genes, clock and cycle, which positively influence the expression of PER and TIM but are in turn negatively regulated by PER [11]. Later, other genes have also been found that extend and fine-tune this loop, such as cry, which is activated by light and whose product degrades TIM and stops PER from getting back into the nucleus. After the discoveries of the three laureates, circadian biology has developed into a vast research field. As the circadian rhythm helps regulate a wide spectrum of bodily functions, from sleep patterns to body temperature, and circadian dysfunction is linked with a plethora of diseases, this research has huge importance in health science.
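The oscillating feedback loop can be sketched with a minimal Goodwin-type model: mRNA produces a protein, the protein matures into a repressor, and the repressor switches transcription off. With a sufficiently steep repression term (the classical result for this symmetric three-stage loop is that a Hill exponent above 8 is needed), the levels oscillate on their own. All parameters here are generic illustration values, not measured fly-clock constants.

```python
# Minimal Goodwin-type negative-feedback loop, in the spirit of the PER
# mechanism described above: mRNA (m) is translated into protein (p), which
# matures into a repressor (r) that shuts transcription down. Parameters are
# generic illustration values; the Hill exponent 9 is chosen because this
# symmetric loop is known to oscillate only for exponents above 8.

def simulate(steps=60000, dt=0.01):
    m, p, r = 0.1, 0.1, 0.1
    ms = []
    for _ in range(steps):
        dm = 1.0 / (1.0 + r**9) - 0.1 * m   # transcription, repressed by r
        dp = m - 0.1 * p                     # translation
        dr = p - 0.1 * r                     # repressor maturation
        m, p, r = m + dm * dt, p + dp * dt, r + dr * dt
        ms.append(m)
    return ms

ms = simulate()
late = ms[len(ms) // 2:]
# Count local maxima in the second half of the run: the delayed negative
# feedback produces repeated peaks rather than a flat steady state.
peaks = sum(1 for i in range(1, len(late) - 1) if late[i - 1] < late[i] > late[i + 1])
print(peaks)
```

The biological loop adds delays (nuclear transport, DBT-mediated degradation) and light input via CRY, which is how the free-running period gets entrained to the external day.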
OUTLOOK TO THE OTHER PRIZES Although not strictly in the field of natural science, it is worthwhile to take a quick look at the other laureates. The Nobel Prize in Literature was awarded to the celebrated contemporary fiction writer Kazuo Ishiguro, “who, in novels of great emotional force, has uncovered the abyss beneath our illusory sense of connection with the world”. The Nobel Prize in Economic Sciences went to Richard H. Thaler “for his contributions to behavioral economics”, namely the integration of psychological factors such as limited rationality, social preferences and lack of self-control into models of economic decision-making. Finally, the Nobel Peace Prize was awarded to the civil organization International Campaign to Abolish Nuclear Weapons (ICAN), which took a leading role in 122 United Nations states adopting the Treaty on the Prohibition of Nuclear Weapons in 2017.
#POWERYOURFUTURE
eestec.net