Popular Electronics


Contents

Brain-Computer Interface
10  An Arduino-Based Brain-Computer Interface: Open-sourcing the World of Neuroscience
23  Teslapathic: Mind Control For Your Car
30  Brainwaves Correct Robot Mistakes In Real Time . . . With A Little Help From Machine Learning

Energy Harvesting
38  A World Of Free Energy: Energy-Harvesting Circuits for Capturing and Harnessing Ambient Power  BY MAURIZIO DI PAOLO EMILIO
50  SolePower: Unplugged, Battery-free, and Powered Exclusively by Human Motion

Wi-Fi
55  Wi-Fi Without Batteries: IoT's Missing Link?  BY JOHN SCHROETER
60  Exploiting FM Backscatter To Enable Connected Cities And More  BY SHYAM GOLLAKOTA

Arduino
72  Getting Started With Arduino: Which Board to Buy?  BY JACK PURDUM, PHD

Computing
88  The Evolution Of Computing From Processor-centric To Data-centric Models  BY TRUNG TRAN

Raspberry Pi
97  Physical Activity Motivation Tool Using The Raspberry Pi Zero  BY SAI AND SRIHARI YAMANOOR

Ultrasonics
111  Ultrasonic Sniffer  (Popular Electronics Classics | $1,000 Contest)

History of Technology
120  On The Origins Of AND-Gate Logic  BY H.R. (BART) EVERETT

Haptics
127  The Future Of The Touchscreen Is Touchless: Haptics Brings the Sense of Touch to the Virtual World  BY JOHN SCHROETER

Virtual Reality
137  Inside the Leap Motion VR Platform: The Fundamental Interface for VR/AR  BY ALEX COLGAN
144  Will Virtual Reality Be The Most Unifying Technology Ever?  BY ERIC ROMO

Internet of Things
148  OpenVLC: Realizing the Internet of Lights  BY DR. QING WANG
152  Hello, World Wide Web Of Things  BY DOMINIQUE GUINARD AND VLAD TRIFA
160  IoT Attack Sparks Warnings From Law Enforcement: ST Provides Tools Needed to Implement Recommendations of FBI and Other Agencies

Electronics 101
164  Radio Frequency Modulation Made Easy  BY DR. SALEH FARUQUE

Popular Electronics Classics
175  Pulse Modulation

Programmable Logic
184  Introducing viciLogic: Learning Practical Integrated Circuit Design and Prototyping Online  BY FEARGHAL MORGAN

Prototyping
188  Nano Dimension: Additive Manufacturing Opens New Doors for Electronics  BY SIMON FRIED

Machine Learning
198  Automatic And Transparent Machine Learning  BY GANG LUO AND JOHN SCHROETER

Link references throughout this magazine: Website link, Video link, Purchase link, Free download link, Facebook link, Audio link, Twitter link. Additionally, all text in blue is a live link.

Popular Electronics® is published by Technica Curiosa™, a division of John August Media, LLC. Subscriptions to all Technica Curiosa titles, including Popular Astronomy®, Mechanix Illustrated®, and Popular Electronics®, are free with registration at www.technicacuriosa.com/register. For editorial and advertising inquiries, contact john@technicacuriosa.com.

Editor-in-Chief: John Schroeter
Associate Editor: Michaelean Ferguson
Art Director: Eduardo Rother
Design & Production: ER Graphics

Except where noted, all contents Copyright © 2017 John August Media, LLC. The right to reproduce this work via any medium must be secured with John August Media, LLC.



Guest Editor Karen Bartleson

Inspiring Tomorrow’s Engineers Today Karen Bartleson, IEEE President & CEO

Tomorrow's world is being created right now by the engineers of today. Incredible inventions such as self-driving cars and personal care robots will soon be part of our daily lives, thanks to today's engineers. Yet, what about tomorrow's engineers? Who will they be? What will they invent? And what will their values be?

Organizations like IEEE, the world's largest technical professional society, and publications like Popular Electronics are here to inspire and equip young women and men to become the next generation of engineers. They will be the people who create astonishing products, solve global problems, and continue changing the face of the human race. They have grown up as digital natives. Texting, apps, mobility—these are the foundations of their world.

Growing up in my world, I did not have any idea what "engineering" meant. To be honest, I had never even heard the term. But I was fascinated by how things work. A car was a marvel to me—put gas in the tank, turn the key, step on the pedal, and go! I asked my father, how does a car work? He said, girls don't need to know that. It was a sign of the times. Or perhaps he did not know how a car worked, either.

The more I learned about engineering, the more it interested me as it combined the elements of math, science, and problem solving to create useful things for the world. I was captivated. That is why engineering is so appealing to me, because we can really do what IEEE's motto states: We can advance technology for the benefit of humanity. Every day, we—as engineers and scientists—improve the way that people communicate, work, travel, stay healthy, and entertain themselves. If you love




math and science, you can do amazing things for the world—and that is appealing to both women and men.

IEEE strives to bring together—at a local level—those technological innovators who have a desire to learn, collaborate, and give back to their community. IEEE members—more than half of whom are outside of the United States—are a diverse, international community filled with technology professionals of every race, every creed, and every color. All are welcome, all are respected, regardless of nationality, ethnicity, religion, or gender. And all are contributing in countless ways to moving technology forward.

Our world—despite the myriad advances in technology that the last century has brought with it—still needs to be changed for the better. Too many people do not have access to clean water, or reliable energy, or healthcare, or the internet. Not everyone has the opportunity to achieve their full potential through access to education. Technology can overcome these tough challenges. It always has. And at no other point in history have we had more opportunities available to us to improve our world.

At IEEE, we do not focus on what divides people, but what unites us. We do not focus on where someone came from. We care about the ideas you bring to the collaborative work in which you are engaged. We do not erect borders—we overcome barriers. We do not discriminate—we embrace differences. We care about humanity and how the technology we are working on can benefit our global community. Engineers have been the people who have shaped human existence for eons. The next generation of scientists and engineers will be able to significantly and positively impact the lives of billions of people.

In closing, I wish to inspire all of us to continue to use our abilities to help support one another and strive toward our shared mission with one of my favorite quotes from a famous U.S. statesman: "Never doubt that you are valuable, and powerful, and deserving of every chance and opportunity in the world to pursue and achieve your own dreams." Within the global technical community, we are all valuable. Our abilities as individuals, coupled with our strength as collaborators, make us powerful. Together, we will pursue and achieve the dream of a better tomorrow for all humanity. PE




Brain-Computer Interface

An Arduino-Based Brain-Computer Interface

Open-sourcing the World of Neuroscience




Whenever you think a thought—and even when you don't—the more than one hundred billion neurons making up your brain fire through and across an interconnected web of terminals in an elegant electronic interplay. And at just that moment when the sum of input signals into one neuron crosses a certain threshold, that neuron responds by sending an "action potential"—an electrical signal—into the axon hillock (the conductive interface that links adjacent neurons), propagating this potential along to the next neuron, and so on through the network, generating your thoughts,




invoking memories, producing speech or the movement of a limb. It's one of nature's truly great wonders.

Like the electronic circuits that they are, neurons work through a recurrent series of on and off signals that can be observed and measured—and tapped to do work—even beyond our bodies. But also, like the insulation on electrical wire, the brain's neural signal paths are sheathed by a fatty substance called myelin. While myelin helps to increase the speed of electrical communications between neurons, it also acts as an insulator. Fortunately, it's not a perfect insulator; some of the electric energy escapes. And it's just those leaky electrical signals that an electroencephalograph (EEG) is able to pick up.

But there's another rub: getting at those faint signals is a real challenge. At the heart of the matter is the mechanical interface between the scalp and the measuring apparatus. Short of implanting electrodes directly into the brain, we have to make do with somewhat less invasive means. But like the myelin that attenuates electrical signals, the skull does a number on them, too. The good news is that the amount of signal that does come through is sufficient for detecting the tiny differences in the voltage levels between neurons—the on-off switches—and that's what we're seeking to capture. At this point, it simply becomes a matter of amplifying and filtering

The OpenBCI GUI, also open-sourced and available for mods. See it in action here.




those signals for the purposes at hand. And what a world of purposes it is. Spanning everything from devices that enable the physically disabled to function more independently, to biofeedback for health and wellbeing, to more immersive gaming environments, to manipulating machinery with little more than a thought, the possibilities are limited only by one's imagination. Where the latter is concerned, just where man ends and machine begins is becoming a little less defined. In this sense, the brain-computer interface—BCI—could open up previously unimagined possibilities. And one of the major developments enabling those possibilities is the work being done by the dedicated team at OpenBCI.

Whether your interests lie in developing mind-controlled robots, conducting sleep lab research, or even hacking your Tesla to respond to your thoughts, OpenBCI has created neurofeedback for the masses. What's more, the company's versatile and affordable bio-sensing kits are open source: no longer the sole province of expensive clinical machines, you can now sample brainwaves right at your kitchen table. OpenBCI is the outgrowth of a DARPA-funded project to design and build

OpenBCI founders Conor Russomanno (left) and Joel Murphy (right).




a low-cost EEG platform for non-medical users. Enabled by a combination of Arduino and TI processors, the OpenBCI platform provides high-resolution imaging and recording of biopotentials, including electrical brain activity (EEG), muscle activity (EMG), and heart activity (EKG). We sat down with the company's founder, Joel Murphy, to learn more about the technology and how he and his colleagues developed it.

I understand that OpenBCI had its genesis in a DARPA grant. How did that come about?

In 2013, we teamed up with Creare, an engineering company in New Hampshire, to respond to an SBIR solicitation—a small business innovation research grant—seeking a low-cost but high-quality EEG system for non-traditional users, that is, someone who is not a neuroscientist or doctor. And our proposal was one of those accepted for participation in Phase I of that grant. One of the stipulations in the solicitation was that it also had to be open source. The goal of the grant was to create tools that were not only powerful, but also flexible, modifiable, and accessible, and therefore available to anyone so we can crowd-source innovations in neuroscience. That led to the development of what became the prototype for the OpenBCI board. At the time of the solicitation, I was teaching at the MFADT graduate program at Parsons, and coincidentally, a student of mine, Conor Russomanno, was doing his thesis project in neurofeedback. We ended up forming the company together and launched a Kickstarter campaign in 2014 to fund it and get this technology out in the wild. We asked for $100K and got $215K in backer support.

Now, you don't have a background in neuroscience, as far as I can tell…

No! Not at all. My background is in kinetic sculpture. I'm sort of an autodidact, always trying to learn


The OpenBCI board interfaces with the host computer via Bluetooth with either a USB Dongle or the computer’s Bluetooth.

OpenBCI's battery-powered configuration enables both superior signal integrity and mobility.



new things and build on what I know. Conor went to school for Engineering Mechanics, and when I met him he was studying Design and Technology. We love the idea of the non-expert doing these disruptive things. Yeah, that's us!

About the architecture of the system, what are the key enabling technologies?

It was determined early on that we'd use the Texas Instruments ADS1299—essentially an EEG system on a chip. It's a low-noise, 8-channel, 24-bit analog-to-digital converter for EEG and biopotential measurements. We built the system on an Arduino platform, which gave us a lot of flexibility for the prototyping phase of the project—specifically, Digilent's Arduino-compatible chipKIT microcontroller board, which integrates Microchip's PIC32. We also use RFduino from RF Digital—an Arduino-compatible Bluetooth radio module based on Nordic Semiconductor's radios—to enable communication between the host computer and the board, as well as providing a link to re-program the PIC32 microcontroller.

Tell me more about the host connectivity.

The OpenBCI board interfaces with your computer via Bluetooth with either a USB dongle or your computer's Bluetooth. The dongle also has an integrated RFduino that communicates with the RFduino on the OpenBCI board and an FTDI chip that establishes a Virtual COM Port on your computer.

And the communications protocol you're using?

We're using Nordic Semiconductor's proprietary high-speed stack called the Gazell Link Layer, which is a protocol for setting up a wireless link between the Host and up to eight devices. We're running that on our RFduinos. BLE is just too slow and bandwidth-limited. For the entry-level board, though, we're using a straight-up BLE module, called Simblee, also made by RF Digital. We love them. I have to say we learned a lot about architecting communication protocols. But it was really nice that we were able


The OpenBCI Ganglion is a high-quality, affordable bio-sensing device. Its four high-impedance differential inputs can be used individually for measuring EMG or ECG, or they can be individually connected to a reference electrode for measuring EEG.



to build everything on Arduino. Everything was easy to interface, and we didn't have to worry about having a complicated toolchain. Anyone can jump in and start splashing around with the code.

What were the drivers behind going wireless?

It was important that the system be wireless, primarily for safety and liability reasons. But also, whenever you plug anything in, even if you have good isolation, you can end up getting more powerline noise than you want. A battery-powered board gives us much better signal integrity.

What can you tell us about the measurement accuracy and sensitivity you're able to achieve?

There are other commercial EEG devices out there. The problem is that they are closed systems; no one can really verify their signals. But you can with ours, because we're open source. We expose the entire signal path. For example, the TI ADS chip feeds us the 24-bit analog-to-digital results of whatever is happening on a particular channel, and we convert that signal into microvolts with a scale factor provided to us by Texas Instruments in their datasheet. So we basically send in a signal of known amplitude into the device, and we get the same signal out, meaning it's as accurate as it can be. When benchmarked against state-of-the-art, very expensive equipment, we come in looking very good on signal integrity, SNR, etc., but we're also orders of magnitude below the cost of comparable gear.

Speaking of cost, you've made reference to an entry-level board. Tell me about the product range, and why I would choose one over the other.

We make two electronic amplifier boards, the Ganglion and the Cyton. The Ganglion is a lower-cost 4-channel bio-sensing amplifier that, like the Cyton, can also measure ECG, EMG, and EEG, but with slightly less signal quality. It uses a Simblee BLE radio module to control the board, and data is sampled at 200 Hz. It fits the bill for most low-cost research, education, and hacking.
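A quick aside for tinkerers on the microvolt conversion Murphy describes above: the arithmetic is simple enough to sketch in a few lines of Python. This is a minimal illustration assuming the Cyton's default reference voltage (4.5 V) and channel gain (24), the figures given in TI's ADS1299 datasheet and echoed in OpenBCI's documentation; the function name is ours.

```python
VREF = 4.5    # ADS1299 reference voltage, volts (Cyton default, assumed)
GAIN = 24.0   # programmable channel gain (Cyton default, assumed)
UV_PER_COUNT = VREF / GAIN / (2**23 - 1) * 1e6   # ~0.0224 microvolts per count

def counts_to_microvolts(raw):
    """Interpret a 24-bit two's-complement ADC sample and scale it to microvolts."""
    if raw & 0x800000:        # sign bit set: negative reading
        raw -= 0x1000000
    return raw * UV_PER_COUNT

print(counts_to_microvolts(1000))   # 1000 counts is roughly 22.4 microvolts
```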


The 32-bit OpenBCI Board is an Arduino-compatible, 8-channel neural interface with a 32-bit processor. At its core, the 32-bit OpenBCI Board implements the PIC32MX250F128B microcontroller. The board comes pre-flashed with the chipKIT™ bootloader and the OpenBCI firmware. The board can be used to sample brain activity (EEG), muscle activity (EMG), and heart activity (EKG), and communicates wirelessly to a host computer via the OpenBCI programmable USB Dongle or to any mobile device or tablet compatible with Bluetooth Low Energy (BLE).



Also, if you're doing neurofeedback to help with treatment for anxiety or depression, you're only going to be using at most four inputs, so we wanted to make a lower-cost 4-channel system available. The Cyton has eight sensor inputs—and up to 16 with our expansion board. It samples data at 250 Hz. It is, in fact, a research-grade product. The additional channels provide greater spatial resolution, enabling more diverse types of research that require higher channel counts. The scientists will always say, "More data, please!" They want all the signals they can get. Those differences aside, both boards have the ability to measure bioelectric potentials ranging from the tiny EEG signals (1 µV to 10 µV) to ECG signals (~100 µV) and also the EMG signals that your muscles emit (>100 µV). You can also process multiple types of signals simultaneously, say, with electrodes on your scalp, chest, and shoulder, for example.

Tell us about the expansion module available on the Cyton.

We call that the Daisy Module, which is essentially a shield, in Arduino terms. We're limited to 16 channels right now, due to the wireless connection and the way we package the data and send it over the air. Of course, there are a lot of ways to compress data, so I imagine you could throw another

The OpenBCI Daisy Module (a shield) provides eight additional differential high-gain, low-noise input channels for the Cyton.




Daisy Module on top and get 24 channels if you figured out the right compression scheme. We also have a developer/partner, AJ Keller of Push The World, who has designed a Wi-Fi shield for the Cyton and Ganglion devices. This will greatly increase the bandwidth. On most wireless networks, the WiFi Shield can stream well over 1000 Hz with the Cyton and Daisy boards and 1600 Hz with the Ganglion board—over 10x faster data rates relative to traditional Bluetooth-based EEG devices. With a high-speed network switch, sample rates over 2000 Hz are possible, with a theoretical limit of 16000 Hz for the Cyton, 8000 Hz for the Cyton with Daisy shield, and 12800 Hz for the Ganglion. The WiFi Shield has an ESP8266 onboard microcontroller and wireless connection. The ESP8266 is Espressif's first-generation Arduino-compatible IoT Wi-Fi module, and it allows software to be updated over the air. The WiFi Shield takes two GPIO pins from the Cyton/Ganglion for proper operation. All of the break-out inputs on the Ganglion and Cyton are available for users to hack with. What's especially cool about it, though, is that it adds brainwave-to-cloud functionality, and allows for an infinite distance between the headset and the computer. It used to be limited to two meters max!

Speaking of the headset, let's shift gears here and talk about that component.

In addition to the amplifier hardware, we also make a 3D-printed EEG electrode headset called the Ultracortex. The Cyton and Ganglion have input header pins for interfacing with the bio-sensing electrodes in the headset.

What are the big problems you're solving with this?

One of the things people always want is ambulatory imaging; they want you to be able to walk around with an EEG on your head. Our wireless architecture enables that. The second issue is that people really don't want to have to use that messy hair gel that makes for better electrode connections. The good news is that new dry electrode technologies have emerged. We're using those, but you can use any bio-sensing electrode with our boards.

And the headset is 3D-printed?


The WiFi Shield is the first open-source device that connects bio-sensors directly to the internet, allowing users to eliminate the latencies of serial ports and low-range Bluetooth connections.



Yes, the idea of the 3D-printed headset originated with this dream of being able to measure your head in various ways and enter those measurements into a web-based parametric engine and have it generate a headset that is a true custom fit. That's the future; we're not there yet. But by using 3D printing technology to iterate rapidly on designs of the headset, one size fits most. We also engineered special electrode nodes that screw in and out of the frame, so that when you size it to your head, it's fixed. That makes it easy; you can literally go from zero to brainwaves in 30 seconds.

How is the positioning of the electrodes determined?

There's a standard for that—the 10-20 system. It's a scientific mapping of the scalp that allows you to replicate someone else's work, or if you want to target certain areas of the brain, you can do that, and make whatever you're doing repeatable.

What can you tell us about the electrodes you're using?

We have them made for us. They are a copper substrate with a silver/silver chloride plating on them. Silver/silver chloride [Ag/AgCl] is the electrochemical material of choice for any type of bio-sensing application. The


The Ultracortex is an open-source, 3D-printable headset intended to work with any OpenBCI Board. It is capable of recording research-grade brain activity (EEG), muscle activity (EMG), and heart activity (ECG). The Ultracortex Mark IV is capable of sampling up to 16 channels of EEG from up to 35 different 10-20 locations.



reason for that is the electrode is making a connection to your skin, but it's not just a resistive connection; it has a complex resistance/capacitance characteristic. The signal that you're getting is invariably going to have some sort of DC offset associated with it. Silver that has chloride mixed into it or coating it gives you the least DC offset. Each of the metals—gold, silver, tin, and so forth—has a certain offset. If you're going to use a dry electrode, your impedance is going to be higher than if it's wet, that is, an electrode that makes contact with your scalp through a conductive gel. So you want to be able to maximize every angle you can: you're just not going to get as good a connection using a dry electrode as you will with a wet electrode. And we can measure impedance with the TI ADS chip on the Cyton. For EEG, you want impedance that is under 10K ohms. But when you're using dry electrodes, you can get impedances of 80 to 90K ohms, which is not ideal. Still, we're able to obtain legitimate EEG signals with them. But it is a compromise. That said, if you're doing serious neuroscience research, you'll probably want to use wet electrodes.

Now, one way to get around the limitations of a dry electrode is with an active electrode. These work by placing a unity-gain amplifier right next to the electrode at the scalp; you just turn that op amp into a follower. It'll take whatever signal is there and push it down the wire with a lot more force. And it reduces environmental noise in the bargain. This greatly improves the signal quality. We'll be making active electrodes sometime this year.

Lastly, I see that a sizeable community has grown up around OpenBCI.

The community is awesome. In fact, it's our biggest asset. There are a hundred things you can do with EEG. We see people doing work with P300 waves and other projects that are just amazing. Just browse through the community to see what people are doing. I can't tell you how rewarding it is to have a device like this that opens the door for people to get the powerful tools they need—people who otherwise wouldn't have access or $60K to drop on a system.


The 10–20 system is an internationally recognized method to describe and apply the location of scalp electrodes in the context of an EEG test or experiment.



Ultracortex & OpenBCI GUI Demo by Conor Russomanno. Watch it here.

Build It Yourself

OpenBCI is an Open Source Hardware company. That means you can get all of the design files for everything they make:
■ OpenBCI has published all the design and production files for both the Cyton and Ganglion boards, as well as the complete BOM, so you can build one yourself, or buy one fully assembled.
■ All of the firmware for the Cyton and Ganglion is published on github, as well, so you can modify the code to alter the behavior of the board in any way you like.
■ The Ultracortex Electrode Headset 3D print files are available via the OpenBCI github repository. OpenBCI also offers a "Print It Yourself" kit of the non-printed parts, as well as the entire BOM.
■ The OpenBCI GUI is written in Processing, and it is also open source and fully customizable.
■ Quick links for getting started:
http://docs.OpenBCI.com/Hardware/02-Cyton
http://docs.OpenBCI.com/Getting%20Started/00-Welcome

Want to learn more about open source hardware? Visit the Open Source Hardware Association website at www.oshwa.org. PE




Brain-Computer Interface

Teslapathic: Mind Control For Your Car

The notion of brainwaves being able to exert sufficient energy to influence objects would involve nothing short of a miracle—especially considering that the wimpy signals emitted by neural activity don't radiate beyond a few millimeters from the skull. Spoonbenders need not apply. But moving a Tesla with your thoughts? That's an entirely different matter!




Perhaps one reason why telekinetic control of your car does not currently appear on any automaker’s roadmap for the autonomous vehicle is the inevitable spike in road rage that would be sure to attend it. Think about it! But still, in the context of showcasing the future of human-computer interfaces, mind-controlled devices hold tremendous promise across an unlimited landscape of possibilities. But let’s get back to the mind-controlled Tesla. Casey Spencer and a few of his friends (Lorenzo Caoile, Vivek Vinodh, and Abenezer Mamo) recently engaged in a 36-hour hackathon hosted by UC Berkeley’s Cal Hacks 3.0. Their project’s objective? Navigate a Tesla into a parking


space using brainwave-generated commands. Dubbed Teslapathic, it worked. In short, the solution they created was enabled by the OpenBCI platform and an EEG headset. With the help of a machine learning algorithm, they trained the system to detect the brainwaves associated with thinking the commands “stop” and “go.” The resulting waveforms were then translated into corresponding analog signals that were broadcast by an RC radio, which, via an Arduino-based controller, articulated the actuators on the pedals and a windshield wiper motor affixed to the steering wheel. And voila! A mind-controlled driving experience. Pretty cool. With that brief introduction, we’ll let Casey walk us through the details.



The system begins with the OpenBCI platform. We positioned the nodes in the Ultracortex headset to focus on the user's motor cortex. For increased accuracy, as the user thinks "stop" or "go," he or she also visualizes moving specific body parts. For example, I thought of moving my right foot for "go" and clenching my left hand for "stop." Associating muscle movement with the intended result yields more easily detected signal differentiation and also tends to increase mental focus. A signal profile of the user's thoughts is then created by running training software to determine averages for when the user thinks the commands. We measured the signal from each node numerically then averaged them, using k-nearest neighbors signal processing to account for any outliers. Those averages acted as references for the inference engine when running. Upon execution of the deep


learning inference engine running on a laptop, this profile serves as the reference for determining an outcome. Despite only having to determine one of two possible outcomes, in the event of an inconclusive result, the inference engine will default to a "stop" result for safety purposes. After determining a result within the set confidence threshold, a simple "1" or "0" is passed to a connected Arduino. The wires were connected to pin 10 on the Arduino Mega 2560 for PWM and GND for—you guessed it—ground. We used the Arduino Mega simply because we had one on hand. RC PWM channel 1 was brake, channel 2 was accelerator, which is in turn tapped into the trainer port of an off-the-shelf Futaba T9CHP RC radio. In short, we

Mind-Controlled Tesla at CalHacks. Watch it in action here.



exploited the radio's trainer feature to allow for communication between the OpenBCI and the driving hardware. Most hobbyist RC radios have what is called a "trainer port." A student radio (or "slave" radio) is plugged into a teacher's "master" radio through this port. This allows for a teacher to relinquish selective control of the RC vehicle to a student as long as a corresponding switch is held open, which makes training novice RC pilots safer and easier. For example, when teaching someone to fly an RC plane, the teacher would control takeoff and bring the plane to a level heading. From there,


the teacher would hold open the trainer switch and relinquish control of the plane to the student. Were the student to lose control of the plane, the teacher would release the trainer switch, regain input control, and recover the flight. In our case the “slave” was an Arduino communicating results from the inference engine. “Go” means decreasing the signal length in CH1 and increasing the length for CH2, which was relayed by the radio to the receiver, then the receiver to the motor controllers, resulting in the brake actuator receding and the accelerator actuator extending. The trainer switch also made for an excellent dead man’s switch safety feature; if anything went wrong, I could release the switch and regain manual control of the actuators. By having an Arduino mimic the PPM (pulse position modulation) timings sent by a slave radio, the T9CHP effectively becomes an analog pass-through and delivery method. When I first attempted this Arduino-to-radio interface I had to use an oscilloscope to find the right timings. Prior attempts were basically tantamount to looking for a needle in a haystack blindfolded. In retrospect, I probably could have just had the Arduino listen to the signal and record the



timings, but oscilloscopes are much more fun! The PPM signal is manipulated in accordance with the user’s intent, e.g., stop or go, which results in articulation of the driving hardware. PPM is a translation of an analog signal based on signal timing length. Essentially, when a signal is received, the PPM system translates signal positions into timing lengths, then carries out commands depending on how those signals correspond to their timing thresholds. The head tracking gyro (an ImmersionRC Trackr2), which enables left-right movement, is spliced independently into the signal as well, inline between the Arduino and the radio. The RC receiver relays the gyro movement information to the wiper motor mounted to the steering

wheel, and the command positions to the linear actuators on the pedals. This was accomplished rather crudely: wood planks were placed at the base of the driver seat, and the actuators attached to the planks with 3M heavy-duty mounting tape. The actuators were then affixed to the pedals with industrial cable ties. The tension between the pedals and wood planks was enough to support the actuators, and when extended, the actuators would anchor the planks against the seat. Admittedly a fairly delicate balance, but that's by design—in an absolute worst-case scenario I could have kicked away the actuators from the seat base and pressed the brake myself. We knew we'd be going very slowly over smooth road, so we weren't worried about anything being jostled loose. A windshield wiper motor fitted with a potentiometer was mounted to the steering wheel. "Go"—in the form of the corresponding analog signal—results in the brake actuator receding and the accelerator actuator engaging; "stop" results in the opposite. Left and right movement from the head-mounted gyro results in left and right movement at the wheel. Considering that communication with the hardware in the car is done through a




Teslapathic Block Diagram




wireless radio, technically no one has to be inside the car while it's being controlled. Still, we implemented multiple safety measures: an emergency brake in the Arduino portion of the code in case of failure; the requirement for the user to be holding a dead man's switch in order for the signal to broadcast; a physical block wedged behind the accelerator pedal to prevent it from going too fast; allowing the user

to take manual control through the radio at any time; and, if all else fails, the actuators were pressure fit so the user could simply kick them away from the pedals.

For the machine learning portion of the system, we used scikit-learn, a machine learning library in Python that runs on a laptop. The challenge was in training our machine learning algorithms to clearly interpret the "go" and "stop" signals. This took a lot of refinement, but we managed to achieve a high degree of accuracy. In the end, we were able to take a very complex idea and break it down into smaller parts that we could implement asynchronously. Most of all, we learned a great deal during this 36-hour journey. PE
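For readers who want to tinker, here is a minimal sketch of the training scheme Casey describes: per-node averages as features and a k-nearest neighbors classifier built with scikit-learn. The window sizes, synthetic data, and confidence threshold are illustrative assumptions, not the team's actual code.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Stand-in training data: 40 trials of 64-sample windows from 8 headset nodes.
windows = rng.normal(size=(40, 64, 8))
labels = rng.integers(0, 2, size=40)        # 0 = "stop", 1 = "go"

# One feature vector per trial: the average signal at each node over the window.
X = windows.mean(axis=1)

clf = KNeighborsClassifier(n_neighbors=5)   # k-NN, as in the averaging step above
clf.fit(X, labels)

# At run time, default to "stop" unless "go" clears a confidence threshold.
live = rng.normal(size=(1, 64, 8)).mean(axis=1)
p_go = clf.predict_proba(live)[0][1]
print("go" if p_go > 0.8 else "stop")       # 0.8 is an assumed threshold
```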

About Casey Spencer

Casey personifies the phrase "jack-of-all-trades" and has been referred to as a modern-day Renaissance man. Since teaching himself how to build drones in 2011, he's tried to learn from as many varied schools of thought as possible. His personal projects have ranged from mind-controlled drones and golf carts to 3D volumetric projectors, built while earning degrees in psychology and English. His current work is focused on introducing young students to the psychological concepts behind virtual reality, where he hopes to demystify and encourage the pursuit of science. He is also seeking interesting opportunities or internships. Connect with him via LinkedIn (linkedin.com/in/casey-spencer) or Twitter (@Casey_S85D). Learn more about the brain-computer interface here.



Brain-Computer Interface

The idea of controlling robotic operations with nothing more than brainwaves is the ultimate expression of the man-machine interface. The introduction of machine learning to detect actionable brainwave signatures—and translate them into commands a robot or other actuator can comprehend—opens up new and potentially unlimited vistas into possibilities for revolutionizing everything from the factory floor to the kitchen to the self-driving car. A simple experiment zeroes in on one such brain signal, the error-related potential, and demonstrates its use in a basic robotic task.


Brainwaves Correct Robot Mistakes in Real Time... With a Little Help From Machine Learning



Researchers at Boston University's Guenther Lab and MIT's Distributed Robotics Lab teamed up to investigate how brain signals might be used to control robots, pushing the possibilities of human-robot interaction in more seamless ways.

Figure 1: The robot is informed that its initial motion was incorrect based upon real-time decoding of the observer's EEG signals, and it corrects its selection accordingly to properly sort an object, in this case, a can of paint or a spool of wire.

While capturing brain signals is difficult, one signal in particular bears a strong signature, and is thus a bit more accessible to headset-based (non-invasive) electrode detection. The error-related potential (ErrP) signal is generated reflexively by the brain when it perceives a mistake, including mistakes made by someone else. What if this signal could be exploited as a human-robot control mechanism? If so, then humans would be able to supervise





Figure 2: Error-related potentials exhibit a characteristic shape across subjects and include a short negative peak, a short positive peak, and a longer negative tail.

robots and immediately communicate, for example, a "stop" command when the robot commits an error. No need to type a command or push a button. In fact, no overtly conscious action would be required at all—the brain's automatic and naturally occurring ErrP does all the work. The advantages of such an approach include minimizing user training and easing cognitive load.

To these ends, the team's work explored the application of EEG-measured error-related potentials to real-time closed-loop robotic tasks, using a feedback system enabled by the detection of these signals. The setup involved a Rethink Robotics Baxter robot that performs a simple object-sorting task under human supervision. The human operator's EEG signals are captured and decoded in real time. When an ErrP signal is detected, the robot's action is automatically corrected. Thus, the error-related potentials are shown to be highly interactive. Another attractive aspect of the ErrP is how easily it is generalized




Figure 3: The experiment implements a binary reaching task; a human operator mentally judges whether the robot chooses the correct target, and online EEG classification is performed to immediately correct the robot if a mistake is made. The system includes a main experiment controller, the Baxter robot, and an EEG acquisition and classification system. An Arduino relays messages between the controller and the EEG system. A mechanical contact switch detects arm motion initiation.


for any application that could be controlled by it, again, without requiring extensive training or active thought modulation by the human. This is because ErrPs are generated when a human consciously or unconsciously recognizes that an error—any error—has been committed. It turns out that these signals may actually be integral to the brain's trial-and-error learning process. What's more, these ErrPs exhibit a definitive profile, even across users with no prior training, and they can be detected within 500 ms of the error. So how does it work? In short, Baxter randomly



selects a target and performs a two-stage reaching movement. The first stage is a lateral movement that conveys Baxter’s intended target and releases a pushbutton switch to initiate the EEG classification system. The human mentally judges whether this choice is correct. If so, the system informs Baxter to continue toward the intended target. If, on the other hand, an ErrP is detected, Baxter is instructed to switch to the other target. The second stage of the reaching motion is then a forward reach toward the object. If a misclassification occurs, a secondary ErrP may be generated, since the robot did not obey the human supervisor’s feedback.
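The trial logic is easy to see in miniature. Below is a toy Python sketch of a single trial with the ErrP detector stubbed out; all of the names are hypothetical, and the real system coordinates these steps across the controller, robot, and EEG subsystems described next.

```python
import random

def errp_detected():
    # Stub: in the real system, this is the online EEG classifier's verdict,
    # available within about 500 ms of the robot revealing its choice.
    return random.random() < 0.2

def run_trial():
    target = random.choice(["left", "right"])   # Baxter's random selection
    # Stage 1: lateral motion reveals the choice and starts EEG classification.
    if errp_detected():                         # the observer judged it wrong
        target = "right" if target == "left" else "left"
    # Stage 2: forward reach toward the (possibly corrected) target.
    return target

print(run_trial())
```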

Robot Control and EEG Acquisition

Figure 4: Various preprocessing and classification stages identify ErrPs in a buffer of EEG data. This decision immediately affects robot behavior, which affects EEG signals and closes the feedback loop.


The control and classification system for the experiment is divided into four major subsystems, which interact with each other as shown in Figure 3. The experiment controller, written in Python, oversees the experiment and implements the chosen paradigm. For example, it decides the correct target for each trial, tells Baxter where to reach, and coordinates all event timing. The Baxter robot communicates directly with the experiment controller via the Robot Operating System (ROS). The controller provides joint angle trajectories for Baxter’s left 7 degree-of-freedom arm in order to indicate an object choice to the human observer and to complete a reaching motion once EEG classification finishes. The controller




also projects images onto Baxter's screen, normally showing a neutral face but switching to an embarrassed face upon detection of an ErrP.

The EEG system acquires real-time EEG signals from the human operator via 48 passive electrodes, located according to the extended 10-20 international system and sampled at 256 Hz using the g.USBamp EEG system. A dedicated computer uses Matlab and Simulink to capture, process, and classify these signals. The system outputs a single bit on each trial that indicates whether a primary error-related potential was detected after the initial movement made by Baxter's arm.

The EEG/robot interface uses an Arduino Uno that controls the indicator LEDs and forwards messages from the experiment controller to the EEG system. It sends status codes to the EEG system using seven pins of an 8-bit parallel port connected to extra channels of the acquisition amplifier. A pushbutton switch is connected directly to the 8th bit of the port to inform the EEG system of robot motion. The EEG system uses a single 9th bit to send ErrP detections to the Arduino. The Arduino communicates with the experiment controller via USB serial. Experiment codes that describe events such as stimulus onset and robot motion are sent from the experiment controller to the Arduino, which forwards these 7-bit codes to the EEG system by setting the

Want to see it in action? Check out the video.




appropriate bits of the parallel port. All bits of the port are set simultaneously using low-level Arduino port manipulation to avoid synchronization issues during setting and reading the pins. Codes are held on the port for 50 ms before the lines are reset. The EEG system thus learns about the experiment status and timing via these codes, and uses this information for classification. In turn, it outputs a single bit to the Arduino to indicate whether an error-related potential is detected. This bit triggers an interrupt on the Arduino, which then informs the experiment controller so that Baxter’s trajectory can be corrected. The experiment controller listens for this signal throughout the entirety of Baxter’s reaching motion.
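A host-side sketch of that exchange, written with pyserial, might look like the following. The port name, baud rate, and event code are assumptions for illustration; the study defines its own experiment codes.

```python
import time
import serial  # pyserial

link = serial.Serial("/dev/ttyACM0", 115200, timeout=0.05)  # assumed port and baud

MOTION_ONSET = 0x21   # hypothetical 7-bit experiment code

def announce(code):
    """Send a 7-bit event code for the Arduino to mirror onto the parallel port."""
    link.write(bytes([code & 0x7F]))

def listen_for_errp(window_s=2.0):
    """Poll for the single ErrP-detected byte while Baxter's reach is underway."""
    deadline = time.time() + window_s
    while time.time() < deadline:
        if link.read(1):          # any byte back means an ErrP was flagged
            return True
    return False

announce(MOTION_ONSET)
if listen_for_errp():
    print("ErrP detected: switch Baxter to the other target")
```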

Training the Neural Network to Detect ErrP Signals

The classification pipeline and feedback control loop are illustrated in Figure 4. The robot triggers the pipeline by moving its arm to indicate object selection; this moment is the feedback onset time. A window of EEG data is then collected and passed through various preprocessing and classification stages. The result is a determination of whether an ErrP signal is present, and thus whether Baxter committed an error. The implemented online system uses this pipeline to detect a primary error in response to Baxter's initial movement; offline analysis also indicates that a similar pipeline can be applied to secondary errors to boost performance. This optimized pipeline achieves online decoding of ErrP signals and thereby enables closed-loop robotic control. A single block of 50 closed-loop trials is used to train the classifier, after which the subject immediately begins controlling Baxter without prior EEG experience.

The classification pipeline used to analyze the data has a preprocessing step where the raw signals are filtered and features are extracted. This is followed by a classification step where a learned classifier is applied to the processed EEG signals, yielding linear regression values. These regression values are subjected to a threshold, which was learned offline from the training data. The resulting binary decision is used to control the final reach of the Baxter robot. PE
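In outline, the pipeline maps naturally onto standard Python tooling. The sketch below stands in for the Matlab/Simulink implementation: band-pass filter, flatten the post-feedback window into features, score with a linear model, and threshold. The filter band, window length, and threshold are illustrative assumptions, not the study's published parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LinearRegression

FS = 256                                    # sampling rate used in the study, Hz
b, a = butter(4, [1.0, 10.0], btype="bandpass", fs=FS)   # assumed ErrP band

def features(window):
    """window: (n_samples, n_channels) of EEG following feedback onset."""
    return filtfilt(b, a, window, axis=0).ravel()

rng = np.random.default_rng(1)
train = rng.normal(size=(50, FS // 2, 48))  # 50 training trials, 500 ms, 48 channels
labels = rng.integers(0, 2, size=50)        # 1 = the trial contained an error

model = LinearRegression().fit([features(w) for w in train], labels)
THRESHOLD = 0.5                             # learned offline in the real system

def errp_present(window):
    return model.predict([features(window)])[0] > THRESHOLD

print(errp_present(rng.normal(size=(FS // 2, 48))))
```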

To learn more, see the full text article here, by Andres F. Salazar-Gomez, Joseph DelPreto, Stephanie Gil, Frank H. Guenther, and Daniela Rus.




Energy Harvesting

A World Of FREE Energy

Energy-Harvesting Circuits for Capturing and Harnessing Ambient Power
By Maurizio Di Paolo Emilio



Energy harvesting is the process of collecting energy, most of which is otherwise wasted, from an array of environmental sources, both natural and artificial. These myriad forms of energy include any of those sources from which we can extract electrical current to power any number of low-power devices. The ability to recover even a fraction of this energy could have significant economic and environmental impact. To these ends, an energy harvesting system consists of many microelectronic solutions with the aim of satisfying the fundamental load requirements.

The process, though, is not without its challenges. For example, while commonly used energy transducers such as piezoelectric, thermal, or RF provide variable output voltages, they often do not meet the load requirements of a great many applications. Consider also solar energy, which is highly variable, day and night. Thus, it does not lend itself to powering electronic circuits such as microcontrollers that require a continuous and constant voltage level, e.g., 5 V or 3.3 V, to work. And while DC-DC converters provide a stable supply voltage, the input is typically not stable due to fluctuations inherent in the energy sources themselves. Adding to the challenge is the need for the system's own power consumption to be as low as possible. So what are our options? That's the question we'll explore as we survey a variety of energy sources and circuits for harvesting their energy.

Environmental energy occurs naturally at both large and micro-scales. In microelectronic systems, micro-scale energies involve such sources as electromagnetic waves and solar energy. While the overall efficiency of these systems remains remarkably low (often less than 30%), they're still useful. Fossil fuels, on the other hand, are limited, expensive, and not at all environmentally friendly. Anticipating such a dilemma, Nikola Tesla himself famously mused over the question of capturing "all energies of the Earth," which he addressed in part with his radiant energy collector. Batteries, of course, power a great many current technologies. But they, too, present certain challenges. With


In 1901 Nikola Tesla was granted several patents for a system of radiant energy transmission and reception. The system utilized “natural media” such as the atmosphere and ground to transmit power with virtually no losses.



the advent of the Internet of Things, many sensors are deployed in difficult environments where battery replacement or recharging is a task that requires special (additional and potentially costly) features. In this context, the possibility of implementing an external source of power to charge the battery would bring welcome efficiencies. The possibility of avoiding battery replacement altogether is even more attractive, especially for wireless sensor networks with high maintenance costs (Figure 1).

Energy Sources

In short, the objective of an energy harvesting system is to convert ambient energy into electricity. There are many scientific techniques that can be used to capture the energy that is currently lost or dissipated in the form of heat, light, sound, vibration, or movement. Examples include the piezoelectric effect, which converts mechanical strain from motion, weight, vibration, and temperature changes into voltage or electric current. The pyroelectric effect converts a temperature change into electric current. And there are emergent metamaterial-based devices that are able to convert a microwave signal into electrical current. In addition to the transducer that performs the conversion, an electronic circuit is needed to manage the collected energy (Figures 2 and 3). The electrical energy obtained is very small (from about 1 µW/cm³ to 100 mW/cm³), and therefore has a working point only in low power modes


Figure 1. A block diagram of an energy harvesting system.



Energy Category             Harvested Power
Human Vibration/Motion      4 μW/cm²
Industry Vibration/Motion   100 μW/cm²
Human Temperature           25 μW/cm²
Industry Temperature        1-10 μW/cm²
Indoor Light                10 μW/cm²
Outdoor Light               10 mW/cm²
GSM/3G/4G RF                0.1 μW/cm²
Wi-Fi RF                    1 μW/cm²

(solar energy can have a working point in a high-scale system). A sensor, a microcontroller, and a wireless transceiver constitute a typical electronic load. The energy consumption is on the order of nA to µA for the first two components, and a few mA for the transceivers.
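Those figures make a quick duty-cycle budget worth sketching. With round numbers of the magnitudes just quoted (our assumptions, not the author's measurements), the average draw of a sensor node that reports once a minute lands comfortably in energy harvesting territory:

```python
SLEEP_UA = 2.0       # MCU plus sensor sleeping, microamps
RADIO_MA = 5.0       # transceiver active, milliamps ("a few mA")
TX_MS    = 20.0      # radio on-time per report, milliseconds
PERIOD_S = 60.0      # one report per minute

duty = (TX_MS / 1000.0) / PERIOD_S
avg_ua = SLEEP_UA * (1.0 - duty) + RADIO_MA * 1000.0 * duty
print(f"average draw: {avg_ua:.2f} uA")   # about 3.7 uA on these assumptions
```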

Ambient Energy

The primary sources of “free” energy are solar, mechanical, thermal, and electromagnetic. Let’s take them in turn. Light is a source of environmental energy available for both low- and high-power electronic devices; a photovoltaic system generates electricity by converting solar energy.


Figure 2. Energy values for several categories of energy harvesting.

Figure 3. Sources of energy harvesting.



The sources of mechanical energy include vibrating structures or moving objects. Vibration is the source of energy for mechanical transducers, and is characterized by two parameters: acceleration and frequency. Acceleration is directly proportional to the displacement from the equilibrium position, whereas frequency, measured in hertz, is the number of oscillations per second about the equilibrium position. Frequency analysis allows us to evaluate various frequencies and the damping effects as oscillations in a system decay after a disturbance. Many systems exhibit oscillatory behavior when they are disturbed from their position of static equilibrium—the behavior of a mass attached to a spring comes to mind. Likewise, one or more transducers of movement measure output vibration. For example, the vibrations of industrial machines typically occur at frequencies between 60 and 125 Hz. A piezoelectric sensor can convert this vibration into electrical current. Even human walking is an energy source for the production of electrical signals. The movement of fingers can actually generate a few mW; limb movement, about 10 mW; exhalation and inhalation, about 100 mW; and walking can produce several watts!

Thermal energy exploits the Seebeck effect, the direct conversion of a temperature difference into an electric voltage. Thin film thermoelectric elements provide an ideal starting point for efficient conversion to useful battery-like operating voltages. Thermoelectric generators represent the most common application of the Seebeck effect, and could be used in power plants to convert waste heat into electrical current, and also in automobiles to increase fuel efficiency.

Finally, electromagnetic energy is sourced both naturally and artificially in the atmosphere. RF energy, comprising electromagnetic wave frequencies that span around 3 kHz to 300 GHz, and including those frequencies used for communications, powers mobile phones, TV, and wireless routers. The ability to collect RF energy enables the wireless charging of

Figure 4. RF energy harvesting.




low-power devices, while also setting battery use limits. Speaking of batteries, devices without a battery can be designed to operate only at certain time intervals or when a supercapacitor or other energy storage device, such as a thin-film battery, has sufficient charge to activate the electronic circuit (Figure 4).
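How much RF power is actually there to collect? A rough estimate with the standard Friis free-space equation suggests why RF harvesting targets micro-watt loads; the transmitter power, antenna gains, and distance below are assumptions for illustration.

```python
import math

def friis_received_watts(p_tx, g_tx, g_rx, f_hz, d_m):
    """Free-space received power from the Friis transmission equation."""
    wavelength = 3e8 / f_hz
    return p_tx * g_tx * g_rx * (wavelength / (4 * math.pi * d_m)) ** 2

p = friis_received_watts(1.0, 2.0, 2.0, 915e6, 10.0)   # 1 W source, 10 m away
print(f"{p * 1e6:.1f} uW available before rectifier losses")   # roughly 27 uW
```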

An Energy Harvesting System

Conditioning circuits improve the quality of the power that is delivered to an electrical load, and play an essential role in an energy harvesting system through the optimization of various parameters such as input impedance and filtering. Because conditioning circuits operate at low power levels, they reduce losses and increase the efficiency of the collection system. The conditioning circuit also avoids the need for large-scale solutions by using a storage element to match the temporal profile of the harvested power to that of the load's demand.

The harvesting system can be modeled by a combination of linear and nonlinear circuit elements. Maximum power is transferred when the load impedance is the complex conjugate of the output impedance. Output voltage control keeps that value constant as the load varies. Switching converters maintain the output voltage without introducing large additional losses. In a conditioning system with passive circuits, such as a rectifier, the problems are concentrated at the input impedance: a fully discharged storage element presents as a short circuit. The problem is exacerbated if the capacitor has a relatively large capacity and may require many minutes to charge. Additionally, it is necessary that the load not demand much energy during this start-up phase.

Figure 5. Power savings and efficiency between LDO and switching regulators.




In energy harvesting applications, DC-DC converters play an essential role, providing various output voltages with the aim of delivering a stable voltage level to the electrical loads. Switching regulators provide an alternative to linear regulators (LDOs), generating an output voltage that is constant and independent of the input voltage and the output current (Figure 5).
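The efficiency gap behind Figure 5 is easy to reproduce on paper. Ignoring quiescent current, an LDO's best-case efficiency is simply Vout/Vin, while a small switching (buck) converter typically reaches around 90 percent; the numbers below are assumed for illustration.

```python
VIN, VOUT = 5.0, 3.3                # volts in and out

ldo_eff = VOUT / VIN                # the headroom (VIN - VOUT) is burned as heat
buck_eff = 0.90                     # typical small buck converter, assumed figure

print(f"LDO efficiency:  {ldo_eff:.0%}")    # 66%
print(f"Buck efficiency: {buck_eff:.0%}")   # 90%
```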

Artificial Energy Sources

All electronic systems waste energy. Your cellphone is no exception. So why not charge your phone by exploiting electromagnetic waves—especially when their intensity is at its highest during calls and when receiving data? Better yet, why not detect and capture the energy that the universe sends to us in the form of cosmic rays to power wearable systems? There are a great many energy sources, many of which have emerged from human activity as by-products of industrial and technological development. These "artificial" energy sources are naturals for energy harvesting via the vibrations or temperature gradients produced by machines and engines. The appeal of collecting RF energy lies in the fact that it is free (Figure 4). Considering how many wireless access points one can find in a city center, imagine all the available RF sources we could use to recharge, for example, our smartphones. Moreover, RF energy is already being radiated by transmitters around the world. The rectenna is the transducer of an RF energy harvesting system: it converts the RF signal incident on the antenna into a DC signal to be amplified and accumulated by the power management circuit. In order to transfer maximum power to the load, the antenna impedance must be the complex conjugate of the load impedance (conjugate

Figure 6: Transformer matching network. The antenna is modeled as a voltage generator Voc in series with its radiation resistance Rrad, while the load is modeled as a capacitance CL in parallel with a resistance RL.



matching). With conjugate matching, the reactive parts cancel, leaving purely resistive impedances. Impedance matching is generally achieved with reactive components (inductors and capacitors) that, ideally, dissipate no power. Different matching network configurations are possible, the most common being the transformer, the shunt inductance, and the LC network (Figure 6). The rectifier circuits provide a DC output voltage to the corresponding load, which, in the case of a radio frequency energy harvesting system, is the input impedance of the DC-DC converter (Figure 7). Low-pass filters are generally used to keep the harmonics generated by the rectifier's non-linearity from flowing back to the antenna, since such harmonics can reduce the peak amplitude of the output voltage and, consequently, reduce efficiency. The use of these filters, however, requires two matching circuits—one between the antenna and the filter, the other between the filter and the rectifier—increasing the complexity of the circuit.
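As an example of the LC option, the following C++ sketch computes component values for a simple low-pass L-network that steps a 50-ohm antenna up to a higher rectifier input resistance. The frequency and both resistances are assumptions chosen purely for illustration.

#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979;
    const double f  = 915e6;   // design frequency, Hz (assumed ISM band)
    const double rs = 50.0;    // antenna radiation resistance, ohms (assumed)
    const double rl = 500.0;   // rectifier input resistance, ohms (assumed)

    // Low-pass L-network stepping rs up to rl: a series inductor on the
    // low-resistance side and a shunt capacitor across the high side.
    double q  = std::sqrt(rl / rs - 1.0);
    double xs = q * rs;        // required series reactance
    double xp = rl / q;        // required shunt reactance
    double w  = 2.0 * pi * f;

    std::printf("Q = %.2f, series L = %.1f nH, shunt C = %.2f pF\n",
                q, xs / w * 1e9, 1.0 / (w * xp) * 1e12);
    return 0;
}

For these assumed values the network needs roughly 26 nH and 1 pF—component sizes small enough to explain why reactive matching adds little loss.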

Power Management Circuits

Power management circuits for an energy harvesting system are designed to enable the conversion and accumulation of the electromagnetic energy recovered by a receiving antenna. In this way, it is possible to power an electronic device while freeing it from the power distribution network, or even from reliance on batteries. Not only is this ultimately economical, it opens the possibility of free power with no maintenance required. A DC-DC converter is a circuit that converts an unregulated DC voltage level to another, regulated DC voltage, either higher or lower. Linear converters have the advantage of being easy to implement, but they have low efficiency and dissipate considerable power, and can therefore reach significant temperatures; they are applied in devices where the power levels involved are low and implementation cost matters. Switching converters, on the other hand, store energy temporarily and then release it to the output at a


Figure 7: Diode bridge rectifier. This rectifier, also known as a full-wave rectifier because it rectifies both the positive and negative half-waves of the input signal, yields a higher average output voltage and a lower ripple than the single-diode scheme.

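The ripple advantage in the Figure 7 caption can be quantified with the usual capacitor-discharge approximation—Vripple ≈ I/(f·C) for a half-wave rectifier and I/(2·f·C) for a full-wave one, since the capacitor is topped up twice per cycle instead of once. The numbers in this C++ sketch are generic illustrative values, not taken from the article.

#include <cstdio>

int main() {
    const double f = 1000.0;   // input frequency, Hz (illustrative)
    const double i = 1e-3;     // DC load current, A (illustrative)
    const double c = 10e-6;    // smoothing capacitor, F (illustrative)

    // The capacitor discharges into the load between charging peaks:
    // once per cycle for half-wave, twice per cycle for full-wave.
    double halfWave = i / (f * c);
    double fullWave = i / (2.0 * f * c);
    std::printf("half-wave ripple: %.0f mV\n", halfWave * 1e3);
    std::printf("full-wave ripple: %.0f mV\n", fullWave * 1e3);
    return 0;
}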



different voltage level. These converters achieve much higher efficiencies and lower power dissipation (Figure 8). A supercapacitor storage system must be sized appropriately for long life and high efficiency. To this point, the parameter known as Depth of Discharge (DoD) expresses how much energy has been withdrawn from the supercapacitor as a percentage of its total capacity. Typically, the DoD of a supercapacitor should not exceed 20% for optimal operation. Supercapacitors may be classified according to the material used for the electrodes (carbon, metal oxides, or polymers) and according to the type of electrolyte (organic, aqueous, or solid); the choice of electrolyte in turn conditions the dimensions of the supercapacitor.

Figure 8: Buck converter electric diagram. This type of converter steps the input voltage Vi down to a lower output voltage Vo using a PWM-driven switch, a diode D, an inductor L, and an output capacitor C feeding the load RL.
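A minimal sketch of the buck relationship in Figure 8, assuming ideal continuous-conduction operation (Vo = D·Vi); the switching frequency and inductance are assumptions, as neither is specified in the article.

#include <cstdio>

int main() {
    const double vi = 3.3, vo = 1.8;     // input and output voltages (illustrative)
    const double fsw = 500e3;            // switching frequency, Hz (assumed)
    const double l = 10e-6;              // inductor, H (assumed)

    double d = vo / vi;                              // ideal duty cycle: Vo = D * Vi
    double ripple = vo * (1.0 - d) / (l * fsw);      // peak-to-peak inductor current ripple
    std::printf("duty cycle D = %.2f, inductor ripple = %.0f mA\n", d, ripple * 1e3);
    return 0;
}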

Supercapacitors

Recent studies in the field of supercapacitors (SCs), also known as ultracapacitors or electrochemical capacitors, have produced excellent alternatives for energy storage. Supercapacitors withstand a far greater number of discharge cycles than conventional batteries and can operate across a wide temperature range. The disadvantages are low energy density and high self-discharge. In general, to maximize the efficiency of energy storage, the total leakage power is kept low. To maximize the energy efficiency at the load, the residual energy is minimized by placing several supercapacitors in series, raising the voltage above the minimum input voltage of the DC-DC converter. Useful life is the main advantage of this technology, with more than 10,000


cycles—far superior to the best batteries. Size for size, the maximum deliverable power is also higher than that of current batteries, which makes supercapacitors well suited to the demands of modern technology; it's for this reason that they are widely used in hybrid systems. In a supercapacitor, the dielectric is provided by an organic liquid electrolyte absorbed in the separation zone between the plates, which limits the sustainable voltage to between 3 V and 6 V (Figure 9).

Figure 9. Internal structure of a supercapacitor.
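Here is a small C++ sketch relating the 20% DoD guideline above to usable energy and terminal voltage; the capacitance and charge voltage are illustrative values, not from the article.

#include <cmath>
#include <cstdio>

int main() {
    const double c = 1.0;       // supercapacitor, F (illustrative)
    const double vmax = 5.0;    // fully charged voltage, V (illustrative)
    const double dod = 0.20;    // recommended maximum depth of discharge

    double eFull = 0.5 * c * vmax * vmax;                // total stored energy, J
    double eUsed = dod * eFull;                          // energy withdrawn at 20% DoD
    double vmin = std::sqrt(vmax * vmax * (1.0 - dod));  // terminal voltage after that draw

    std::printf("stored %.1f J, usable %.1f J, voltage sags %.2f V -> %.2f V\n",
                eFull, eUsed, vmax, vmin);
    return 0;
}

Note that because energy scales with the square of voltage, drawing 20% of the energy drops the terminal voltage by only about 10%—one reason a modest DoD keeps the downstream DC-DC converter in its comfortable input range.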

Low-power Medical Devices

The objective here is to achieve efficient energy harvesting while reducing the size of the battery. The classic example is the pacemaker (Figure 10). A vibrational energy harvesting system, for example, would allow a reduction in the size of the pacemaker while also extending battery life. Currently, a typical pacemaker measures about 40 mm x 50 mm x 5 mm, and the battery consumes about 60% of this area! A second objective, then, is to reduce the area further, setting a maximum size for the energy harvesting hardware of, say, 25 mm x 25 mm x 5 mm, with power consumption on the order of 10 µW. The batteries and pacemaker circuits are encapsulated in a titanium case, and a biocompatible material with a sealed casing ensures that there is no contact between the inside of the body and the pacemaker's batteries or circuits. (Figure 10 shows the general structure of a pacemaker: a harvested power budget of 10-70 µW drawn from the roughly 1 °C temperature difference between the body's core at 37 °C and the skin surface at 36 °C.)


In addition, a small fraction of the human body's heat flux can be harvested: the heat flux can be converted into electrical energy by a thermoelectric generator (TEG).

In summary, energy harvesting makes use of ambient energy to power small electronic devices such as wireless sensors, microcontrollers, and displays (Figures 11 and 12). Typical free sources are sunlight and artificial sources such as vibration from heat engines or heat from the human body. Energy transducers such as solar cells, thermogenerators, and piezoelectrics convert this energy into electricity. The goal of any energy harvesting system is to replace batteries by extending the charging interval of the storage element. A first field of application is the automation market, with self-powered switches. Further applications include monitoring systems for large industrial plants and structural monitoring of smart buildings. Another promising market is the consumer area, where clothing that integrates energy transducers—solar cells, TEGs, or RF harvesters—could recharge consumer products such as mobile phones or audio players. Piezoelectric transducers generate electricity wherever there is vibration, which makes them ideal candidates for detecting the noise of motor bearings or vibration in aircraft wings. Photovoltaic cells are the most widely used energy harvesting source, but they, too, are not very efficient: the best micro-crystalline solar cells—with a maximum theoretical efficiency of 30%—achieve a practical efficiency of about 20%.


Figure 11. Block diagram of a WSN (wireless sensor network).


Figure 12. Internal general circuit of an energy harvesting system: a start-up circuit, control circuit, and load circuit, with power flows Pgen, Pin, Pout, Pstore, Pload, and Pquiet at a supply of VDD = 1.5 V.

The technology behind energy harvesting is made possible by careful analysis and design of the power management factors that simultaneously reduce the power consumption of the electronic systems themselves. PE

Maurizio Di Paolo Emilio, Ph.D.

Maurizio Di Paolo Emilio, Ph.D., Physics, is a telecommunication engineer who has worked on international projects in the field of gravitational wave research. Working as a software/hardware developer on data acquisition systems, he was a designer of the thermal compensation system (TCS) for the optical system used in the Virgo/LIGO experiment for the detection of gravitational waves. Di Paolo Emilio is the author of numerous technical and scientific publications in electronic design, physics, PCB, IT, and embedded systems. His book, Microelectronic Circuit Design for Energy Harvesting Systems, provided the basis for this article. The book describes the design of microelectronic circuits for energy harvesting, broadband energy conversion, and new methods and technologies for energy conversion. Dr. Di Paolo Emilio also discusses the design of power management circuits and the implementation of voltage regulators. Coverage includes advanced methods in low- and high-power electronics, as well as principles of micro-scale design based on piezoelectric, electromagnetic, and thermoelectric technologies with control and conditioning circuit design.


BUY


Energy Harvesting

SolePower Unplugged, battery-free, and powered exclusively by human motion

SolePower creates self-charging wearables that capture wasted energy from human motion. SolePower smart work boots—SmartBoots—collect motion and location data, providing industrial workforces with actionable insight into what is happening on a worksite.

Every time a user steps down, the power generated charges a suite of sensors. Data is sent to a database in the cloud where it is presented on a simple UI and dashboard used to improve workforce safety and productivity.




SolePower founders Hahna Alexander and Cindy Kerr have both been featured in Forbes' 30 Under 30 in Energy and recognized by the White House for creating technology with global impact. What follows is their walkthrough of the technology behind the product.

The goal of SolePower SmartBoots is to increase job site visibility, enable workflow efficiencies, and reduce the risk of hazards. At the core is an Army-tested, patented kinetic charger that generates power with every step. We are partnering with safety boot manufacturers and embedding the kinetic charger and electronics into the boots. Industrial, firefighting, and military applications can all be addressed with a similar hardware platform. The industrial SmartBoot helps workers be safer and more efficient by collecting location and motion data and sending it to the cloud to be analyzed and presented. The charger will also be used to power lights that help firefighters see colleagues in smoke and fire conditions, reducing anxiety and overall oxygen consumption. An energy harvester is desirable here because it avoids the complexity of embedding temperature-sensitive batteries. The same technology will be used in military boots, with an increased focus on providing reliable, power-independent physiological and location monitoring.

Operating principles

The key innovation of the SmartBoots is a kinetic charger that uses the compression in a step to spin an embedded generator. The charger generates power from the heel strike of each step; unlike other kinetic charging locations on the body, the shoe lets the charger capitalize on gravity. The charger is 100x more powerful than piezoelectric (materials-based) technology. The kinetic chargers are paired with a variety of sensors and embedded within durable work boots. The sensors collect data and send it to the cloud—this is what makes the boots "smart." SmartBoots open up a


SmartBoot subsystems: Lighting (messages, illumination); User Data (falls, fatigue); Sensing (temperature, voltage); Location (GPS, RFID); Communication (2-way coms, alerts, cloud access, S.O.S.); Kinetic Charger.



host of solutions, such as being able to measure the location of workers and environmental conditions on a site. The insights from this data can ensure employees are accounted for in an emergency, and detect falls and accidents quickly.

The underlying/enabling technologies

The foot-based kinetic charger uses a geared mechanical system: the downward linear motion of a step is converted to rotational motion that spins a small permanent-magnet generator. The charger is embedded in the heel of a work boot. The entire mechanism is smaller than a deck of cards, yet it must take 3x a user's body weight with every step, spins at over 5,000 RPM, and must last the full lifetime of a work boot—millions of cycles. It's a mechanical engineering challenge. Fortunately, we had the help of a U.S. Army R&D division to support development over multiple iterations.

A closer look at the electronics

SolePower is designing the system with power consumption as the key constraint. Many industrial wearables are limited by battery life and the associated ease-of-use issues. Our objective is to develop a system that's fully self-sustained. During every compression, the kinetic charger generates enough power to sense and communicate all the data collected about the user. The charger currently outputs between

Kinetic charger output: voltage (V) versus time (s) over several consecutive steps.


0.1 and 0.2 Joules per step (roughly 100 mW if the user is walking at ~3 mph). The figure on the previous page shows an example of the power output. All of the components are selected with power consumption (and size) in mind, starting with the microcontroller. We're using a low-power ARM Cortex-M0+ MCU (STM32L073RZ). The series offers high efficiency across a comparatively wide performance range and has ample peripheral options. It consumes a mere 93 µA/MHz in run mode, drops to 4.5 µA in low-power sleep mode (among a variety of other low-power modes), and can be configured to run at clock speeds up to 32 MHz. We also have thermocouples for monitoring body temperature (as a health metric) and outside temperature (certain job sites have maximum-temperature regulations). We have a 9-axis inertial measurement unit (LSM9DS1) for motion sensing, which can be used for everything from dead-reckoning to fall, slip, and trauma detection. There's also an altimeter, and a port left open for an insole-based pressure pad. We're using an OriginGPS module, which is extremely low power with industry-leading time-to-first-fix performance and <1.5 m accuracy—sufficient for our initial applications. The system can be configured to communicate data over Wi-Fi or sub-GHz radio (868 MHz or 915 MHz modules are available). Unfortunately, RF signals don't like human tissue and other materials found in industrial environments, which complicates some of our manufacturing/assembly and infrastructure choices. BLE is also possible in the future, but is not currently on our platform. We're implementing a range of sensors because we've heard a wide range of needs from potential customers and current partners. Our goal is to incorporate the functionality of a variety of other stand-alone wearables into a single, compact, self-charged platform. Sensor data is presented to end users on a dashboard. The simple user interface is designed for use by industrial workers in complex, dynamic environments, with a focus on quick access to information that improves job site productivity and safety. The heat map data is real, taken from a crew working on a new building on Carnegie Mellon's campus. Learn more at www.solepowertech.com. PE
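The power figures above are easy to sanity-check. This C++ sketch converts joules per step into average power and compares it with the STM32L073's quoted run-mode current; the cadence and supply voltage are assumptions, not SolePower specifications.

#include <cstdio>

int main() {
    double ePerStep = 0.15;   // J per heel strike, mid-range of the quoted 0.1-0.2 J
    double cadence  = 1.0;    // heel strikes per second for one boot at ~3 mph (assumed)
    double pHarvest = ePerStep * cadence;   // average harvested power, W

    double iMcu = 93e-6 * 32.0;             // 93 uA/MHz at the 32 MHz maximum clock
    double pMcu = iMcu * 3.0;               // at an assumed 3.0 V rail
    std::printf("harvested ~%.0f mW, MCU run mode ~%.1f mW\n",
                pHarvest * 1e3, pMcu * 1e3);
    return 0;
}

Under these assumptions the boot harvests roughly 150 mW while the MCU needs only a few milliwatts flat out, which is why a fully self-sustained sensing platform is plausible.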




Wi-Fi

BY JOHN SCHROETER

Thanks to HitchHike, a low-power tag reflects existing 802.11b transmissions from a commodity Wi-Fi transmitter, and the backscattered signals can then be decoded as a standard Wi-Fi packet by a commodity 802.11b receiver.

Wi-Fi is well-known to be power-hungry: commodity Wi-Fi radios consume several hundred milliwatts. Because of this high power consumption, it is hard to use Wi-Fi continuously for an internet connection. One research group at Stanford University set out to address this problem by making Wi-Fi transmission not just power-efficient, but extremely so. The new radio they designed, called HitchHike,



consumes a mere 33 microwatts—10,000x lower than existing Wi-Fi radios. Such a "Wi-Fi-ish" radio enables Wi-Fi transmission on a battery-free device, and therefore has the potential to benefit millions (if not billions) of devices across the Internet of Things. In fact, HitchHike could be powered by energy harvesting alone. HitchHike achieves such low-power Wi-Fi transmission by leveraging a technology called backscatter. The key idea of backscatter is signal reflection: when a device wants to transmit data over the air, instead of directly generating the wireless signal, it reflects a wireless signal produced by another entity. Since signal reflection consumes three to four orders of magnitude less power than direct wireless transmission, backscatter enables data transmission at extremely low power. This technology is used in commercial products like passive RFID. Traditional backscatter systems like RFID, however, require specialized hardware both to generate the excitation RF signals that backscatter radios reflect and to decode the backscattered signals. HitchHike enables this capability when talking to a commercial Wi-Fi access point, with no additional hardware. While recent research such as Wi-Fi backscatter, BackFi, and Passive Wi-Fi

A backscatter tag communicates by modulating the scattered electromagnetic wave incident from the transmitter. The scattered wave is modulated by changing the electrical impedance presented to the tag antenna. A passive backscatter tag receives the power needed to operate from the wave incident from the transmitter.



has reduced the need for specialized hardware, these systems are still not self-sufficient. Passive Wi-Fi, for example, can decode backscattered signals using standard Wi-Fi radios, but it still requires a dedicated continuous-wave signal generator as the excitation RF source. BackFi needs a proprietary full-duplex hardware add-on to Wi-Fi radios to enable backscatter communication. Inter-Technology Backscatter enables backscatter communication from a commercial Bluetooth radio to a commercial Wi-Fi radio; despite its novelty, it does not enable backscatter communication among Wi-Fi radios. Consequently, a backscatter system that can be deployed using the commodity Wi-Fi radios found in access points, smartphones, watches, and tablets did not exist. HitchHike is, therefore, the first backscatter communication system that uses only commodity 802.11b Wi-Fi devices both for generating the RF excitation signal and for decoding the backscattered signal. The enabling technique manifested in HitchHike is called codeword translation. Pengyu Zhang at Stanford observed that a Wi-Fi signal is produced using a fixed set of codewords—a Wi-Fi signal is simply determined by a specific binary sequence, a codeword in a codebook—and that one codeword in the codebook can be transformed into another by modifying its phase. For example, 1 Mbps 802.11b transmission uses only two codewords, code0 and code1. Data zero and one are encoded as code0 and code1, respectively. The only difference between the two codewords is a 180° phase offset, which indicates whether a zero or a one is transmitted. Inspired by this observation, Zhang designed a special backscatter tag, which

Prototype circuit block diagram: a Wi-Fi transmitter with TX and RX basebands, and the backscatter tag comprising an LNA, amplifier logic, and an RF harvester that reflects the incoming Wi-Fi signal.



is able to reflect a Wi-Fi signal from one Wi-Fi transmitter to a Wi-Fi access point. During the signal reflection, the tag embeds its information by performing or not performing codeword translation. For example, when the tag wants to transmit a one, the reflected codeword differs from the codeword in the excitation Wi-Fi signal; when the tag wants to transmit a zero, the reflected codeword is the same as the codeword in the excitation Wi-Fi signal. HitchHike's deployment is best explained via the example in the illustration below. The excitation device is a smartphone with a standard Wi-Fi radio. The smartphone transmits an 802.11b packet on Channel 1 to the first AP, to which it is connected. To backscatter, the tag receives the Wi-Fi packet, frequency-shifts it to Channel 6, modulates its information onto it, and then reflects the Wi-Fi signal. The second AP, which is tuned to listen on Channel 6, then receives and decodes the backscatter packet as a standard Wi-Fi packet. Pretty simple. This channel-shifting scheme eliminates interference between the original signal and the backscattered data stream coming from the HitchHike tag. The most important part of the HitchHike technology is that the reflected signal is still a valid Wi-Fi signal; therefore, a commodity Wi-Fi access point can decode the reflected signal and extract the tag information. Secondly, HitchHike does not waste spectrum: it piggybacks backscattered signals on Wi-Fi packets that are already being used for productive communication. Hence, HitchHike can be efficiently deployed with current Wi-Fi infrastructure and unlicensed spectrum. The video, linked on the following page, demonstrates the use of an off-the-shelf Intel Wi-Fi transmitter (black box on right) with an Apple MacBook Pro (left) as a Wi-Fi receiver. The object in the middle is the HitchHike device prototype, which is connected to an electrocardiogram (heart rate) sensor.
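Codeword translation reduces, conceptually, to an XOR between the tag's data and the codeword stream it reflects. The C++ sketch below models the two 1 Mbps codewords as single bits (they differ only by a 180° phase flip); it is a toy illustration of the idea, not the 802.11b physical layer, and the bit patterns are invented.

#include <cstdio>

int main() {
    // 1 Mbps 802.11b uses two codewords, code0 and code1, that differ only by
    // a 180-degree phase offset. Modeling each codeword as one bit, reflecting
    // with a phase flip maps one codeword to the other, so the tag's data
    // simply XORs onto the bit stream of the passing packet.
    int wifi[8] = {1, 0, 1, 1, 0, 0, 1, 0};  // codewords in the excitation packet (made up)
    int tag[8]  = {0, 1, 1, 0, 1, 0, 0, 1};  // tag data: 1 = translate, 0 = reflect as-is

    std::printf("reflected: ");
    for (int i = 0; i < 8; ++i) std::printf("%d", wifi[i] ^ tag[i]);

    // The receiver hears the translated packet on channel 6 and the original
    // on channel 1; XORing the two recovers the tag's data.
    std::printf("\nrecovered: ");
    for (int i = 0; i < 8; ++i) std::printf("%d", (wifi[i] ^ tag[i]) ^ wifi[i]);
    std::printf("\n");
    return 0;
}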


HitchHike enables backscatter communication between commodity 802.11b Wi-Fi radios.



HitchHike samples the electrocardiogram data and piggybacks (hitchhikes) it on the Wi-Fi signal from the Intel Wi-Fi router. The Apple laptop receives, extracts, and displays the piggybacked signal in real time. Zhang further demonstrated that HitchHike can be used to transmit biometric data at extremely low power. In this demo, Zhang connected an ECG (electrocardiogram) sensor to the HitchHike radio. The ECG sensing platform samples the ECG signal and backscatters the ECG data from an Intel 5300 Wi-Fi transmitter to an Apple MacBook Pro. The communication distance between the tag and the receiver can be as far as 40 m, with a data rate of nearly 300 kbps at 10 m. This novel technology has the potential to enable, if not transform, myriad applications in the Internet of Things domain, and certainly in mobile health sensing. To learn more about it, download the paper. PE

About Pengyu Zhang, PhD Pengyu Zhang is a postdoc researcher at Stanford. His research focuses on mobile computing and low-power wireless sensing. He obtained his Ph.D. from UMass Amherst and his bachelor’s from Tsinghua University. He received the Best Paper runner-up award at MobiCom 2014, the honorable mention award at UbiComp 2016, the 2016 ACM SIGMOBILE Doctoral Dissertation Award, and the 2016 Outstanding Doctoral Dissertation Award at UMass Amherst.


Watch the video now!


Wi-Fi

BY SHYAM GOLLAKOTA

Can we enable everyday objects in outdoor environments to communicate with cars and smartphones, without worrying about power? Such a capability could enable transformative visions such as connected cities and smart fabrics that promise to change the way we interact with objects around us. For instance, bus stop posters and street signs could broadcast digital




content about local attractions or advertisements directly to a user’s car or smartphone. Posters advertising local artists could broadcast clips of their music or links to purchase tickets for upcoming shows. A street sign could broadcast information about the name of an intersection, or when it is safe to cross a street to improve accessibility for the disabled. Such ubiquitous low-power connectivity would also enable smart fabric applications—smart clothing with sensors integrated into the fabric itself could monitor a runner’s gait or vital signs and directly transmit the information to their phone. While recent efforts on backscatter communication dramatically reduce the power requirements for wireless transmissions, they are unsuitable for outdoor environments. Specifically, existing approaches either use custom transmissions from RFID readers or backscatter ambient Wi-Fi and TV transmissions. RFID-based approaches are expensive in outdoor environments given the cost of deploying the reader infrastructure. Likewise, while Wi-Fi backscatter is useful indoors, it is unsuitable for outdoor environments. Finally, TV signals are available outdoors, but smartphones, as well as most cars, do not have TV receivers and hence cannot decode the backscattered signals. Taking a step back, the requirements for our target applications are fourfold: 1) The ambient signals we hope to backscatter must be ubiquitous in

Bus stop posters and street signs could broadcast digital content about local attractions or advertisements directly to a user’s car or smartphone. Image courtesy of University of Washington.




outdoor environments, 2) devices such as smartphones and cars must have the receiver hardware to decode our target ambient signals, 3) it should be legal to backscatter in the desired frequencies without a license, which precludes cellular transmissions, and 4) in order to retrieve the backscattered data, we should ideally have a software-defined radio-like capability to process the raw incoming signal without any additional hardware.

FM Fits the Bill

Our key contribution is the observation that FM radio satisfies the above constraints. Broadcast FM radio infrastructure already exists in cities around the world. These FM radio towers transmit at high power—several hundred kilowatts—which provides an ambient signal source that can be used for backscatter communication. Additionally, FM radio receivers are included in the LTE and Wi-Fi chipsets of almost every smartphone and have recently been activated on Android devices in the United States. Further, the FCC provides an exemption for low-power transmitters to operate in the FM bands without requiring a license. Finally, unlike commercial Wi-Fi, Bluetooth, and cellular chipsets that provide only packet-level access, FM radios provide access to the raw audio decoded by the receiver. These raw audio signals can be used in lieu of a software-defined radio to extract the backscattered data. Building on this, we transform everyday objects into FM radio stations. Specifically, we design the first system that uses FM signals as an RF source for backscatter. We show that the resulting transmissions can be decoded on any FM receiver, including those in cars and smartphones. Achieving this is challenging for two key reasons: ■ Unlike software radios that give raw RF samples, FM receivers output only the demodulated audio. This complicates decoding since the backscatter operation is performed on the RF signals corresponding to the FM transmissions, while an FM receiver outputs only the demodulated audio. Thus, the backscatter operation has to be designed to be compatible with the FM demodulator.

■ FM stations broadcast audio that ranges from news channels with predominantly human speech to music channels with a richer set of audio. In addition, they can broadcast either a single stream (mono mode) or two




different audio streams (stereo mode) to play on the left and right speakers. Ideally, the backscatter modulation should operate efficiently with all these FM modes and audio characteristics. To address these challenges, we leverage the structure of FM radio signals. At a high level, we introduce a modulation technique that transforms backscatter, which is a multiplication operation on RF signals, into an addition operation on the audio signal output by FM receivers. This allows us to embed audio and data information in the underlying FM audio signals. Specifically, we use backscatter to synthesize baseband transmissions that imitate the structure of FM signals, which makes it compatible with the FM demodulator. Building on this, we demonstrate three key capabilities: Overlay backscatter • We overlay arbitrary audio on ambient FM signals to create a composite audio signal that can be heard using any FM receiver, without any additional processing. We also design a modulation technique that overlays digital data which can be decoded on FM receivers with processing capabilities, e.g., smartphones.

Stereo backscatter • A number of FM stations, while operating in the stereo mode, do not effectively utilize the stereo stream. We backscatter data and audio on these underutilized stereo streams to achieve a low-interference communication link. Taking it a step further, we can also trick FM receivers into decoding mono FM signals in the stereo mode by inserting the pilot signal that indicates a stereo transmission. This allows us to backscatter information in the interference-free stereo stream. ■

Cooperative backscatter • Finally, we show that using cooperation between two smartphones from users in the vicinity of the backscattering objects, we can imitate a MIMO system that cancels out the audio in the underlying ambient FM transmission. This allows us to decode the backscatter transmissions without any interference. ■

To evaluate our design, we first perform a survey of the FM signal strengths in a major metropolitan city and identify unused spectrum in the FM band, which we then use in our backscatter system. We implemented a prototype FM backscatter device using off-the-shelf components and used it to backscatter data and arbitrary audio directly to a Moto G1 smartphone




and a 2010 Honda CRV’s FM receiver. We compared the trade-offs for the three backscatter techniques described above and achieve data rates of up to 3.2 kbps and ranges of 5–60 ft. Finally, we designed and simulated an integrated circuit that backscatters audio signals, showing that, if fabricated, consumes only 11.07 µW.

Survey of FM Radio Signals

Public and commercial FM radio broadcasts are a standard in urban centers around the world and provide a source of ambient RF signals in these environments. In order to cover a wide area and allow for operation despite complex multipath from structures and terrain, FM radio stations broadcast relatively high-power signals; in the United States, FM stations produce an effective radiated power of up to 100 kW. We surveyed the signal strength of FM radio transmissions across Seattle, Washington. We drove through the city and measured the power of ambient FM radio signals using a software-defined radio (SDR, USRP E310) connected to a quarter-wavelength monopole antenna (Diamond SRH789). Since there are multiple FM radio stations in most U.S. cities, we recorded signals across the full 88–108 MHz FM spectrum and identified the FM station with the maximum power at each measurement location. We calibrated the raw SDR values to obtain power measurements in dBm using reference power measurements performed with a spectrum analyzer (Tektronix MDO4054B-3). We divided the surveyed area into 0.8 mi × 0.8 mi grid squares and determined the median power in each, for a total of 69 measurements.

Structure of FM Radio Transmissions

We leverage the structure of FM signals to embed audio and data. In this section, we outline the background on FM radio necessary to understand our design.

■ Audio Baseband Encoding • Figure 1 illustrates the baseband audio signal transmitted by a typical FM station. The primary component is a mono audio stream, which is an amplitude-modulated audio signal between 30 Hz and 15 kHz. With the advent of stereo speakers with separate left and right audio channels, FM stations incorporated a stereo stream transmitted in the 23 to 53 kHz range. To maintain backward compatibility, the mono




Figure 1. Structure of FM radio transmissions: baseband audio signals transmitted in stereo FM broadcasts. The mono audio (L+R) occupies the lowest frequencies, a pilot tone sits at 19 kHz, the stereo audio (L–R) is centered on 38 kHz, and RDS data is centered on 57 kHz (frequency axis in Hz).

audio stream encodes the sum of the left and right audio channels (L+R), and the stereo stream encodes their difference (L-R). Mono receivers decode only the mono stream, while stereo receivers process the mono and stereo streams to separate the left (L) and right (R) audio channels. The FM transmitter uses a 19 kHz pilot tone to convey the presence of a stereo stream: in the absence of the pilot signal, a stereo receiver decodes the incoming transmission in mono mode, and uses stereo mode only when the pilot signal is present. In addition to the mono and stereo audio streams, the transmitter can also encode radio broadcast data system (RDS) messages—program information, time, and other data—sent between 56 and 58 kHz.

■ RF Encoding • As the name suggests, FM radio uses changes in frequency to encode data. Unlike packet-based radio systems such as Bluetooth and Wi-Fi, analog FM radio transmissions are continuous in nature. An FM radio station can operate on one of the 100 FM channels from 88.1 to 108.1 MHz, each separated by 200 kHz. Specifically, FM radio signals are broadcast at the carrier frequency fc in one of these FM bands, and information at each time instant is encoded by a deviation from fc.

Backscattering FM Radio

Backscattering FM radio transmissions and decoding them on mobile devices is challenging because FM receivers only output the decoded audio, while the backscatter operation is performed on the RF signals. To address




this, we show that by exploiting the FM signal structure, we can transform backscatter, which performs a multiplication operation in the RF domain, into an addition operation in the audio domain.

FM BACKSCATTER CAPABILITIES

In addition to overlay backscatter, where the backscattered audio data is simply overlaid on top of the ambient FM signals, there are two additional backscatter techniques: stereo backscatter and cooperative backscatter. Stereo Backscatter • We consider two scenarios: 1) a mono radio station that does not transmit a stereo stream, and 2) a stereo station that broadcasts news information.

1 ■ Mono to Stereo • While many commercial FM radio stations broadcast stereo audio, some stations broadcast only a mono audio stream. In this case, all the frequencies corresponding to the stereo stream (15–58 kHz in Figure 1) are unoccupied, so they can be used to backscatter audio or data without interference from the audio in the ambient FM transmission. Utilizing these frequencies, however, presents two technical challenges: the FM receiver must be in stereo mode to decode the stereo stream, and FM receivers do not output the stereo stream itself, only the left and right audio channels (L and R). To address the first challenge, we note that FM uses the 19 kHz pilot signal shown in Figure 1 to indicate the presence of a stereo stream; thus, in addition to backscattering data, we also backscatter a 19 kHz pilot signal. To address the second challenge, we note that to recover our stereo backscatter signal, all we have to do is compute the difference between the left (L) and right (R) audio streams (a sketch of this recovery appears after the next section). This allows us to send data or audio in the unoccupied stereo stream of a mono FM transmission. 2 ■ Stereo backscatter on news • While many FM stations transmit in stereo mode, i.e., with the 19 kHz pilot tone of Figure 1, in the case of news and talk radio stations the energy in the stereo stream is often low, because the same human speech signal is played on both the left and right speakers. We verified this empirically by measuring the stereo signal from four different radio stations. We captured the audio signals from these




stations for a duration of 24 hours, computed the average power in the stereo stream, and compared it with the average power in the 16–18 kHz band—the empty frequencies in Figure 1. The signal plots confirmed that in the case of news and talk radio stations, the stereo channel has very low energy. Based on this observation, we can backscatter data/audio in the stereo stream with significantly less interference from the underlying FM signals. However, since the underlying stereo FM signals already carry the 19 kHz pilot, we do not backscatter the pilot tone. Cooperative Backscatter • Consider a scenario where two users are in the vicinity of a backscattering object, e.g., an advertisement at a bus stop. The phones can share the received FM audio signals through either Wi-Fi Direct or Bluetooth and create a MIMO system that can be used to cancel the ambient FM signal and decode the backscattered signal. Specifically, we set the phones to two different FM bands: the original band of the ambient FM signals (fc) and the FM band of the backscattered signals (fc + fback).
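As promised above, here is a minimal C++ sketch of the left-minus-right recovery used for stereo backscatter. The sample values are invented; the point is only that the receiver's L and R outputs differ by exactly the signal placed in the stereo (L−R) stream.

#include <cstdio>

int main() {
    // An FM receiver outputs left/right audio reconstructed from the mono (M = L+R)
    // and stereo (S = L-R) streams: L = (M + S)/2 and R = (M - S)/2. Backscattered
    // symbols placed in S therefore reappear as the difference L - R.
    double mono[4] = {0.5, -0.2, 0.1, 0.3};  // ambient program audio (invented samples)
    double data[4] = {1.0, -1.0, 1.0, 1.0};  // backscattered symbols in the stereo stream

    for (int i = 0; i < 4; ++i) {
        double left  = 0.5 * (mono[i] + data[i]);
        double right = 0.5 * (mono[i] - data[i]);
        std::printf("L - R = %+.1f\n", left - right);  // equals the backscattered symbol
    }
    return 0;
}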

Data Encoding with Backscatter

We encode data using the audio frequencies we can transmit via backscatter. The key challenge is to achieve high data rates while keeping the modulation simple enough for a low-power design. High-data-rate techniques like OFDM have high computational complexity (performing FFTs) as well as a high peak-to-average power ratio, which either clips the high-amplitude samples or forces the signal to be scaled down, limiting communication range. Instead, we use a form of FSK modulation in combination with a computationally simple frequency division multiplexing algorithm.

■ Data Encoding Process • At a high level, the overall data rate of our system depends on both the symbol rate and the number of bits encoded per symbol. We modify these two parameters to achieve three different data rates: a low-rate scheme for low-SNR scenarios as well as higher-rate schemes for scenarios with good SNR. 100 bps • We use a simple binary frequency shift keying scheme (2-FSK) where the zero and one bits are represented by two frequencies, 8 and 12 kHz. Note that both frequencies are above most human speech frequencies, to reduce interference in the case of news and talk radio programs.




We use a symbol rate of 100 symbols per second, giving us a bit rate of 100 bps using 2-FSK. We implement a non-coherent FSK receiver that compares the received power at the two frequencies and outputs the frequency with the higher power; this eliminates the need for phase and amplitude estimation and makes the design resilient to channel changes (see the sketch below). 1.6 kbps and 3.2 kbps • To achieve higher bit rates, we use a combination of 4-FSK and frequency division multiplexing. Specifically, we use sixteen frequencies between 800 Hz and 12.8 kHz and group them into four consecutive sets. Within each set, we use 4-FSK to transmit two bits; given four sets, we transmit eight bits per symbol. We note that, within each symbol, only four of the designated sixteen frequencies are transmitted, to reduce transmitter complexity. We choose symbol rates of 200 and 400 symbols per second, allowing us to achieve data rates of 1.6 and 3.2 kbps. Our experiments showed that BER performance degrades significantly at symbol rates above 400 symbols per second; given this limitation, 3.2 kbps is the maximum data rate we achieve, which is sufficient for our applications.
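A non-coherent 2-FSK decision like the one described above can be made with a pair of Goertzel filters, which measure tone power without any phase estimation. This C++ sketch synthesizes one 10 ms "1" symbol and compares the tone powers at 8 and 12 kHz; the 48 kHz sample rate is an assumption for illustration, not a parameter from the paper.

#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979;

// Goertzel filter: power of one frequency bin over a block of audio samples.
double goertzelPower(const double* x, int n, double freq, double fs) {
    double coeff = 2.0 * std::cos(2.0 * PI * freq / fs);
    double s1 = 0.0, s2 = 0.0;
    for (int i = 0; i < n; ++i) {
        double s0 = x[i] + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

int main() {
    const double fs = 48000.0;              // receiver audio sample rate (assumed)
    const int n = 480;                      // one 10 ms symbol at 100 symbols/s
    double sym[n];
    for (int i = 0; i < n; ++i)             // synthesize a "1" symbol: a 12 kHz tone
        sym[i] = std::sin(2.0 * PI * 12000.0 * i / fs);

    double p0 = goertzelPower(sym, n, 8000.0, fs);    // power at the "0" tone
    double p1 = goertzelPower(sym, n, 12000.0, fs);   // power at the "1" tone
    std::printf("decoded bit = %d\n", p1 > p0 ? 1 : 0);
    return 0;
}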

■ Maximal-ratio combining • We consider the original audio from the ambient FM signal to be noise, which we assume is uncorrelated over time; we can therefore use maximal-ratio combining (MRC) to reduce the bit-error rate. Specifically, we backscatter our data N times and record the raw signals for each transmission. Our receiver then uses the sum of these raw signals to decode the data. Because the noise (i.e., the original audio signal) in each transmission is uncorrelated, the SNR of the sum is up to N times that of a single transmission.
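The expected combining gain is straightforward arithmetic: summing N repetitions grows the signal power by N² and the uncorrelated noise power only by N. A minimal C++ illustration, with the single-shot SNR assumed at 0 dB:

#include <cmath>
#include <cstdio>

int main() {
    double snrSingle = 1.0;   // linear SNR of one transmission (0 dB, assumed)
    for (int n = 1; n <= 16; n *= 2) {
        // Signal amplitude adds coherently (power x N^2); noise power adds (x N),
        // so the combined SNR is at best N times the single-shot SNR.
        double snr = snrSingle * n;
        std::printf("N = %2d repetitions -> up to %4.1f dB\n", n, 10.0 * std::log10(snr));
    }
    return 0;
}

Each doubling of N buys up to 3 dB, which is why repeating a low-rate transmission is such a cheap way to extend range.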

Implementation

We built a hardware prototype with off-the-shelf components, which we used in all our experiments and to build proof-of-concept prototypes for our applications. We then designed an integrated-circuit FM backscatter system and simulated its power consumption. ■ Off-the-Shelf Design • We use the NI myDAQ as our baseband processor; it outputs an analog audio signal from a file. For our FM modulator,




we use the Tektronix 3252 arbitrary waveform generator (AWG), which has a built-in FM modulation function. The AWG can easily operate up to tens of MHz and can generate an FM-modulated square wave as a function of an input signal. Interfacing the NI myDAQ with the AWG gives us the flexibility of using the same setup to evaluate audio and data modulation in both mono and stereo scenarios. We connect the output of the AWG to our RF front end, which consists of an ADG902 RF switch designed to toggle the antenna between open and short impedance states.

■ IC Design • In order to realize smart fabrics and posters with FM backscatter, we translated our design into an integrated circuit to minimize both size and cost, and to scale to large numbers. We implemented the FM backscatter design in the TSMC 65 nm LP CMOS process. A detailed description of the IC architecture follows. ■ Baseband Processor • The baseband data/audio is generated in the digital domain using a digital state machine. We wrote Verilog code for the baseband and translated it into a transistor-level implementation using the Synopsys Design Compiler tool. Our implementation of a system transmitting mono audio consumes 1 µW.

■ FM Modulator • The FM modulator is based on an inductor-capacitor (LC) tank oscillator with an NMOS and PMOS cross-coupled transistor topology. We leverage the fact that capacitors can easily be switched in and out to build a digitally controlled oscillator that modulates the oscillation frequency. We connect a bank of 8 binary-weighted capacitors in parallel with an off-chip 1.8 mH inductor and drive the digital capacitor bank from the output of the baseband processor to generate an FM-modulated output. We simulated the circuit using Cadence Spectre; our 600 kHz oscillator with a frequency deviation of 75 kHz consumes 9.94 µW.
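For a feel for the capacitor bank's job, this C++ sketch applies the LC resonance formula f = 1/(2π√(LC)) with the stated 1.8 mH inductor, computing the tank capacitance at the 600 kHz center and at ±75 kHz deviation. It is a back-of-the-envelope check, not the IC's actual design procedure.

#include <cstdio>

int main() {
    const double pi = 3.14159265358979;
    const double l = 1.8e-3;   // the off-chip inductor from the article, H

    // LC resonance: f = 1 / (2*pi*sqrt(L*C))  =>  C = 1 / (4*pi^2 * f^2 * L)
    auto capFor = [&](double f) { return 1.0 / (4.0 * pi * pi * f * f * l); };

    std::printf("C at 600 kHz center : %.1f pF\n", capFor(600e3) * 1e12);
    std::printf("C at +75 kHz (675 k): %.1f pF\n", capFor(675e3) * 1e12);
    std::printf("C at -75 kHz (525 k): %.1f pF\n", capFor(525e3) * 1e12);
    return 0;
}

The swing is only a few tens of picofarads, small enough to be covered by the 8-bit binary-weighted on-chip capacitor bank the text describes.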

■ Backscatter Switch • We implement the backscatter switch using an NMOS transistor connected between the antenna and ground terminals. The square wave output of the FM modulator drives the NMOS switch on and off, which toggles the antenna between open and short impedance states to backscatter FM modulated signals. We simulate the switch in Cadence to show it consumes 0.13 µW of power while operating at 600 kHz. Thus, the total power consumption of our FM backscatter system is 11.07 µW.




Audio Performance

Beyond data, our design also enables us to send arbitrary audio signals using FM backscatter. In this section, we evaluate the performance of FM audio backscatter. As before, we set the FM transmitter to send four 8-second samples of sound recorded from local radio stations. To evaluate the quality of the resulting audio signals, we use the perceptual evaluation of speech quality (PESQ), a common metric for measuring audio quality in telephony systems. PESQ outputs a perception score between 0 and 5, where 5 is excellent quality. We evaluate this metric with overlay, stereo, and cooperative backscatter. Audio with Overlay Backscatter • In the case of overlay backscatter, we have two different audio signals: one from the backscatter device and one from the underlying FM signals. We compute the PESQ metric for the backscattered audio and regard the background FM signal as noise. We repeat the same experiments as before, changing the distance between the backscatter device and the Moto G1 smartphone at different FM signal power levels. We repeat the experiments 10 times for each parameter configuration and plot the results. The resulting plot shows that the PESQ is consistently close to 2 for all power levels between -20 and -40 dBm at distances of up to 20 feet; we see similar performance at -50 dBm up to 12 feet. Unlike data, audio backscatter requires a power of at least -50 dBm to operate, since with modulation and coding one can decode bits at a lower data rate down to -60 dBm. We also note that in traditional audio, a PESQ score of 2 is considered fair to good in the presence of white noise; our interference, however, is real audio from ambient FM signals. What we hear is a composite signal, in which a listener can easily make out the backscattered audio at a PESQ value of 2. We attach samples of overlay backscatter audio with PESQ values of 2.5, 2, 1.5, and 1, respectively, in the linked video for demonstration.

Using FM Receivers in Cars

We evaluate our backscatter system with the FM radio receiver built into a car, to further demonstrate the potential of these techniques to enable connected-city applications. The FM receivers built into cars differ from smartphones in two distinct ways. First, car antennas can be better optimized than phone antennas, as they have fewer space constraints:




Click here for the video presentation.

the body of the car can provide a large ground plane, and the placement and orientation of the antenna can be precisely defined and fixed, unlike the loose wires used as headphone antennas. Because of this, we expect the RF performance of a car's antenna and radio receiver to be significantly better than the average smartphone's. Second, although recent car models have begun to offer software such as Android Auto and Apple CarPlay, the vast majority of car stereos are still not programmable and are therefore limited to our overlay backscatter technique. To test the performance of the car receiver with our backscatter system, we used an experimental setup similar to the one described in the full-text paper (Section 5) for the smartphone. Specifically, we placed the backscatter antenna 12 ft from the transmitting antenna, which we configured to output a measured power, and evaluated the quality of the audio signal received in a 2010 Honda CRV versus range. Because the radio built into the car does not provide a direct audio output, we used a microphone to record the sound played by the car's speakers. To simulate a realistic use case, we performed all experiments with the car's engine running and the windows closed. We backscattered the same signals used above to measure SNR and PESQ. The audio quality versus range for two different power values demonstrated that our system works well up to 60 ft. PE For more details, see the full-text paper, FM Backscatter: Enabling Connected Cities and Smart Fabrics.



Arduino




The Raspberry Pi (RP) is a complete small computer system, with most of the common add-on features, like a keyboard and video display, supported from the outset. The RP also has far more resources available (e.g., program memory) and considerably more processing power. Depending upon which version of the RP you select, prices run a little under $40 for most versions. The Arduino family of microcontrollers is less ambitious in terms of user features: there is no native keyboard or video support, there is less memory and less computing horsepower, and in many applications an Arduino runs without any input from the user. Given these limitations, why would one ever choose an Arduino over an RP? The answer often boils down to economics: for many applications, Arduinos offer a more cost-effective solution. Table 1 presents several microcontrollers from the Arduino family and some of the resources they bring to the table. If you want to develop an application that monitors your fish-tank water temperature and adds fish food at predetermined intervals, which would you choose: a $40 RP or a $5

Table 1 • Arduino and compatible microcontrollers.

Item            Uno & Nano   Mega 2560   Mega 2560 Pro Mini   Teensy 3.2    Teensy 3.5    Teensy 3.6
---------------------------------------------------------------------------------------------------
Flash           32K          256K        256K                 256K          512K          1M
SRAM            2K           8K          8K                   64K           192K          256K
EEPROM          512B         4K          4K                   2K            4K            4K
Clock           16 MHz       16 MHz      16 MHz               72 MHz        120 MHz       180 MHz
Digital pins    14           54          54                   34            62            62
Analog pins     8            16          16                   21            25            25
Analog bits     10           10          10                   13            13            13
Analog output   PWM          PWM         PWM                  1 @ 12 bits   2 @ 12 bits   2 @ 12 bits
Timers          3            6           6                    12            14            14
COM Serial      4            4           4                    3             3             3
I2C             1            1           1                    2             2             2
CAN             (Shield)     (Shield)    (Shield)             1             1             2
SPI             1            1           1                    1             3             3
RTC             No           No          No                   No            Yes           Yes
Price           $5.00        $10.00      $15.00               $19.80        $24.25        $29.25



Nano? Both can do the job, so why pay more for the RP? You simply end up with a more expensive system with more underutilized resources. There are thousands of applications where either microcontroller would work equally well, so developers usually pick the less expensive alternative: the Arduino. Let’s take a close look at the resources available on several Arduinos and compatibles.

Arduino Resources

To make an informed decision about which Arduino to use for a given project, you need to understand each microcontroller's resource base. During this discussion, you should reference Table 1 for the details. Both the RP and the Arduino need to be programmed in order to do something useful. The good news is that there are thousands of applications already written and available to you. Even better, the Arduino is "open source": both the hardware and the programming environment are freely available. However, even if you just want to upload a program someone else has written, you will need to download the free Arduino Integrated Development Environment (IDE) from www.arduino.cc/en/main/software. The reason you need the IDE is that it's the application that moves a program from a PC to the Arduino via a USB cable. There are IDE versions available for Windows, Mac, and Linux. We'll have more to say about the Arduino IDE a little later. For now, let's examine the resources buried within the microcontroller.
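If you've never seen an Arduino program, the canonical first sketch is "Blink," shown below. It flashes the board's built-in LED and compiles for every board in Table 1; the half-second delay is just a conventional choice.

// Blink: the traditional first Arduino sketch.
void setup() {                        // runs once at power-up or reset
  pinMode(LED_BUILTIN, OUTPUT);       // the on-board LED (pin 13 on the Uno/Nano/Mega)
}

void loop() {                         // runs over and over, forever
  digitalWrite(LED_BUILTIN, HIGH);    // LED on
  delay(500);                         // wait 500 ms
  digitalWrite(LED_BUILTIN, LOW);     // LED off
  delay(500);
}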

Flash Memory

Flash memory is similar to the common thumb drives that you plug into your USB port. The key feature is that, if power is removed, flash memory retains whatever has been stored in it. Therefore, when you use the IDE to upload an Arduino program (also called a sketch), that program ends up being stored in flash memory. Because the program is stored in flash, the next time you apply power to the Arduino, the program is ready to begin execution. You'll notice that the amount of flash memory seems pretty meager, ranging from 32 kilobytes to 1 megabyte. Given that most PCs today have a mega-munch of memory, what can you possibly do in just 32K? Actually,




quite a lot. We developed a program for an amateur radio transmitter/receiver that provides its transmit-receive frequency management, a menuing system that manages a rotary encoder switch for changing menu options, and all of the Input/Output (I/O) processing for the transceiver, including an LCD display. The amount of flash memory used for storing the program is about 9K (9 kilobytes). Because all Arduinos have a small program (called a bootloader) resident in flash memory to manage program uploads and execution, you really end up with about 30K of available flash memory for your own use. Still, given that our program code only used 9K, we still have about 21K free flash memory left to add new features if we choose to do so. In fact, the reality is that, in almost all cases, the amount of flash memory is not the real limitation of a program. The real bottleneck is the amount of SRAM memory available.

SRAM Memory

SRAM stands for Static Random Access Memory. In an Arduino program, SRAM is where all of your program's data ends up being stored. There are two problems with SRAM: 1) there's not much of it available, and 2) it goes stupid when power is removed. Unlike flash memory, which retains its bit pattern when power is removed, SRAM degenerates into a random bit pattern when power is reapplied. You can think of available SRAM as being partitioned into two pieces: 1) the heap, and 2) the stack. The total amount of SRAM is referred to as the stack, so you can think of the top of the stack (TOS) and the bottom of the stack (BOS). However, the stack needs to share the SRAM with the heap space. The heap is a section of the stack space that is used to hold data that are stored globally in the program. Global data are those pieces of data that you want the entire program to have access to.


Figure 1 • SRAM memory allocation between heap and stack.



The memory demands for the heap are known by the Arduino compiler when you compile the program within the IDE (i.e., a compile-time factor). In Figure 1, the left and right depictions of the heap are the same size. They are the same size because heap size normally does not change during program execution. (There are functions for dynamic memory allocations, but they can fragment the heap.) Every Arduino program must have two functions named setup() and loop() present or it will not run. Within these two functions, the programmer will likely have defined at least some variables that the program will need to function properly (e.g., perhaps some variables to communicate with the PC via the USB cable). Because these two functions must be called by the program, those variables are known at compile time. Hence, both stack depictions show some level of temporary data shaded in red. If you choose to define more temporary data, the size of that red box would grow downward toward the BOS. If the program code contained within the loop() function calls another function, any data used by that function gets added to the bottom of the temporary data already defined. The more functions your program calls, the more data that gets pushed onto the stack. That is, each function puts its own "orange box" in the available stack space. If one function has a function call statement, and that function also calls another function, these "nested function calls" start adding more and more orange space to the stack. This means that the amount of "free memory" in the stack is shrinking. Now, guess what happens if the bottom of the temporary data collides with the top of the heap space? Well, all we can say for sure is that nothing good is going to happen. The really bad thing about a stack collision is that nothing obvious happens. There's no white smoke; no alarms go off. Indeed, you may not even know it happened if the collision destroyed a piece of global data that your program never uses again.


Figure 2 • Arduino microcontrollers’ relative size compared to a pen. Displayed are the Uno (A), Mega 2560 (B), Nano (C), and the Mega 2560 Pro Mini (D).



About the only evidence you may get from a stack collision is that your program may start acting "flaky." That is, it doesn't give the right results, and the bogus results may or may not be consistently wrong in the same way. Such program behavior is almost always an indication of a stack collision. My rule of thumb: buy as much SRAM as you can afford. If you're sticking with the Arduino, that means an Arduino Mega 2560 or Mega 2560 Pro Mini. The relative sizes are shown in Figure 2 in case size impacts your design. By way of comparison, the Teensy controllers are about the width of the Arduino Nano, but about an inch longer.
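One way to keep an eye on the gap between the heap and the stack is to measure it at runtime. The sketch below leans on two avr-libc symbols (__heap_start and __brkval) that exist on AVR-based Arduinos such as the Uno, Nano, and Mega; treat it as a diagnostic sketch under those assumptions, not a portable utility:

int freeSram() {
  extern char __heap_start;   // first address past the global data
  extern char *__brkval;      // current top of the heap (0 if the heap is unused)
  char top;                   // a stack variable, so &top is roughly the stack pointer
  return &top - (__brkval == 0 ? &__heap_start : __brkval);
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.print("Free SRAM (bytes): ");
  Serial.println(freeSram());   // watch this shrink as function calls nest
  delay(1000);
}

If the number printed keeps falling as your program runs, temporary data is marching toward the heap and a collision may be near.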

EEPROM and Clock

EEPROM is Electrically Erasable Programmable Read-Only Memory. Functionally, EEPROM is similar to flash memory in that it retains its information when power is removed. There are, however, several differences between EEPROM and flash memory. First, EEPROM can be changed at runtime, while flash is normally changed only when a new program is uploaded. This means, for example, you could ask your user which font size they want to use, store it in EEPROM, and the next time the program is started, you could read that font size from EEPROM and use it while the program runs. If you stored that same data in SRAM, it would disappear the instant you removed the power source. Second, EEPROM reading/writing is slower than flash or SRAM. Therefore, you would probably not want to use EEPROM for processor-intensive data. Third, there isn't a lot of EEPROM available relative to the other memory types. It is, however, perfectly suited for "configuration-type" data (e.g., font size, dedicated port information, colors, editing constants, file names, etc.). Finally, EEPROM has a finite number of write cycles. Today's EEPROMs have cycle counts in the hundreds of thousands, so it's probably not a serious limitation.
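As a sketch of that configuration-data idea, here is how a font-size setting might be stored with the standard Arduino EEPROM library; the address and default value are arbitrary choices for illustration:

#include <EEPROM.h>

const int FONT_ADDR = 0;   // illustrative EEPROM address for the setting

void setup() {
  Serial.begin(9600);
  byte fontSize = EEPROM.read(FONT_ADDR);  // survives power cycles
  if (fontSize == 0xFF) {                  // unwritten EEPROM reads as 0xFF
    fontSize = 12;                         // fall back to a default...
    EEPROM.write(FONT_ADDR, fontSize);     // ...and spend one write cycle saving it
  }
  Serial.print("Font size: ");
  Serial.println(fontSize);
}

void loop() {
}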


Note that all of the Arduinos are clocked at 16MHz, while all of the Teensys are at least 4x faster. MHz stands for megahertz, where a hertz is one complete clock cycle per second. Therefore, 16MHz means the Arduinos are clocked at 16 million cycles per second. Each program instruction burned into the chip takes so many clock cycles to complete. For example, adding two numbers together might take six clock cycles to perform. The microcontroller clock speed, therefore, acts like a traffic light in determining how fast the microcontroller can execute program instructions and move data around the system. For example, if the Nano and the Teensy 3.6 are asked to add two numbers together and both use six clock cycles, ceteris paribus, the Teensy will do the calculation over 11x faster than the Nano simply because it has a faster clock. Because all of the Arduinos are clocked at the same speed, clock speed for them is a moot point. However, if you have an application that needs to process a lot of data quickly, perhaps a Teensy would be a better choice.
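You can see the clock at work by timing a chunk of code yourself. This minimal sketch uses the built-in micros() function; the 1,000-addition loop is just an arbitrary workload we made up for the comparison:

void setup() {
  Serial.begin(9600);

  volatile long sum = 0;           // volatile keeps the compiler from optimizing the loop away
  unsigned long start = micros();
  for (int i = 0; i < 1000; i++) {
    sum += i;                      // simple work whose speed tracks the clock rate
  }
  unsigned long elapsed = micros() - start;

  Serial.print("1000 additions took ");
  Serial.print(elapsed);
  Serial.println(" microseconds");
}

void loop() {
}

Run the same sketch on a Nano and on a faster board, and the elapsed times tell the clock-speed story directly.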

Other Resources

All of the remaining items presented in Table 1 are usually of less concern than the memory limitations. The number of digital I/O pins determines how many input and output devices can communicate with the microcontroller. Note the huge jump between the Uno/Nano and the Mega 2560 device. Usually an I/O pin is connected to a sensor, switch, or some other external device. Keep in mind that the digital pins are low-amperage devices and can only handle a maximum current of about 40mA. So, if you're trying to light an LED, you'll need to place a current-limiting resistor (e.g., 470 ohms) between the LED and the pin. For higher-current devices, it may be necessary to add additional circuitry. Note that eight pins are capable of analog inputs at 10 bits of resolution. These pins can have a voltage placed on them that varies between 0.0V and 5.0V. Because these pins have 10 bits of resolution, you can map the voltage to a value between 0 and 1023 (i.e., 2^10 = 1024). (Larger voltages can be measured using voltage divider circuits, but that's beyond the scope of this article.) Note that the Mega boards offer twice the number of analog pins. Any analog pin can also function as a digital pin, but the reverse is not true.
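Here is a minimal sketch of that 10-bit mapping, reading a voltage on analog pin A0 and converting the raw count back to volts (assuming the default 5.0V reference):

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(A0);            // 0..1023 for 0.0V..5.0V
  float volts = raw * (5.0 / 1023.0);  // map counts back to a voltage
  Serial.print(raw);
  Serial.print(" counts = ");
  Serial.print(volts);
  Serial.println(" volts");
  delay(500);
}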


All boards provide a number of timers that can be used in programs that need things to occur at specific time intervals. Most of these internal timers work by counting clock ticks as the program executes. The millis() function, for example, returns the number of milliseconds that have elapsed since the program began execution, derived from those clock ticks. Because the clock speed of the system is known, the function can be used to determine time intervals (e.g., seconds). Note, however, that the granularity of the clock is determined by the clock speed. That is, you cannot get better clock resolution than the time it takes for one clock cycle. The COM, I2C, CAN, and SPI features are communications protocols that are supported by the microcontrollers. For example, connecting a 16-character by 2-line LCD display to an Arduino often requires eight digital I/O pins. The I2C interface, on the other hand, only requires two digital I/O pins. Another advantage of the I2C interface is that it can support multiple external devices. The Controller Area Network (CAN) bus is a standard ISO protocol and is popular in the auto industry. RTC is an abbreviation for Real Time Clock. The on-chip timers usually measure clock ticks and are expressed in micro- or milliseconds; they don't track the time of day, whereas an RTC does. An RTC can be very handy if your project is one that syncs with the time of day. For example, if you need to portion out some pet food at 7AM and 6PM, you'd probably want to add an RTC to the Arduino. The Teensy family does have a built-in RTC.
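To make the I2C advantage concrete, here is a minimal sketch that reads the time from a common DS1307/DS3231-style RTC module over the two-wire interface. The 0x68 device address and the BCD register layout are properties of those particular chips, so adjust for whatever module you actually use:

#include <Wire.h>

const byte RTC_ADDR = 0x68;  // typical I2C address for DS1307/DS3231 RTC chips

byte bcdToDec(byte b) {      // the RTC registers hold binary-coded decimal
  return (b >> 4) * 10 + (b & 0x0F);
}

void setup() {
  Serial.begin(9600);
  Wire.begin();              // I2C needs only the SDA and SCL pins
}

void loop() {
  Wire.beginTransmission(RTC_ADDR);
  Wire.write((byte)0);                 // point at register 0 (seconds)
  Wire.endTransmission();
  Wire.requestFrom((int)RTC_ADDR, 3);  // seconds, minutes, hours
  byte secs = bcdToDec(Wire.read() & 0x7F);
  byte mins = bcdToDec(Wire.read());
  byte hrs  = bcdToDec(Wire.read() & 0x3F);  // assumes 24-hour mode
  Serial.print(hrs);  Serial.print(':');
  Serial.print(mins); Serial.print(':');
  Serial.println(secs);
  delay(1000);
}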

External Devices

Almost any non-trivial microcontroller application uses one or more external devices. For example, the pet food device would need an RTC and probably some kind of valve or relay system from which to dispense the pet food. How do you add these devices to an Arduino? There are dozens, if not hundreds, of devices that have been fabricated into Arduino "shields." In Figure 2, for example, if you look closely at the Uno and Mega 2560, you can see two sets of header sockets on the upper and lower edges. In Figure 3, you can see the same header layout, and also the matching pins protruding through the bottom of the shield.


Figure 3 • An RTC Arduino shield.



The pins on the shield fit into the header sockets on the Uno to form an "Arduino sandwich" of a shield mated to a microcontroller board. Shields like these fit the Uno and Mega 2560 boards, but not the Nano or Mega 2560 Pro Mini boards. Because the shield in Figure 3 also has its own set of header sockets, it is possible to add another shield to the stack, forming an Arduino triple-decker sandwich. The shield in Figure 3 not only has an RTC, it also has a socket for a micro SD card and a button battery: in the event power is lost to the Arduino, the RTC can keep running. Cost of the shield: about $10. You can buy a smaller (non-shield) RTC module that uses the I2C bus with battery backup for less than a dollar. These smaller modules can easily be interfaced to a Nano or Mega 2560 Pro Mini. Another benefit of the shield shown in Figure 3 is that it has a prototyping area on the board. You can use the field of "holes" that you see in the center of the board for any special circuitry you might need. If you look closely at Figure 3, you can see a row of holes that line up with the header socket holes. These holes are electrically tied to each header socket pin. This arrangement provides a convenient way to connect your prototype circuit to the host Arduino via the pins protruding through the bottom of the shield. Regardless of what your project is, there is probably an Arduino shield that can help. I just typed "Arduino shield" into eBay's search box and got over 8,300 hits. From joysticks to GPS systems, there is a huge variety of shields available.

Which Arduino Should I Choose?

I don't have a clue. The reason is that I don't have any idea of what your project looks like. If I have a project that needs to rotate a solar panel so it consistently faces the sun, I think a Nano could handle that pretty easily. I'd need a few photosensitive sensors and lines to control a stepper motor. On the other hand, if I have sensors tied to 50 different places on an Indy car engine and I need to analyze that data in real time, I'd probably go with a Teensy simply because of the larger number of I/O pins and the higher clock speed that's likely needed. Usually, I can make a reasonably intelligent choice just by thinking through the problem the project is supposed to solve.


If it appears that my project only requires four I/O channels at slow data speeds (like the sun's movement), the Nano is likely a good (economical) choice. If the resource demands are such that a Nano can just barely do it, I'd go big, spend another five bucks, and platform the solution on a Mega. That way I'd have a little wiggle room should I need it later.

A Simple Arduino Project

Let's assume you aren't sure about your level of interest in microcontrollers. It sounds interesting, but you don't want to tie up a lot of money just to investigate it a little further. The project discussed in this section will set you back less than $5 if you shop for the parts online. The Nano is the Arduino board we'll use. We'll also use an LED and a 220-ohm resistor. Just about any kind of LED will work, and the resistor can be almost any value between 100 and 1,000 ohms. (However, the higher the resistance, the less bright the LED can become. Brightness is also influenced by the LED used.) I have purchased Nanos for as little as $2 each, quantity 5. Make sure you get the Nano and not the Nano Pro Mini. The Pro Mini version doesn't have the USB connector and, hence, is a little more cumbersome to program. The circuit is shown in Figure 4. You want to connect the LED cathode lead to the resistor. The other end of the resistor is connected to ground (GND) on the Nano. The LED's cathode lead is shorter than the anode lead, and the cathode lead is aligned with the flat edge of the LED's plastic housing. The anode lead is connected to digital pin 5, one of the Nano's PWM-capable pins. Ordinary digital pins have only two states: HIGH and LOW. These correspond to +5.0V and 0.0V, respectively. PWM-capable pins, driven with analogWrite(), can instead deliver an effective voltage anywhere between 0.0V and 5.0V.


Figure 4 • The demonstration circuit.



Our project goal is to gradually increase the brightness of the LED by supplying a rising voltage to the LED and then reverse the process and fade back down to the point where the LED goes out. Therefore, we need to choose a pin that supports analogWrite() (PWM) operations. This project is not too ambitious, but, hey, you have to start somewhere.

Installing and Using the IDE

Once the circuit is built, you need to download and install the Arduino IDE. If you need some help installing the software after you've downloaded it, there are a lot of tutorials on how to do that. The following link is just one example: learn.sparkfun.com/tutorials/installing-arduino-ide. I personally install the Arduino software in its own root directory. The latest download at the time of this writing is release 1.8.1. Therefore, for Windows, I would use something like:

C:\Arduino1.8.1

as the root directory name and download the file into the Arduino1.8.1 directory. I would then extract the zip file in that directory. You can choose whatever directory you wish. Once you're done, connect a USB cable from your PC to the Nano. Now double-click on the arduino.exe file to start the IDE. Our program code to control the LED is shown in Listing 1.

Listing 1. Program source code for the LED program

int ledPin = 5;            // define a pin for the LED

void setup() {
  Serial.begin(9600);      // begin serial communication
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int i;


  // Run the next four statements 255 times
  for (i = 0; i < 255; i++) {
    Serial.print("i = ");       // write the value of i to the Serial monitor
    Serial.println(i);
    analogWrite(ledPin, i);
    delay(10);
  }
  delay(2000);                  // hold full brightness for two seconds

  // Since i = 255 now, this also runs 255 times
  while (i--) {
    Serial.print("i = ");
    Serial.println(i);
    analogWrite(ledPin, i);
    delay(10);
  }
  delay(2000);                  // wait two seconds and do it all again
}

You should copy or type this code into the IDE. When you are done, your IDE should look like that shown in Figure 5.

Figure 5 • The LED program source code in the IDE.


Reading the Source Code

You may want to print out the source code before reading the next section of the article. The first line in the program defines a variable named ledPin. ledPin is a symbolic name we gave to the variable that we will use to communicate with the Nano I/O pin that connects to the LED. (Again, we chose pin 5 because not all I/O pins work properly with the analogWrite() function.) Next, when the program starts running, setup() is called first and its statements are executed. Every Arduino program must have the setup() and loop() functions to run properly. The Serial.begin(9600) statement establishes a connection between the Arduino Nano and your PC via the USB cable. It sets the data transfer rate between the two machines to 9600 baud. When you activate the Serial monitor using the small magnifying glass icon just below the X in the upper-right corner of the IDE (see Figure 5), you will see the baud rate displayed near the lower-right of the Serial monitor. If the output displayed in the Serial monitor looks strange, the most common cause is a mismatched baud rate between the Serial monitor and your program. You can set it using the dropdown list box in the Serial monitor. The pinMode() statement simply tells the program that we want to use ledPin as an output pin. That is, we want to be able to write data to that pin. If we had needed to read data from an external device into the Nano, we would use INPUT instead of OUTPUT. Note: the C programming language used by the IDE compiler is case sensitive. That is, output, Output, and OUTPUT are all viewed as different things by the compiler. Once the statements in setup() are finished, control is automatically transferred to loop(). The setup() function is only executed one time, and that's when the program first starts running. The purpose of setup() is to establish the environment in which the program runs. Because we want to communicate with the PC, and we won't be changing the communication link or rate as the program runs, we establish the link one time by placing the call to Serial.begin(9600) in setup(). The loop() function, however, is very different from setup(). loop() is designed to run forever. More precisely, the loop() function should run until: 1) power is removed, or 2) the reset button is pushed, which restarts the program, or 3) there is a component failure.


The first statement in loop() defines an integer variable named i. That is, we are going to use some SRAM (i.e., stack) memory to create a temporary variable named i. Next, the program begins executing a for statement block. A for statement block is designed to repeat all of the statements between the opening brace ({) and the closing brace (}) a specific number of times. The for loop is established with the statement:

for (i = 0; i < 255; i++) {

Verbalizing this statement, we are saying, "Start this loop by initializing i to zero, and after each pass through the for loop statement block, check to see if i is still less than 255. If i is less than 255, repeat the statements and check me again when you're done." At the end of each delay(10) function call at the bottom of the for loop, control immediately transfers to the i++ expression at the end of the for loop statement. This expression simply takes the current value of i and adds 1 to it. It then executes the code to see if i is still less than 255. If it is, we execute the statements in the for loop block again. Inside the for loop statement block, the first two statements are function calls that send the current value of i to the Serial monitor so you can see its value as the program runs. The next statement uses analogWrite() to send the current value of i to the ledPin. The Nano translates the value of i into an effective voltage that can vary between 0V and 5V. This changing voltage causes the LED to become brighter as the value of i increases. The LED continues to get brighter until i is incremented to 255. When that happens, the i < 255 expression is no longer true, so the for loop stops executing and program control falls down to the delay(2000) statement. The delay() statement causes the program execution to pause for 2000 milliseconds (i.e., 2 seconds). (delay() uses an internal timer to measure the time interval.) This delay means the LED is shown at its maximum brightness for two seconds. After the 2-second program pause, program execution resumes with the while statement block. The expression i-- decrements the current value of i. As long as i is not zero, the statements in the while statement block continue to execute. Because the value of i was 255 when it left the for statement block, i is still 255 when it hits the while loop block. This means that the four statements within the opening and closing braces of the while statement block will be executed 255 times.


Because the value of i is decremented (i.e., the i-- expression) on each pass through the while loop body, what's happening to the LED brightness? Yep…it continually gets less and less bright, because the value of i that's passed to ledPin in the function call to analogWrite() is decreasing. This means that the voltage to the LED is also decreasing. Eventually, the value of i is decremented to zero, which ends the while loop execution, and the LED is dark. However, the program doesn't end because loop() runs forever. The compiler is smart enough to not redefine variable i (and unnecessarily use more SRAM), so execution starts all over again, beginning with the for loop code. The program continues to run until you stop it by removing its power source, or reset it, or some part gives up its magic white smoke. (To us non-electrical-engineer types, all electronic circuits run on magic white smoke, and when a part in the circuit fails, it often releases its white smoke and the program ends.) Our LED example is a trivial application, but one that's pretty easy to understand. However, given all of the I/O pins available and the gaggle of sensors and other devices that you can hang onto a microcontroller, you can probably think of dozens of interesting projects. In fact, I just googled "Arduino projects" and got 1.08 million hits. Whether you're just trying to get an LED cube to dance to the music being played in the background or lock your front door over the phone from a thousand miles away, I'll bet one of those projects is right up your alley. PE

ABOUT THE AUTHOR

Dr. Jack Purdum attended Muskingum College (BA, Economics, 1965) and graduate school at The Ohio State University (MA, Economics, 1967; Ph.D., Economics, 1972). He began his teaching career at Creighton University in the Department of Economics in 1970, then moved to Butler University's Econ department in 1974, and finally moved to Purdue University College of Technology in 2001. He became interested in microcomputers in 1975 and won a National Science Foundation grant to study microcomputers in education. He began writing programming books in 1982, mainly on the C programming language. He retired from Purdue University in 2008. Dr. Purdum recently finished his 18th book on C for microcontrollers. He enjoys playing golf and tinkering around with the Atmel family of microcontrollers.




Computing

The Evolution of Computing from Processor-centric to Data-centric Models

By Trung Tran

The Intel 4004 was the world's first microprocessor. Introduced in 1971, it was a 4-bit processor with a clock rate of 740kHz and could address 640 bytes of memory. Since then, the focus has been increasingly on clock rate and precision, with both parameters serving as corollaries to processor performance. Precision as defined by IEEE 754 has increased from 4 bits to 8 bits to 16 bits (half-precision floating point) to 32 bits (single-precision floating point), and finally to 64 bits (really 53 bits of precision in double-precision format).


Clock rate has also increased exponentially, from 740kHz to about 3GHz today. Today's mainstream processors are generally specified by the number of double-precision floating point operations that can be performed per second, thus combining both precision and clock rate in a single metric called FLOPS, or floating point operations per second. According to the TOP500 list, shown in Table 1, the fastest computer in the world runs at 93 petaflops with a power budget of 15MW. Conversely, the bandwidth and latency of DRAM memory have not kept pace, yielding the memory wall detailed in Figure 1. Because memory bandwidth and latency have not kept pace with processing speed-ups, we have created a bottleneck between the processor and the information it is able to access across an external memory bus. Making this worse is Amdahl's law as applied to parallel systems. Amdahl's law states that in parallelization, if P is the proportion of a system or program that can be made parallel, and 1 - P is the proportion that remains serial, then the maximum speedup that can be achieved using N processors is 1/((1 - P) + P/N).
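As a quick worked example (our numbers, not the author's): if 95% of a program parallelizes (P = 0.95), then with N = 100 processors the maximum speedup is 1/((1 - 0.95) + 0.95/100) = 1/0.0595, or roughly 16.8x; and even with infinitely many processors it can never exceed 1/(1 - P) = 20x.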

Name                Country        Teraflops    Power (kW)
----------------------------------------------------------
Sunway TaihuLight   China          93,015       15,371
Tianhe-2            China          33,863       17,808
Piz Daint           Switzerland    19,590       2,272
Titan               U.S.           17,590       8,209
Sequoia             U.S.           17,173       7,890
Cori                U.S.           14,015       3,939
Oakforest-PACS      Japan          13,555       2,719
K Computer          Japan          10,510       12,660

Table 1. TOP500 ranking of the fastest supercomputers in the world.


Figure 1. Processor performance to memory gap increases 50% per year. (Relative performance of processor vs. memory, log scale, 1980–2010.)

This means that the traditional way to speed up processing performance by creating large parallel systems is only minimally effective, since we do not get a linear speed-up in performance as we add more processors. In fact, due to latency and limited IO bandwidth, we get much worse performance across these parallel systems. Figure 2 shows Amdahl's law versus actual performance in a large-scale computing cluster. This exposes the IO bottleneck of the system as you try to share tasks across multiple processors, with the benefit of parallelization going down rapidly. Because of both the memory and the IO bottlenecks, Shekhar Borkar, formerly of Intel Research, estimates that over 96% of the time and power consumed performing a calculation is spent moving data. That means no matter how many FLOPS your processor can handle, most of the time that processor is idle, waiting for data. So a 93-petaflop machine which is only 3-4% utilized really only delivers about 3.7 petaflops worth of performance. That's a lot of wasted potential.


Figure 2. The effect of parallelization as more processors are added to a system: Amdahl's law (fp = 0.99) versus reality (speedup vs. number of processors).

It also exposes the fallacy of current computer architectures, which are far too processor-centric and do not consider how we actually process and handle data. A good example is a discussion I had with an engineer who built a large neural network which he claimed could scale to billions of neurons to emulate the human brain. He had a pretty neat processor building block that he felt could scale well and was very energy efficient. I asked him how he would get data to those billion neurons. He answered that his 300kHz parallel bus would be sufficient to feed all those neurons. Really?! How does a 300kHz bus utilize one billion neurons running at 1GHz? This is the problem: everyone spends time on the multiply block or the arithmetic logic unit, but little if any attention is paid to the cache hierarchy or the memory controller. How are you going to feed the machine? We need to be less processor-centric and more data-centric. A data-centric computing world would be a truly transformative paradigm.


We can no longer rely on Moore's Law to yield ever more performance. Previously, we believed we needed to solve problems with high precision using complex mathematical equations. In reality, though, for the majority of us (scientists aside), we need machines to help us understand and make decisions on data using simple math in extremely low precision. Take machine learning, for example. Most of today's neural networks depend on simple linear algebra calculations which boil down to performing a multiply with an accumulate operation. These calculations are done repetitively since, sparse matrix multiplication excepted, every value in the matrix has to be calculated. The math doesn't get much more complicated than that. Moreover, half precision is sufficient in most cases. Think about how much precision is needed to identify a cat in a video stream. The input data is 8-bit, so you certainly won't need large floating point numbers to do the calculations. No matter how many digits follow the definition of a cat, it is still basically a yes or no answer. So if computation does not matter, what does? The answer in data-centric computing is efficient data movement. We need to process a lot of data quickly and efficiently. In Figure 3, we see the results of

Figure 3. Exponential data growth. Data is growing at a 40 percent compound annual rate, reaching nearly 45 ZB by 2020.


an Oracle study which shows data growing exponentially year over year (40% CAGR), reaching 44 zettabytes in 2020. Ben Walker from Voucher Cloud estimates that we are generating 100 petabytes of data a second. I might add that most of the data is unstructured. That means that using the Sunway TaihuLight—the fastest supercomputer in the world—it would take 4.58 months to process the initial 44 zettabytes. And once we were done, we'd have an additional 1,187 zettabytes to process. Obviously, we are not going to process all the world's data on a single machine, but this does serve to illustrate the extent to which we are overwhelmed with data. In fact, a Stanford study states that, of the information we collect, only 23% is useful; we process or tag only 3% of it; and we analyze only 0.5% of the data. Figure 4 shows this breakdown, based on a 2012 IDC presentation. The fact that we collect much more data than we can analyze—and that the volume of that data keeps growing—is why John Naisbitt stated, "We are drowning in information but starved for knowledge."

Figure 4. The era of big (wasted) data.


A processor-centric view of computing does not take this problem into account. All processor-centric computing asks is: how fast can I do a given calculation if the data is present, and at what precision? A data-centric view of computing asks a different question: how much data can I process, in bytes rather than in operations? A data-centric architecture has to answer the following questions:

• How do I get data into my processor cores?
• How do I efficiently map the data into working memory?
• How do I localize or minimize the movement of data?

"How do I get data into my system?" seems like a straightforward question on the surface, but it's much more complicated than one might expect. All systems can get data to the processors, but given the level of data scale we're facing now, speed and efficiency are the important factors. The traditional method of getting data into a compute cluster was to do a file transfer from one hard drive to another, with the belief that all the captured data was important and that the processor would filter through that data to make decisions. The problem is that hard drives take 8,000 cycles to randomly access a single byte of information. This introduces a lot of latency into the system, on top of the IO latency of InfiniBand, PCI Express, or Ethernet if going to network-attached storage (NAS). This makes processing petabytes of data nearly impossible. This is why most Hadoop-based systems take months to process information: it's not the processing but the movement of data to and from hard drives. There has been a push to write directly into DRAM, which has a 200-cycle random access time—a 40X improvement in performance. If the machine is on all the time, then it is easy to remove the need for non-volatile storage. Spark's use of DRAM for micro-batching is one example of a speed-up based on switching from hard drives to DRAM. The problem then is directly writing to DRAM across a network and coming up with a lightweight protocol for moving data from DRAM to DRAM across the network. Remote direct memory access (RDMA) is one way to do this but is a heavyweight protocol for this task. Ideally, you would be able to manage large pools of DRAM much like we manage hard drives today and efficiently allocate the memory.


"How do I efficiently map data to memory?" is an important question if you wish to minimize data movement and latency. The highest bandwidth between the processor and memory will always be direct-attached memory. If the CPU does not have to reach across a network, then it does not have to deal with the overhead and bandwidth limitations of the network link. So if you can place memory where it is most likely to be used, you will minimize data movement and improve system throughput by reducing data transactions to local memory transfers. The second consideration is the nature of SDRAM, which utilizes block transfers of up to eight words. This means that, for a 64-bit bus, you transfer 64 bytes in a single burst, filling a 256-byte L2 cache in 4,000 cycles. This is efficient if the processor uses all 256 bytes without changing out the cache for a new page of memory. If the data is accessed randomly, and therefore rarely resides in the cache, then the cache will experience a miss. It is then said to be thrashing, or constantly updating its page. This introduces a large amount of memory access latency into the memory transfer—essentially the time it takes the system to find the data in external memory. These cache misses can be minimized if the program prepositions memory ahead of time in order to maximize cache utilization. That implies a knowledge of the hardware and of how data is used by the processor cores, a task that is often challenging for programmers, who tend to rely on software schedulers to make these decisions for them. The final question concerns how to enforce memory locality. As stated earlier, it is important to be able to preposition data into DRAM. This reduces latency and ensures better processor utilization. Oftentimes, though, the user or programmer does not know how best to map the data, or lacks a good understanding of available system resources and where the bottlenecks are. Since the data is going to be streaming into DRAM, it is also impossible to understand the data layout prior to processing. One way to solve this is to create active runtimes dedicated to efficiently moving data into and out of the system.


The runtime is a machine-learning algorithm which tracks the type of calculations being done in the machine and what data is needed to run them. It can intelligently pre-fetch the information and move it to local DRAM as needed. This means it really has to understand the memory workspace (or footprint) required for each task in the program and what available data is needed to accomplish those tasks. It then needs to pre-fetch the data in such a way as to prevent pipe stalls in the system. This is similar to a hypervisor function, but the resource being managed is not processor cores but memory footprints.

The world is changing. As the importance and size of data grows, our traditional approach to computing needs to change. It can no longer be processor-centric, focusing on the type and complexity of the computation that needs to be done. Instead, the approach to computing must be more data-centric, focusing on how much information or knowledge we can get out of the data and how fast we can process it. This means looking at the entire data chain, from how we ingest data, to how we map it onto available resources, and ultimately to how we optimize the system for the amount of data that needs to be processed. The world is becoming more data-centric; our approach to computing should be as well. PE

About the Author

Trung Tran founded Clarcepto to give decision makers the ability to react in real time to events as they unfold and to make decisions that enable the best probability of success, allowing customers to take advantage of opportunities as they arise. Having a fuller picture of the world around us can increase sales, improve patient outcomes, and enhance service in a more intuitive way. Clarcepto is the capstone of a career spent designing information systems that transform data into knowledge. His career started in the Air Force, where he focused on sensor-to-shooter programs with the intent of providing key information to decision makers. He then moved to Silicon Valley, where he spent 17 years designing and developing microchips and systems comprising hundreds of products that have generated $1.2B in revenue. During his time in Silicon Valley, Mr. Tran earned an MBA at Wharton to better understand the business drivers for technology. Prior to launching Clarcepto, he was a program manager at DARPA's Microsystems Technology Office.



Raspberry Pi

Physical Activity Motivation Tool Using the Raspberry Pi Zero

Do you have a personal fitness tracker that is not being put to good use? Do you struggle following through on your New Year's resolutions? We know it is a bit early for New Year's resolutions, but we'd still like to try and help. In this article, we are going to show you a solution that helps us maintain minimum physical activity (e.g., walking 10,000 steps a day) during the day and especially on weekends.

By Sai and Srihari Yamanoor


"Sitting is the new smoking."

According to the World Health Organization, physical activity of 150 minutes a week can help you stay healthy. Recent studies have found that walking 10,000 steps a day can help avoid lifestyle diseases. Many of us have been making use of pedometers to keep track of our daily physical activity. It is difficult to maintain consistency in physical activity as we tend to ignore our personal health over daily commitments. For example, in a typical physical activity timeline shown below, you will note that the physical activity is concentrated toward the end of the day.

Physical activity in a day. (Data fetched from a commercially available pedometer.)

The advent of low-cost computers like the Raspberry Pi Zero (costs $5 to $10 depending upon the model) has enabled solutions that can improve our quality of life. We put together a visual aid that shows your progress toward achieving the daily step goal. This visual aid can be useful in reminding yourself to be physically active on those days where you either work from home or take a break from being glued to your computer for extended periods of time. A snapshot of the completed project is shown at right.


We recommend the following components for the project:

Item                                           Source                                  Cost (USD)
-------------------------------------------------------------------------------------------------
Raspberry Pi Zero W                            https://www.adafruit.com/product/3400   $10.00
Pimoroni Blinkt LED strip                      https://www.adafruit.com/product/3195   $5.95
Personal fitness tracker (preferably Fitbit)   http://a.co/9eItwfX                     $46.00
Picture frame with a shadow box                Any art store                           $13.00

Note: We are assuming an intermediate proficiency in Python programming, electronic circuitry, and basic familiarity with the Raspberry Pi Zero and the Linux environment (i.e., familiarity with the command line terminal). Refer to our website (cited at the end of this article) for some tutorials.

We chose the Raspberry Pi Zero W because it is equipped with a Wi-Fi chipset, thus eliminating the need for a USB Wi-Fi Card. The Raspberry Pi Zero fetches the physical activity recorded by the fitness tracker using the Fitbit API. We make use of the Pimoroni Blinkt LED strip which consists of 8 RGB LEDs. The Pimoroni Blinkt LED strip lights up like a progress bar based upon the recorded physical activity. For example, let’s consider a scenario where someone has walked 5,500 steps and their daily step goal is 10,000 steps. The LED strip consists of eight LEDs and an LED lights up for every 1,250 steps. For 5,500 steps, four LEDs are lit up in green color and a fifth LED blinks red with a one-second interval indicating that you need to keep working toward your step goal. The first step is setting up the Raspberry Pi Zero. This includes flashing the OS image onto a microSD card, setting up the Wi-Fi credentials, etc.


The installation guide for setting up the Raspberry Pi Zero is available from https://raspberrypi.org/documentation. Do not forget to change the default password for your Raspberry Pi Zero!

Installing requisite software packages

First you need to install the requisite Python packages. Since we are going to make use of the Fitbit tracker, we need to install the Fitbit Python client and the libraries associated with the project.

sudo pip3 install fitbit cherrypy schedule

sudo apt-get install python3-blinkt

Note: If you have a tracker other than a Fitbit, you should find out if the maker provides developer resources. For example, Withings provides an API for its trackers: https://oauth.withings.com/api.

Signing up for a Fitbit account

The next step is signing up for a developer account at https://dev.fitbit.com and creating an application.

1. Create a new application at dev.fitbit.com/apps.

2. While registering the new application, fill in the name of your application and give it a temporary description, organization, website, etc. Set the OAuth 2.0 Application Type to Personal and the access type to Read-Only. Set the callback URL to http://127.0.0.1:8080.


3. Once your application is created, copy the client id and client secret from the application's dashboard.


4. From the Raspberry Pi's command line terminal, download the following script:

wget https://raw.githubusercontent.com/orcasgit/python-fitbit/master/gather_keys_oauth2.py

Note: The next step needs to be executed by opening the command line terminal from your Raspberry Pi Zero's desktop (not via remote access) or your personal workstation.

5. Execute the script, passing the client id and client secret as arguments:

python3 gather_keys_oauth2.py <client_id> <client_secret>

6. It should launch a browser on your Raspberry Pi Zero's desktop and direct you to a page on fitbit.com requesting your authorization to access your information.

7. If the authorization was successful, it should redirect you to a page where the following information is displayed.


8. Close the browser and copy the refresh_token and access_token information displayed on the command prompt.

Code for the visual aid

In this section, we are going to write the code for the visual aid using the Fitbit API's Python client; its documentation is at https://python-fitbit.readthedocs.io/en/latest/. Save the client id, client secret, refresh_token, and access_token to a file called 'config.ini' in the following format:

[APP]
CONSUMER_KEY = "REDACTED"
CONSUMER_SECRET = "REDACTED"

[USER]
ACCESS_TOKEN = "REDACTED"
REFRESH_TOKEN = "REDACTED"

The first step is importing the requisite libraries for building the visual aid. This includes the fitbit and blinkt libraries. We will also import configparser, to read the configuration file above, plus some additional libraries required for timing tasks.


import blinkt
import configparser
import datetime
import fitbit
import time

Load the access tokens from the config file called config.ini:


CONFIG_FILE = '/home/pi/config.ini'

config = configparser.ConfigParser()
config.read(CONFIG_FILE)

CONSUMER_KEY = config.get("APP", "CONSUMER_KEY")
CONSUMER_SECRET = config.get("APP", "CONSUMER_SECRET")
REFRESH_TOKEN = config.get("USER", "REFRESH_TOKEN")
ACCESS_TOKEN = config.get("USER", "ACCESS_TOKEN")

A new refresh token is issued every eight hours. According to the API documentation, it is possible to save this token using a callback function. Hence, we need a callback function that saves the tokens back to the file:

def update_tokens(token):
    # Persist new tokens whenever the Fitbit client refreshes them
    if (token['access_token'] != ACCESS_TOKEN or
            token['refresh_token'] != REFRESH_TOKEN):
        config = configparser.ConfigParser()
        config.read(CONFIG_FILE)
        config.set("USER", "REFRESH_TOKEN", token['refresh_token'])
        config.set("USER", "ACCESS_TOKEN", token['access_token'])
        with open(CONFIG_FILE, "w") as config_file:
            config.write(config_file)

According to the Fitbit API documentation, the current day's physical activity can be retrieved using the intraday_time_series() method.

The required arguments for retrieving the physical activity include the resource that needs to be retrieved (e.g., steps), the detail level (i.e., the smallest time interval for which the given information needs to be retrieved), the start time, and the end time.

The start time is 12:00 a.m. of the current day and the end time is the current time. We will be making use of the datetime module to get the current time.


There is a function called strftime that gives us the current time in "hour:minute" format.

def get_steps(client):
    num_steps = 0
    try:
        now = datetime.datetime.now()
        end_time = now.strftime("%H:%M")
        response = client.intraday_time_series('activities/steps',
                                               detail_level='15min',
                                               start_time="00:00",
                                               end_time=end_time)
    except Exception as error:
        print(error)
    else:
        str_steps = response['activities-steps'][0]['value']
        print(str_steps)
        try:
            num_steps = int(str_steps)
        except ValueError:
            pass
    return num_steps

The daily step goal varies for each person. Hence, let's add a method that retrieves the daily step goal:

def get_goal(client):
    """Determine the daily step goal."""
    try:
        response = client.activities_daily_goal()
    except Exception as error:
        print(error)
        raise  # without a response there is no goal to report
    return response['goals']['steps']


In our main function, we need to initialize the Fitbit client using the client key, client secret, access_token, and refresh_token discussed earlier.

client = fitbit.Fitbit(CONSUMER_KEY, CONSUMER_SECRET,
                       access_token=ACCESS_TOKEN,
                       refresh_token=REFRESH_TOKEN,
                       refresh_cb=update_tokens)

We check for the steps every 15 minutes and light up the LEDs accordingly. Since the Pimoroni Blinkt consists of eight LEDs, we can light up one LED for every 1,250 steps of physical activity.

steps = get_steps(client)
denominator = int(get_goal(client) / 8)
current_time = time.time()

while True:
    # Update steps every 15 minutes
    if (time.time() - current_time) > 900:
        current_time = time.time()
        steps = get_steps(client)

    # Clear all eight LEDs
    for i in range(8):
        blinkt.set_pixel(i, 0, 0, 0)
    blinkt.show()

    num_leds = steps // denominator
    if num_leds > 8:
        num_leds = 8

    # One green LED for each eighth of the goal completed
    for i in range(num_leds):
        blinkt.set_pixel(i, 0, 255, 0)
    blinkt.show()


    # Blink the next LED red while the goal is still in progress
    if num_leds <= 7:
        blinkt.set_pixel(num_leds, 255, 0, 0)
        blinkt.show()
        time.sleep(1)
        blinkt.set_pixel(num_leds, 0, 0, 0)
        blinkt.show()
        time.sleep(1)

For every multiple of 1,250 steps, we set an LED's color to green using the blinkt.set_pixel() method. We set the next LED to a blinking red. For example, at the time of writing this article, the total number of steps was 1,604. This is (1,250 x 1) + 354 steps. Hence, we light up one LED in green and the next LED blinks red. This indicates that the steps are in progress.

The picture below shows the blinking red LED when the progress was less than 1,250 steps:

After walking around, the progress indicator shifted to the right by one LED at 1,604 steps:


The code file, visual_aid.py, is available for download at https://goo.gl/Qazno5.

In order to automatically launch the project on boot, make the Python script executable and add it to /etc/rc.local:

# Print the IP address
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
  printf "My IP address is %s\n" "$_IP"
fi

/home/pi/visual_aid.py &   # run in the background so rc.local can still exit

exit 0

Building the visual aid

We encased the Raspberry Pi Zero in a 3D-printed enclosure. We used the 3D print files available here: http://www.thingiverse.com/thing:1886598


A picture frame with a shadow box is required to build the visual aid:


Next, we cut out a slot on the frame card as shown in the figure below:

We glued the Raspberry Pi Zero’s enclosure to the frame card and the shadow box:

Power up the Raspberry Pi Zero, install this visual aid somewhere prominent, and find out if it motivates you to stay physically active!

You can check out the videos related to this visual aid at http://pywithpi.com.


Some Tips

Consider setting off a buzzer when there is no minimum physical activity. This can be done by connecting a buzzer to the GPIO pins of the Raspberry Pi Zero using an NPN transistor, as in the sketch below.
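Here is a minimal sketch of the idea, assuming an active buzzer switched by an NPN transistor whose base resistor is wired to BCM pin 17 (the pin number and the noon threshold are just illustrative choices), using the RPi.GPIO library:

import RPi.GPIO as GPIO
import time

BUZZER_PIN = 17  # illustrative choice; any free GPIO pin works

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUZZER_PIN, GPIO.OUT)

def nag(beeps=3):
    """Pulse the buzzer a few times to prompt some walking."""
    for _ in range(beeps):
        GPIO.output(BUZZER_PIN, GPIO.HIGH)
        time.sleep(0.2)
        GPIO.output(BUZZER_PIN, GPIO.LOW)
        time.sleep(0.2)

# Example use inside the main loop: nag when the step count is still
# below one LED's worth of steps after noon.
# if datetime.datetime.now().hour >= 12 and steps < denominator:
#     nag()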

About the Authors

Sai Yamanoor is an embedded systems engineer working for a private startup school in the San Francisco Bay Area, where he builds devices that help students achieve their full potential. He completed his graduate studies in mechanical engineering at Carnegie Mellon University in Pittsburgh, Pennsylvania, and his undergraduate work in mechatronics engineering at Sri Krishna College of Engineering and Technology in Coimbatore, India. His interests, deeply rooted in DIY and Open Software and Hardware cultures, include developing gadgets and apps that improve quality of life, Internet of Things, crowdfunding, education, and new technologies. In his spare time, he plays with various devices and architectures such as the Raspberry Pi, Arduino, Galileo, and Android devices. Sai blogs about his adventures with mechatronics at the aptly named "Mechatronics Craze" blog at http://mechatronicscraze.wordpress.com/. You can find his project portfolios at http://saiyamanoor.com.

Srihari Yamanoor is a mechanical engineer working on medical devices, sustainability, and robotics in the San Francisco Bay Area. He completed his graduate studies in mechanical engineering at Stanford University and his undergraduate studies in mechanical engineering at PSG College of Technology in Coimbatore, India. He also holds several certifications in SolidWorks, Simulation, PDM, and Quality and Reliability Engineering and Auditing. His interests range widely, from DIY, crowdfunding, AI, travel, and photography to gardening and ecology. In his spare time, he is either traveling across California, dabbling in nature photography, or at home, tinkering with his garden and playing with his cats.

Sai and Sri have also authored the book Python Programming with Raspberry Pi.



Ultrasonics

Note that we've retained the old conventions used in the original, e.g., kc. versus kHz, μμf. versus pf., r.f. versus RF, etc.

About this March 1963 feature, Popular Electronics asks, "How would you build this circuit today?" Share your ideas with us! The best idea will be rewarded with a $1,000 shopping spree.

CLICK HERE FOR DETAILS


Ultrasonic Sniffer

March 1963 Cover Story | BY DANIEL MEYER

Here's an old circuit with a new twist—basically a superheterodyne, it brings the not-so-audible world above 16,000 cycles to human ears.

ARE YOU AWARE that a dog can hear sounds which, if you relied on your ears alone, you probably wouldn't even know existed? This is because human ears—unlike dogs' ears—aren't sensitive to sounds much above 16,000 cycles. But even though you can't normally hear these sounds, don't assume that they aren't worth listening to. Tune in on the "ultrasonic" frequencies between 38,000 and 42,000 cycles, for example, and a burning cigarette sounds like a forest fire; the "secret" noises of animals and insects are clearly audible; and a tiny leak in your car's exhaust system becomes a steam whistle. If the idea intrigues you, you'll want to build the "Ultrasonic Sniffer" described on these pages. Its ingenious transistorized circuit picks up sounds in the 38 – 42 kc. range and "translates" them into frequencies low enough to be perfectly audible. To test the Ultrasonic Sniffer, one of the first things we did after receiving the prototype model was to point it at a Bulova "Accutron" electronic watch.



Appropriately (if startlingly) enough, we heard a periodic booming which sounded very much like Big Ben's chimes. Similar surprises await you, so why take a back seat to Rover? Get out your soldering gun and start that flux flowing now!

About the Circuit

The Ultrasonic Sniffer is similar in design to an ordinary superheterodyne receiver. Taking the place of the antenna is an ultrasonic transducer, or "microphone," of the type used in TV remote-control systems. When the transducer is plugged into jack J1, it forms a tuned circuit with coil L1. The circuit, analogous to a radio set's r.f. tuning coil and capacitor, resonates between about 37.5 and 42.5 kc. All sounds that the transducer picks up within this range are passed along to transistor Q1. Transistors Q1 and Q2, equivalent to the r.f. amplifiers of our hypothetical superhet receiver, amplify the 37.5 – 42.5 kc. ultrasonic signals. Oscillator Q4 provides a 37.5-kc. signal which is coupled to the base of mixer Q3 along with the 37.5 – 42.5 kc. signals from Q2. These signals combine in the mixer to generate "difference" frequencies between 0 and 5 kc. Capacitor C10 partially filters the "sum" frequencies from the output of Q3, but it leaves the 0 – 5 kc. "difference" frequencies virtually untouched.


The Ultrasonic Sniffer circuitry consists of seven transistor stages with all parts mounted on a printed circuit board, except jacks J1 and J2, and volume control R18. Circled letters indicate terminal points on the printed circuit board that connect to circuit components mounted on the cabinet. Click the image above for a larger, printable version of the schematic.

Q3, but it leaves the 0 – 5 kc. “difference” frequencies virtually untouched. The “difference” frequencies retain the basic “sound pattern” of the original, inaudible, ultrasonic signals. They lie well within the normal audio range, however, and need only amplification to be heard. Here the analogy to a superhet breaks down, since the output of a superhet mixer is an i.f. signal which must be “detected” (to extract the original audio modulation) before it can be heard. The output of Q3 is fed to Q5, which is connected as an “emitter follower.” Transistor Q5’s main function is to provide a proper impedance match between Q3 and volume control R18. It also acts as another filter, helping C10 to attenuate unwanted ultrasonic signals. Audio signals from R18 feed transistor Q6, which is the audio amplifier. And the output of Q6 is fed to transistors Q7 and Q8, which form an “augmented emitter follower.” Their output, available at jack J2, will match a set of low-impedance headphones. Power for the circuit is supplied by 9-volt battery B1, and controlled by switch S1.

Parts List

• B1—9-volt battery (Burgess 2MN6 or equivalent)
• C1, C4, C11, C16—0.01-μf., 150-volt ceramic disc capacitor (Centralab DM-103 or equivalent)
• C2, C5, C7—0.1-μf., 10-volt ceramic disc capacitor (Centralab UK10-104 or equivalent)
• C3—2-μf., 6-volt electrolytic capacitor (Lafayette CF-161 or equivalent)
• C6—820-μμf., 300-volt silvered-mica capacitor (Elmenco DM-15-681J or equivalent)
• C8—470-μμf., 1000-volt ceramic disc capacitor (Centralab DD-471 or equivalent)
• C9, C15—30-μf., 6-w.v.d.c. subminiature electrolytic capacitor (Lafayette CF-167 or equivalent)
• C10—0.005-μf., 150-volt ceramic disc capacitor (Centralab DD-M502 or equivalent)
• C12—0.001-μf., 1000-volt ceramic disc capacitor (Centralab DD-102 or equivalent)
• C13, C14—10-μf., 12-w.v.d.c. subminiature electrolytic capacitor (Lafayette CF-173 or equivalent)
• C17, C18—50-μf., 12-w.v.d.c. subminiature electrolytic capacitor (Lafayette CF-176 or equivalent)
• C19—100-μf., 12-w.v.d.c. subminiature electrolytic capacitor (Lafayette CF-177 or equivalent)
• J1—Shielded phono jack (Lafayette MS-593 or equivalent)
• J2—Phone jack to match plug on headset
• L1, L2—Special ultrasonic coil (Admiral 69C251-1-A)*
• Q1, Q2, Q3, Q4, Q5, Q6, Q7, Q8—2N1302 transistor (Texas Instruments, G.E.)
• R1, R4, R8—10,000 ohms
• R2, R20—27,000 ohms
• R3, R11, R12, R14, R15, R19, R21, R27—4700 ohms
• R5, R6, R9, R17—2200 ohms
• R7—33,000 ohms**
• R10, R24—47,000 ohms**
• R13—1000 ohms**
• R16, R22—100 ohms**
• R18—5000-ohm potentiometer, audio-taper (with s.p.s.t. switch S1)
• R23, R28—220 ohms
• R25—470 ohms
• R26—68 ohms
• S1—S.p.s.t. switch (on R18)
• 1—5-1/4” x 3” x 2-1/8” aluminum utility box (Bud CU-2106-A or equivalent)
• 1—Special printed circuit board*
• 1—Ultrasonic transducer (Admiral 78B147-1-G)*
• 1—100-to-1000 ohm headset (Telex HFX-91 or equivalent)
• 1-1/4” spacers, battery connector, knob, wire, solder, etc.

*The coils, printed circuit board, and transducer are available from Daniel Meyer [address deleted] for a total of $10.00, postpaid.
**All resistors ½-watt, 10%, unless otherwise specified.

Only those parts specified in the Parts List, or exact replacements with the same physical size, should be mounted on the printed circuit board. Odd replacement parts may make wiring the unit difficult.

Construction

Building the Ultrasonic Sniffer is a relatively simple job, thanks to the availability of a specially etched circuit board (see Parts List). All holes in the printed board should be drilled with a No. 60 drill from the copper side of the board. The holes for L1 and L2 will have to be drilled with a slightly larger drill, since the lugs on these parts are larger than the leads on the other components. Because of the critical space limitations on the board, the Parts List gives rather complete specifications for all capacitors. Try to pick up the exact units listed—but if you can't, be sure that whatever you do buy will fit. Keep in mind that the voltage ratings for the electrolytics are fairly critical, but anything over ten volts will do for the other capacitors.


Parts mounted on chassis cover are prewired with extra-long leads. Circled letters indicate connection points so indicated on schematic diagram.

There should be no problems in hooking up the components. The transistors and coils will only fit one way, and, with the exception of the electrolytics, there are no polarities to worry about. Polarities for each electrolytic capacitor must be carefully checked before you solder them in place. Use only rosin core solder when soldering components to the board. “Ersin” 60/40 is recommended and available from most radio supply stores. As a soldering aid, buff the board lightly with fine steel wool until the copper is bright and shiny; do this before inserting the components. Battery B1 is fastened in place by means of a piece of hookup wire. Pass the wire around B1 and through a couple of holes drilled in the board; then tie the ends of the wire together. The battery leads are wired to the board at points F and G (see schematic diagram) . The remainder of the construction is relatively easy. Potentiometer R18 and jacks J1 and J2 are mounted on a 5-1/4” x 3” x 2-1/8” aluminum utility box. The board is held in place in the box by means of four 1-1/4” spacers. These are installed at the corners of the box top and placed to mate with the mounting holes in the board. Run the “hot” leads from J1 and J2 to points A and C on the board, respectively. Potentiometer R18’s arm is wired to D; the “high” end of R18 goes to B; and switch S1’s terminals run to E and F. Finally, solder the ground lead (from the “frame” terminal of J2 and the “low” end of R18) to the ground bus which circles the board at its edges. Complete the construction by running a bead of solder around one of the 1-1/4” spacers where it touches the ground bus. This will ensure good contact between the board and the box.

Adjustment

Before closing up the utility box, you must adjust coils L1 and L2. For this task, you'll need a signal generator that can be tuned up to 38 kc. Begin by plugging a headset of 400 ohms impedance or higher into J2, and the input transducer (microphone) into J1. This is important, as the tuning of L1 is affected by the transducer. Turn switch S1 on, and set potentiometer R18 to about the middle of its travel. Set the signal generator to 37.5 kc., turn the generator output control to minimum, and loosely couple the output lead to the insulated wire connecting J1's center contact to L1. You can do this by looping two or three turns of the generator output lead around the J1-L1 lead wire. Now turn up the generator output control until a tone is heard in the phones, and adjust the slug of coil L2 until the tone "zero beats." Next, retune the generator to 38 kc., producing a 500-cps tone in the phones. Adjust the slug of L1 for maximum headphone volume, reducing the generator output as necessary to avoid overloading. Remove the generator lead from the J1-L1 lead, install the chassis cover, and the Ultrasonic Sniffer is ready.

After the unit is wired (left, above), close up the chassis box to protect the parts; then connect the microphone (right) to jack J1, plug in the headset, and advance the volume control until a rushing sound is heard.

Operation and Applications

Don't expect to be overwhelmed with sound as soon as you turn the unit on. Though there are many ultrasonic sounds to be heard, these high frequencies are easily blocked and absorbed. Furthermore, the transducer element is quite directional. A good test for proper operation is to rub your fingers together (lightly) at arm's length from the transducer. With R18 set at maximum, you should be able to hear the sound clearly. Now have someone jingle a bunch of keys from 10 to 15 feet away; this sound, too, should be clearly heard.

Insect and animal life provides a fascinating source of ultrasonic sound. Take the unit out to a wooded area some evening and probe around the trees and bushes. You should be rewarded with ultrasonic "signals" from tree locusts, tree frogs, and other wildlife. If you happen to live in an area where bats are common, you'll be able to hear the pulses these animals send out to find their way or locate food. They begin at about 100 kc., then shift downward to about 20 kc., and can be detected as they pass through the sensitive range of the transducer.

On a more practical level, gases escaping under pressure generate high intensities of ultrasonic sound. For this reason, the Sniffer makes an excellent "leak detector." It can be used, for example, to check auto exhaust systems for tightness. The author has even employed the unit to set valve tappets on his car. Since the microphone is very directional, it can be aimed to hear sounds from one valve only. The tappet-adjusting nut is turned (while the engine is running) until no clicks are heard.

One last tip: the instrument is an excellent tool for testing ultrasonic remote-control transmitters for TV sets. Each control button should produce a tone, and all tones should be of about the same magnitude. With a little experience, you'll be able to spot malfunctions quickly. PE

About the Author

Daniel Meyer (February 6, 1932 – May 16, 1998) was the founder and president of Southwest Technical Products Corporation. He earned a bachelor's degree in mathematics and physics in 1957 from Southwest Texas State and became a research engineer in the electrical engineering department of Southwest Research Institute. He soon started writing hobbyist articles: the first appeared in Electronics World (May 1960), and later he had a two-part cover feature in Radio-Electronics (October and November 1962). The March 1963 issue of Popular Electronics featured his ultrasonic listening device on the cover (featured here). The projects would often require a printed circuit board or specialized components that were not available at the local electronics parts store (by design!), but readers could purchase them directly from Dan Meyer. He saw the business opportunity in providing circuit boards and parts for Popular Electronics projects, and in January 1964 he left Southwest Research Institute to start an electronics kit company. He continued to write articles and ran the mail-order kit business from his home garage in San Antonio, Texas.




History of Technology

On The Origins Of AND-Gate Logic

By H.R. (Bart) Everett

A Serbian immigrant to the United States, Nikola Tesla is generally recognized as one of the most prolific inventors of all time, having introduced the concept of alternating current (AC) and subsequently designed the first hydroelectric AC power plant installation at Niagara Falls. Beginning in 1893, Tesla also produced and operated numerous radio-controlled contrivances in his New York laboratory. In 1897, he constructed a radio-controlled "teleautomaton" in the form of a model boat (Figure 1), which was privately demonstrated for investors at the first Electrical Exposition at Madison Square Garden in September of 1898.



Figure 1 • Tesla’s radio-controlled boat, approximately 4 feet long by 3 feet high, employed a series of storage batteries E to power propulsion motor D (bottom center) and steering motor F (adapted from US Patent No. 613,809).

Tesla applied for a patent on his "Method of and Apparatus for Controlling Mechanisms of Moving Vessels or Vehicles" on July 1, 1898, and was awarded US Patent No. 613,809 on November 8, 1898, the significance of which is twofold. First, as of this writing, it is the first known reduction to practice of an unmanned radio-controlled system. Second, Tesla's prototype was considerably more advanced in both theory and execution than other 19th-century designs that soon followed. Remote control of multiple subsystems was accommodated, for example, as opposed to the simplistic on-off execution of but a single function. Perhaps the most noteworthy implication of Tesla's radio-controlled automaton, however, was demonstration of the basic "AND-gate" function that would become an indispensable element of all subsequent electronic and computer logic. His inspiration for this concept reportedly came from studying the work of Victorian biologist and philosopher Herbert Spencer regarding the combined action of two or more nerves in the human body. Tesla's original implementation employed two sets of transmitters and receivers operating on different radio frequencies to trigger a pair of detector relays.




Figure 2 • Tesla’s dual-receiver design provided a relay-based AND-gate function that allowed the contacts of R3 to close only when both R1 and R2 were activated by their respective signals (digitally enhanced from US Patent No. 725,605, awarded April 14, 1903).

Both these relays had to close at the same time in order to energize a third, which in turn incremented a mechanical escapement driving a rotary switch to decode the command. The motivation behind this dual-channel approach, which Tesla dubbed “the art of individualization,” was to minimize the chances of false activation of the control circuitry due to outside interference. This logical-AND feature was not described in Tesla’s patent application dated July 1, 1898, however, for reasons he later explained in My Inventions (1919): “It is unfortunate that in this patent, following the advice of my attorneys, I indicated the control as being effected through the medium of a single circuit and a well-known form of detector, for the reason that I had not yet secured protection on my methods and apparatus for individualization.” Following successful resolution of a potential interference with a patent submitted by Reginald Fessenden, Tesla was finally awarded US Patent No. 725,605 on April 14, 1903. The AND-logic employed by the receiver circuitry depicted in this document is shown in Figure 2.
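For readers who think in code, here is a minimal sketch of the scheme just described. It is a hypothetical model of ours, not anything from the patent; in particular, the six-position escapement is an invented detail for illustration only.

// The third relay (R3) pulls in only when both detector relays are
// energized at once—the relay-based AND-gate.
function r3Closed(r1Energized, r2Energized) {
  return r1Energized && r2Energized;
}

// Each closure of R3 advances the mechanical escapement one step,
// stepping the rotary switch to the next command state.
var escapementPosition = 0;
function onReceiverUpdate(r1Energized, r2Energized) {
  if (r3Closed(r1Energized, r2Energized)) {
    escapementPosition = (escapementPosition + 1) % 6; // six states: illustrative only
  }
}

onReceiverUpdate(true, false); // interference on one channel alone: no action
onReceiverUpdate(true, true);  // both channels detected: the command advances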




A similar logical-AND implementation, however, had been previously disclosed in a patent application submitted by English inventors Ernest Wilson and Charles John Evans on July 28, 1898. US Patent No. 663,400 was subsequently awarded to them on December 4, 1900. In his 1916 book, Radiodynamics, Benjamin Miessner indicates that Ernest Wilson was granted an English patent in 1897 for “wireless control of dirigible self-propelled vessels”: “The primary object of this invention was to provide a weapon for use in naval warfare, which, if in the form of a dirigible torpedo, controlled from a shore or ship wireless installation, would be most deadly in its effect on a hostile fleet.” Miessner claimed at the time that Wilson never reduced his idea to practice. On the other hand, in Chapter XXIX of History of Communications-Electronics in the United States Navy (1963), Captain (Ret.) Linwood S. Howeth reports: “In 1887, Englishmen E. Wilson and C. J. Evans were successful in controlling slow-moving boats by radio on the Thames River. In 1900 they were granted U.S. Patent No. 663,400 on their method.”

Figure 3 • Two separate broadband receivers were employed, their antennae pairs 1-2 and 6-7 arranged orthogonally (i.e., one vertical and one horizontal) for two-channel selectivity (adapted from US Patent No. 663,400, 1900).




Figure 4 • The two separate spark-gap transmitter configurations had their dipole antenna pairs 11-12 and 21-22 arranged orthogonally in similar fashion to the receiver (adapted from US Patent No. 663,400, 1900).

The cited date "1887" above was probably a typographical error that should have read "1897," as their receiver design employed a coherer-type detector, which had not been introduced by Edouard Branly until 1890. Furthermore, the US patent application was submitted on July 28, 1898, for which the pair received US Patent No. 663,400 on December 4, 1900. Born in 1863, Ernest Wilson was an assistant professor in the Electrical Engineering Department at King's College, London, from 1897 to 1898, at which time he was promoted to full professor. On March 26, 1898, he and Evans were awarded British Patent No. 7,382, titled "Improvements in Methods of Steering Torpedoes and Submarine Boats." Their radio-control scheme addressed steering only, employing two separate transmitters and receivers for deflecting the rudder both left and right. Two orthogonal antenna dipoles were used to achieve signal discrimination, as shown in Figure 3. The accompanying dual-transmitter configuration is schematically presented in Figure 4. The high voltages generated by Ruhmkorff induction coils 15 and 23 were presented across spark gaps associated with antenna pairs 11–12 and 21–22, which were orthogonally aligned with respect to one another. The underlying theory of this arrangement was that the vertically oriented dipole 11–12 would have maximum coupling with the vertical receiving antenna 1–2 shown earlier in Figure 3, and minimal coupling with the horizontally aligned antenna 6–7. Conversely, the horizontally oriented transmitting antenna 21–22 would couple with receiving dipole 6–7 and not 1–2. The relay logic of Figure 5 was used to decode this either-or-both scenario to allow a third command state, the first two of course being turn left and turn right.

Figure 5 • The AND-gate configuration of Wilson and Evans shows how circuit 49 was activated whenever relays 4 and 10 were both energized (adapted from US Patent No. 663,400, 1900).

The untuned receiver depicted on the left side of the diagram contains the standard components of battery 3, coherer 5, and relay 4 (which opens and closes circuit 48). An identical receiver configuration is shown on the right. Situated between them, however, is a third relay with two separate field coils 19 and 20, individually wired in series with receiver relays 4 and 10, respectively. The sensitivity of this third relay was adjusted such that it would pull in only when both of its field coils were energized, which happened only when the two receivers simultaneously detected a control signal. This AND-gate function, which served to close circuit 49 when both receivers detected a radio signal, was fully described and illustrated in US Patent No. 663,400, filed July 28, 1898, which also included the alternative embodiment of Figure 6.

Figure 6 • An alternative logical-AND configuration using double-pole-single-throw detection relays to activate circuit 51 whenever relays 4 and 10 were both energized (adapted from British Patent No. 7,382, 1898).

As previously discussed, Nikola Tesla did not submit his comparable "art of individualization" patent application until some two years later, on July 18, 1900, but afterwards claimed to have incorporated the technique on his first radio-controlled prototype boat. Surviving photographs of this vessel, however, reveal only a single antenna of the type depicted in the patent drawing reproduced earlier as Figure 1. PE

About the Author Commander (Ret.) H.R. (Bart) Everett is the former Technical Director for Robotics at the Space and Naval Warfare Systems Center Pacific in San Diego, California. In this capacity he has served as Technical Director for the Idaho National Laboratory (INL) Advanced Unmanned Systems Development Program funded by the Office of the Secretary of Defense (OSD), Technical Director for the Army’s Mobile Detection Assessment Response System (MDARS) robotic security program, and chief engineer for the USMC Ground Air Robotic System (GATORS). He is the former Director of the Office of Robotics and Autonomous Systems (SEA-90G), Naval Sea Systems Command, Washington, DC, and has been active in the field of robotics for over 50 years, with personal involvement in the development of over 40 mobile robotic systems, with an emphasis on sensors and autonomy. He has published more than 125 technical papers and reports (including several books), and has 21 related patents issued or pending. He serves on the editorial board for Robotics and Autonomous Systems magazine and is a member of IEEE and Sigma Xi. This article draws from his book, Unmanned Systems of World Wars I and II, MIT Press, 2015. Find him on Twitter: @HRBartEverett.



Haptics

The Future of the Touchscreen Is Touchless

Haptics Brings the Sense of Touch to the Virtual World

By John Schroeter

If seeing is believing, then feeling has got to be downright convincing. But feeling something that really isn't there? Ultrahaptics brings the long-neglected sense of touch to the virtual world. And now you can add it to your own products and customer experiences.

Anyone taking in Disneyland's 1994 3D attraction Honey, I Shrunk the Audience was treated to a glimpse of the future of virtual reality. The film was synchronized with several special effects built into the theater itself, including air jets mounted beneath the seats that pulsed shots of air right onto the ankles of the audience members, quite realistically simulating the feel of scurrying mice as they were virtually let loose in the theater. I still remember the screams of surprise.

Things have come a long way since then. Air jets and vortices have given way to a new generation of technologies, including ultrasound, which enables highly nuanced tactile effects—haptics—and now promises to revolutionize user experiences in augmented and virtual reality. Haptics is the science of touch. Ultrahaptics, the company, is taking haptics to a whole new realm, creating that sense of touch in midair. The applications of touch are as limitless as sight and sound: think virtual force fields, touchless dials complete with the clicking feel of detents, holographic buttons, sliders, and switches with which you can control a myriad of devices—your music, thermostat, lighting, your car's infotainment system . . . pretty much anything. You can now interact in a natural, intuitive way with any virtual object.

Ultrahaptics' technology uses ultrasound to generate a haptic response directly onto the user's bare hands. Gesture controls can be used to operate infotainment systems, such as in-car audio and connected-car applications, more intuitively. Watch Ultrahaptics in action here.

Founded in 2013, Ultrahaptics grew out of research conducted by Tom Carter as a student at the University of Bristol in the UK. There he worked under the supervision of computer science professor Sriram Subramanian, who ran a lab devoted to improving human-computer interaction. Subramanian, who has since moved to the University of Sussex, had long been intrigued by the possibilities of haptic technologies but hadn't brought them to fruition for want of solving the complex programming challenges. That's where Carter comes in.

How will the driving of the future look? Bosch presented its vision at CES 2017 with a new concept car. Alongside home and work, connectivity is turning the car into the third living space. The concept car includes gesture control with haptic feedback. Developed with Ultrahaptics, the technology uses ultrasound sensors that sense whether the driver's hand is in the correct place and then provide feedback on the gesture being executed. Image courtesy of Bosch. For more on Bosch innovations, click here.

With the fundamental programming problems solved, the company's solution works by generating focused points of acoustic radiation force—a force generated when the ultrasound is reflected onto the skin—in the air over a display surface or device. Beneath that surface lies a phased array of ultrasonic emitters (essentially tiny ultrasound speakers), which produce steerable focal points of ultrasonic energy with sufficient sound pressure to be felt by the skin. Using proprietary signal processing algorithms, the array of ultrasonic speakers, or "transducers," generates the focal points at a frequency of 40 kHz. The 40 kHz carrier is then modulated at a lower frequency within the perceptual range of feeling in order to allow the user to feel the desired haptic sensation. Ultrahaptics typically uses a modulation frequency from 1–300 Hz, corresponding to the peak sensitivity of the tactile receptors. Modulation frequency is one of the parameters that can be adjusted by the API to create different sensations. The location of the focal point, determined by its three-dimensional coordinates (x, y, z), is programmed through the system's API.

Beyond creating a superior human-computer interaction with more intuitive, natural user experiences, the technology is also finding applications in use cases spanning hygiene (don't touch that dirty thing), accessibility (enabling the deaf and blind), and safer driving experiences. Really, the possibilities are endless: if you can control an electronic device by touch, chances are you can go touchless with haptics.

A Sound Foundation

Ultrasound, in terms of its physical properties, is nothing more than an extension of the audio frequencies that lie beyond the range of human hearing, which generally cuts off at about 20 kHz. As such, ultrasound devices operate at frequencies from 20 kHz on up to several gigahertz. Ultrahaptics settled on a carrier frequency of 40 kHz for its system.

Not only can humans not hear anything above 20 kHz, we can't feel it, either. The receptors within human skin can only detect changes in the intensity of the ultrasound. The 40 kHz ultrasound frequency must therefore be modulated at far lower frequencies that lie within the perceptual range of feeling, which turns out to be a fairly narrow band of about 1–400 Hz.

As to how we feel ultrasound, haptic sensation is the result of the acoustic radiation force that is generated when ultrasound is reflected. When the ultrasound wave is focused onto the surface of the skin, it induces a shear wave in the skin tissue. This in turn triggers mechanoreceptors within the skin, generating the haptic impression. (Concern over absorption of ultrasound is mitigated by the fact that 99.9% of the pressure waves are fully reflected away from the soft tissue.)
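A small numerical sketch (our own JavaScript illustration, not Ultrahaptics code) makes the distinction concrete: the skin ignores the 40 kHz carrier but tracks its lower-frequency envelope.

// 40 kHz carrier, amplitude-modulated at 200 Hz (within the 1–400 Hz band).
var CARRIER_HZ = 40000;
var MODULATION_HZ = 200;

function drivePressure(t) {
  var envelope = 0.5 * (1 + Math.sin(2 * Math.PI * MODULATION_HZ * t)); // what the skin feels
  var carrier = Math.sin(2 * Math.PI * CARRIER_HZ * t);                 // what the skin ignores
  return envelope * carrier;
}

// Print a few samples across one 5-ms modulation period:
for (var i = 0; i <= 10; i++) {
  var t = (i / 10) * (1 / MODULATION_HZ);
  console.log(t.toFixed(5) + ' s -> ' + drivePressure(t).toFixed(3));
}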

Feeling is Believing

Ultrahaptics' secret sauce, as you might imagine, lies in its algorithms, which dynamically define focal points by selectively controlling the respective intensities of each individual transducer to create fine haptic resolutions, resolving gesture-controlled actions with fingertip accuracy. When several transducers are focused constructively on a single point—a point being defined by its x, y, z coordinates—the acoustic pressure increases to as much as 250 pascals, which is more than sufficient to generate tactile sensations. The focal points are then isolated by the generation of null control points everywhere else. That is, the system outputs the lowest intensity ultrasound level at the locations surrounding the focal point. In the algorithm's final step, the phase delay and amplitude are calculated for each transducer in the array to create an acoustic field that matches the control point, the effect being that ultrasound is defocused everywhere in the field above or below that controlled focus point.

How does Ultrahaptics' mid-air touch with ultrasound work? The modulated ultrasound waves, which are precisely controlled, are transmitted from an array of transducers such that the resulting interference pattern creates focal points in midair, as indicated in green. See the full video here.

Things get more interesting when modulating different focal points at different frequencies to give each individual point of feedback its own independent "feel." In this way the system is not only able to correlate haptic and visual feedback, but a complete solution can attach meaning to noticeably different textures so that information can be transferred to the user via the haptic feedback. The API gives the ability to generate a range of different sensations, including:

• Force fields: a use case could include domestic appliances—for example, a system warning that a user is about to put their hand on a hob that is not completely cool.
• Haptic buttons, dials, and switches: these are particularly interesting in the automotive industry, where infotainment controls, for example, can be designed to be projected onto a user's hand without the driver having to look at the dashboard.
• Volumetric haptic shapes: in a world where virtual and augmented reality could become part of our everyday lives, one of the missing pieces of the puzzle is the ability to feel things in a virtual world. Ultrahaptics' technology can generate different shapes, giving haptic resistance when users, immersed in a virtual world, expect to feel an object.
• Bubbles, raindrops, and lightning: the range of sensations that can be generated is vast, from a "solid" shape to sensations such as raindrops or virtual spiders. As well as being of interest to the VR gaming community, this will be extremely interesting for location-based entertainment.

These sensations are generated by modulation of the frequency and the wavelength of the ultrasound, and these options are some of several parameters that can be adjusted by the API to create different sensations. The location of the focal point, determined by its three-dimensional coordinates, is also programmed via the system's API.
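To make the shape of such an API concrete, here is a purely hypothetical JavaScript sketch. Every name in it (SensationEmitter, addFocalPoint) is invented for illustration and is not the actual Ultrahaptics API; it simply models the parameters described above: a focal point at (x, y, z), a modulation frequency, and an intensity.

// Hypothetical illustration only—not the real Ultrahaptics API.
function SensationEmitter() {
  this.focalPoints = [];
}
SensationEmitter.prototype.addFocalPoint = function (x, y, z, modulationHz, intensity) {
  // Coordinates in meters above the array; modulation in the 1–300 Hz tactile band.
  this.focalPoints.push({ position: [x, y, z], modulationHz: modulationHz, intensity: intensity });
};

var emitter = new SensationEmitter();
emitter.addFocalPoint(0.00, 0.00, 0.20, 200, 1.0); // a crisp "button click" 20 cm up
emitter.addFocalPoint(0.05, 0.05, 0.25, 120, 0.4); // a softer "raindrop" off to one side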

Gesture Tracking Gets Touchy

Gesture control, of course, requires a gesture-tracking sensor/controller—for example, the Leap Motion offering, which Ultrahaptics has integrated into its development and evaluation system. The controller determines the precise position of a user's hands—and fingers—relative to the display surface (or hologram, as the case may be). Stereo cameras operating with infrared and augmented to measure depth provide high-accuracy 3D spatial representation of the gestures that "manipulate" the active haptic field. The system can use any camera/sensor; the key is its ability to reference the x, y, z coordinates through the Ultrahaptics API.


In the Interest of Transparency

Another key component of a haptic feedback system is the medium over which the user interacts—in this case, a projected display screen or device, beneath which is the transducer array. The chief characteristic of the display surface/device is its degree of acoustic transparency: the display surface must allow ultrasound waves to pass through without defocusing and with minimum attenuation. The ideal display would therefore be totally acoustically transparent. The acoustics experts at Ultrahaptics have found that a display surface perforated with 0.5 mm holes and 25% open space reduces the impact on the focusing algorithm while still maintaining a viable projection surface.

In time, we may see acoustic metamaterials come into play. By artificially creating a lattice structure within a material, it is possible to correct for the refraction that occurs as the wave passes through the material. This would enable the creation of a solid material that permits a selected frequency of sound to pass through it. A pane of glass manufactured with this technique would provide the perfect display surface. It has also been shown that such a material could enhance the focusing of the ultrasound by acting as an acoustic lens. But, again, we'll have to wait for this; acoustic metamaterial-based solutions are only beginning to emerge. In the meantime, surface materials that perform well include woven fabrics, such as those that would be used with speakers; hydrophobic acoustic materials, including the range from Saati Acoustex, which also protect from dust and liquids; and perforated metal sheets.

Generating 3D Shapes

Ultrahaptics' system is not limited to points, lines, or planes; it can actually create full 3D shapes—shapes you can reach out to touch and feel, such as spheres, pyramids, prisms, and cubes. The shapes are generated with a number of focal points, each projected at an (x, y, z) position and moved as its position is updated at the chosen refresh rate. Ultrahaptics continues to investigate this area to push the boundaries of what can be achieved.
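As a rough illustration of the idea (our own geometry sketch, not vendor code), a sphere can be rendered as a small set of focal points sampled on its surface and re-sent to the array at the refresh rate:

// Spread `count` focal points evenly over a sphere using golden-angle spacing.
function spherePoints(cx, cy, cz, radius, count) {
  var points = [];
  for (var i = 0; i < count; i++) {
    var theta = Math.acos(1 - (2 * (i + 0.5)) / count); // latitude
    var phi = Math.PI * (1 + Math.sqrt(5)) * i;         // longitude
    points.push([
      cx + radius * Math.sin(theta) * Math.cos(phi),
      cy + radius * Math.sin(theta) * Math.sin(phi),
      cz + radius * Math.cos(theta)
    ]);
  }
  return points;
}

// e.g., eight points on a 5 cm sphere floating 20 cm above the array,
// recomputed and re-sent every frame at the chosen refresh rate.
var sphere = spherePoints(0, 0, 0.2, 0.05, 8);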

Evaluation & Development Program For developers looking to experiment and create advanced prototypes with 3D objects or AR/VR haptic feedback, the Ultrahaptics Evaluation

135

technicacuriosa.com / Vol.1 • No.1


Haptics Touchless Touchscreen

Programme includes a development kit (UHEV1) providing all the hardware peripherals (transducer array, driver board, Leap Motion gesture controller) and software, as well as technical support to generate custom sensations. The square form factor transducer platform comprises an array of 16×16 transducers—a total of 256 ultrasound speakers—driven by the system’s controller board, a system architecture consisting of an XMOS processor for controlling the transducers. The evaluation kit offers developers a plug and play solution that will work in conjunction with any computer that operates Windows 8 or above, or OSX 10.9 or above. In short, the kit has everything you need to implement Ultrahaptics in your product development cycle. The UHDK5 TOUCH development kit is available through Ultrahaptics distributor EBV and includes a transducer array board, gesture tracking sensor, software suite, and fully embedded system architecture (microprocessor and FPGA). The Sensation Editor library includes a range of sensations that can be configured to individual design requirements as well as enabling the development of tailored sensations using the software suite provided.



Virtual Reality

The Fundamental Interface for VR/AR

By Alex Colgan

Virtual and augmented reality (VR/AR) are the next great frontiers in computing. Like the internet revolution of the 1990s and the mobile revolution of the 2000s, they represent a radical shift in how we live, work, play, learn, and connect. The next major step in this evolution is the second generation of mobile VR, which is just around the corner. But one major challenge remains—how we interact with it.



View of the HTC Vive headset. Image courtesy of HTC.

It’s hard to imagine the internet revolution without the mouse and keyboard, or the mobile revolution without the touchscreen. Input is an existential question inherent in each new computing platform, and answering that question means the difference between mainstream adoption and niche curiosity. At Leap Motion, we’ve developed an intuitive interaction technology designed to address this challenge—the fundamental input for mobile VR.

The VR Landscape in 2017 (and Beyond)

Google Daydream mobile VR headset. Image courtesy of Google.


We live in the first generation of virtual reality, which consists of two major families: PC and mobile. PC headsets like the Oculus Rift and HTC Vive, running on powerful gaming computers and packed with high-quality displays, enable premium VR experiences. Using external sensors scattered around a dedicated space, your head movements and handheld controllers can be tracked as you walk through virtual worlds. This gives you six degrees of freedom—orientation (pitch, yaw, roll) and position (X, Y, Z).

Mobile headsets like the Gear VR and Google Daydream are not tethered to a computer. Most use slotted-in mobile phones to handle processing demand, so their capabilities are limited compared to their PC cousins. For instance, they have orientation tracking, but not positional tracking. You can look around in all directions, but you can't move your head freely within the space.

Despite these limitations, mobile VR has several major advantages over PC VR. Mobile VR is more accessible and less expensive, and it can be taken anywhere. The barriers to entry are lower. The user community is already an order of magnitude greater than PC VR—millions, rather than hundreds of thousands. Finally, and most importantly, product cycles in the mobile industry are much faster than in the PC space. Companies are willing to move faster and take more risks, partly because people are willing to replace mobile devices more often than their computers. This means that second-generation mobile VR will explode many of the first generation's limitations—with new graphics capabilities, positional tracking, and more.

This brings us back to the critical challenge of 2017. The VR industry is at a turning point where it needs to define how people will interact with virtual and augmented worlds. The earliest devices featured buttons on the headsets and used gaze direction. More recently, we've seen little remotes capable of orientation tracking, though these aren't tracked or represented in 3D virtual space. By themselves, it's hard to make these inputs feel compelling. Handheld controllers are acceptable for PC-tethered experiences because people "jack in" from the comfort of dedicated VR spaces. They're also great optional accessories for certain game genres. But what about airports, buses, and trains? What happens when augmented reality arrives in our homes, offices, and streets?

An exploded view of the Leap Motion Controller.

We Need to Free Our Hands

Leap Motion has been working to remove the barriers between people and technology for seven years. In 2013, we released the Leap Motion Controller, which tracks the movement of your hands and fingers. Our technology features high accuracy with extremely low latency and processing requirements. At the dawn of the VR revolution in 2014, we discovered that these three elements were also crucial for total hand presence in VR. We adapted our existing technology and focused on the unique problems of head-mounted tracking. In February 2016, we released our third-generation hand tracking software specifically made for VR/AR, nicknamed "Orion."

Orion proved to be a revolution in interface technology for the space. Progress that wasn't expected for 10 years was suddenly available overnight across an ecosystem of hundreds of thousands of developer devices, all on a sensor designed for desktop computing. In December 2016, we announced the Leap Motion Mobile Platform—a combination of software and hardware we've made specifically for untethered, battery-powered VR/AR devices.

The challenges to build a tracking platform for mobile VR have been immense. We needed a whole new sensor with higher performance and lower power. We needed to make the most sophisticated hand tracking software in the world run at nearly 10 times the speed. And we needed it to be capable of seeing wider and farther than ever, with a field of view (FOV) of 180×180 degrees. With all that in mind, here's a quick look at how the technology brings your hands into new worlds.

An exploded view of a VR headset with the Leap Motion Mobile VR Platform (second from left) as a core component.

The Leap Motion Mobile VR Platform hardware.

Hardware

Leap Motion has always taken a hybrid approach with hardware and software. From a hardware perspective, we have built up a premier computer vision platform by optimizing simple hardware components. The heart of the sensor consists of two cameras and two infrared (IR) LEDs. These cameras are placed 64 mm apart to replicate the average interpupillary distance between a human's eyes. In order to create the largest tracking volume possible, we use wide-angle fisheye lenses that see a full 180×180-degree field of view. Because the module will be embedded in a wide range of virtual and augmented reality products, it was important to make it small. Despite the extremely wide field of view, we were able to keep the module under 8 mm in height.

A floating hand menu from the PC version of Blocks.

The Leap Motion Mobile VR Platform Hardware

The module is embedded into the front of the headset and captures images at over 100 frames per second when tracking hands. The LEDs act as a camera flash, pulsing only as the sensor is creating a short exposure. The device then streams the images to the host processor with low latency to be processed by the Leap Motion software.


Software

Once the image data is streamed to the processor, it's time for some heavy mathematical lifting. Despite popular misconceptions, our technology doesn't generate a depth map. Depth maps are extraordinarily compute-intensive, making them difficult to process with minimal latency across a wide range of devices, and they are information-poor. Instead, the Leap Motion tracking software applies advanced algorithms to the raw sensor data to track the user's hands. In under three milliseconds, the software processes the image, isolates the hands, and extracts the 3D coordinates of 26 joints per hand with great precision.

Because Leap Motion brings the full fidelity of a user's hands into the digital world, the software must not only track visible finger positions, but also track fingers that are occluded from the sensor. To do that, the algorithms must have a deep understanding of what constitutes a hand, so that not seeing a finger can be just as important to locating its position as if it were fully visible.

After the software has extracted the position and orientation of hands, fingers, and arms, the results—expressed as a series of frames, or snapshots, containing all of the tracking data—are sent through the API. From there, the application logic ties into the Leap Motion input. With it, you can reach into VR/AR to interact directly with virtual objects and interfaces. To make this feel as natural as possible, we've also developed the Leap Motion Interaction Engine—a set of parallel physics systems designed specifically for the unique challenge of real hands in virtual worlds.
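For a taste of what consuming those frames looks like, here is a minimal sketch using the publicly available LeapJS client library for the desktop controller (the mobile platform's API may differ):

// npm install leapjs
var Leap = require('leapjs');

Leap.loop(function (frame) {
  // Each frame is one snapshot of the tracking data described above.
  frame.hands.forEach(function (hand) {
    console.log(
      hand.type + ' palm at [' + hand.palmPosition.join(', ') + '] mm,',
      hand.fingers.length + ' fingers tracked'
    );
  });
});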

Virtual reality has been called "the final compute platform" for its ability to bring people directly into digital worlds. We're at the beginning of an important shift as our digital and physical realities merge in ways that are almost unimaginable. Today magic lives in our phones and in the cloud, but it will soon exist in our physical surroundings. With unprecedented input technologies and artificial intelligences on the horizon, our world is about to get a lot more interactive—and feel a lot more human. PE



Virtual Reality




I want to paint a picture of VR beyond the stereotype and posit that rather than isolating, VR is going to be the most unifying communications medium that's ever existed. Rather than pulling us further apart, VR is going to bring us closer together.

VR is Going to Bring us Closer Together

Transitions in communications technologies have been met with skepticism throughout history. Plato worried that moving from oral to written history would inhibit our reasoning abilities, and Thoreau thought communications by telegraph would be too mundane to matter. Images of seas of people with black boxes strapped to their heads do little to help counter the perception that VR is a tool for escape from others. Despite that skepticism, we have constantly innovated on faster and richer ways to communicate. Sharing experiences with friends, loved ones, and colleagues is one of the most emotionally rewarding parts of life. So even while we fear that new technology may disconnect us, we have continued to strive for, and find, deeper ways to connect.

VR is Worth a Thousand Words

Let's talk about what we mean when we say, "VR is a communications medium." Think of a spectrum representing ways we communicate. On one end let's put "sending a letter," and on the other let's put "being together in person." We would all agree that writing a letter is dramatically different than talking in person, but, specifically, what do we gain by being together?

• Non-verbal communication: We point, gesture, talk with our hands, lean toward or away from each other, fold our arms, cross our legs, make eye contact, and hear each other in directional, binaural audio.
• Personal space: We are intimately aware of the distance between us and others, and this distance, or closeness, makes us feel emotionally connected.
• Interactivity: When we're in the same place, we can simultaneously interact with our environment—I can hand you something, show you something, or move something and you'll notice.


• Synchronousness: In the same room we can banter back and forth with no delay.

As we move across the spectrum, we gain more of the characteristics of being together, and communication becomes more natural and interactive. Moving from a letter to a text message, for example, makes the experience more synchronous. Going from a text to a phone call makes the interaction close to synchronous and adds tone of voice, an important nonverbal cue. Adding video yields facial expressions and some body language.

The chasm between a video call and being in the same room together is vast, however. We're missing eye contact, gestures, and any sense of personal space, and we have very limited interactivity. Video chat, the most technologically advanced electronic communication medium available to most people, falls dramatically short of the experience of being in the same room together.

Group communications are particularly challenging. For example, five people in a conference room can have two different conversations simultaneously. Five people on a video chat definitely cannot. People in a conference room can draw on a whiteboard, play a game of cards, or play catch. People on a video chat can only sort of do some of these things. People in a room together can make eye contact; lack of eye contact is one of the most frustrating parts of video chat.

Of course, these electronic media are incredibly convenient to use. There is no need to sit in traffic or get on an airplane to video chat or voice call. You can instantly connect with almost anyone in the world at a marginal cost of close to zero. For those of us who can remember a time when "long distance charges" represented a substantial portion of our personal expenses, the freedom that comes from modern electronic communication has been revolutionary.

On the spectrum of communications, VR is somewhere between video chat and being in the same room. In AltspaceVR, communication is more natural than on a video chat—we feel a sense of physical space, and we can make eye contact and gesture at things to communicate nonverbally. We hear each other from where we are in the virtual room, making multiperson interactions easier. We can interact with things—physical objects, videos, images—giving us that context of communication that we have when we are together. And now, with VR Call, we can easily connect instantly in VR with anyone around the world.

People in the VR industry like to talk about "presence," or the feeling that in VR, you are in another place. What we believe will be even more transformative is the feeling that you are with another person. Even with the very best video conferencing technology, it is extraordinarily rare for people to feel as if they are truly together, yet in VR this feeling is common. One of our users described meeting his long-distance girlfriend in VR: "Like a good night out or a solid vacation, VR gave us something to mutually experience and reflect upon. It was superior, in that respect, to video chatting. We had silly moments to laugh about, friends to reference, and a strange new world to ponder the ramifications of."

Why VR will be Used Primarily for Communication

Like most people in this industry, I’ve read quite a few VR sci-fi books—it’s amazing to imagine a million people in an MMORPG (massively multiplayer online role-playing game), but I think it’s much more likely we’ll use VR to communicate with our friends and family. Taking it a step further, we think if VR will be used by hundreds of millions of people, then the primary use case for those people will be communication. If you haven’t had the experience of connecting in virtual reality with your friends, family, or loved ones, you really should do it. It’s dramatically different than any other way you communicate with them. The sense that the person you love is standing right there in front of you when they may be thousands of miles away is a feeling you will never forget. PE

About the Author Eric Romo is co-founder and CEO of AltspaceVR, a software company building the social platform for virtual reality. Prior to AltspaceVR, Eric was founder of GreenVolts, and was the 13th employee at SpaceX. Eric graduated from Cooper Union with a bachelor’s degree in mechanical engineering, and he earned a Master’s Degree in mechanical engineering and an MBA from Stanford University.



Internet of Things

By Dr. Qing Wang

Wireless communication based on the radio-frequency spectrum has revolutionized the way our societies work. After decades of evolution, it is now enabling us to build the Internet of Things (IoT). Exploiting the visible light spectrum could provide equally disruptive effects. Thanks to advancements in Visible Light Communications (VLC), LED lights can now be modulated at speeds comparable to radio-frequency technologies, making them competitive alternatives for wireless communication. Due to the high energy efficiency of LEDs, all light-emitting devices are rapidly becoming LED-based: car and city lights, billboards, home appliances, toys, etc. In the near future, we will have a new type of pervasive infrastructure to be networked: an Internet of Lights (IoL). IoL will integrate communication, sensing, and light, and create new pervasive smart environments for connected devices and objects—all centered around light as a communication medium.

IoL/VLC is promising, but it faces a challenge: research in this area depends heavily on experiments, yet there has been no open platform. Building a platform from scratch requires significant effort, and thus a large fraction of the research community has been unable to explore VLC. To tackle this problem and realize the Internet of Lights, the research team's initial step was to design and build an open-source VLC platform—OpenVLC—for fast prototyping of new system protocols and for building a network of lights. To shed light on the uniqueness of the platform, some technical details are introduced here.

The platform runs on a cost-effective yet powerful credit-card-sized embedded board, the BeagleBone Black. The software-defined approach of the platform allows easy reconfiguration of the system according to application needs. OpenVLC is composed of three parts:

1 • An embedded system readily available on the market (the BeagleBone Black, BBB).
2 • The OpenVLC hardware, i.e., the VLC transceiver (also called the OpenVLC cape).
3 • The OpenVLC software: a software-defined PHY and MAC layer, implemented as a Linux driver.

The OpenVLC hardware includes three optical components: a low-power LED, a high-power LED, and a photodiode. Several visible light communication channels can be enabled, such as transmission from a high-power LED to a photodiode, or transmission from a low-power LED to another low-power LED acting as a receiver.

The OpenVLC software is implemented as a Linux driver that communicates directly with the OpenVLC hardware and the Linux networking stack. In OpenVLC, the VLC interface is set up as a new communication interface that can take advantage of the vast range of Linux tools. The most recent driver can be downloaded from GitHub.
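Because the driver exposes VLC as just another Linux network interface, ordinary socket code runs over the light channel unchanged. Here is a minimal Node.js sketch; the addresses and port are our own assumption, with the OpenVLC interfaces on two boards configured as 192.0.2.1 and 192.0.2.2:

// Run on the board at 192.0.2.1; assumes a peer listening on port 9000
// at 192.0.2.2, reachable over the OpenVLC interface.
var net = require('net');

var client = net.createConnection({ host: '192.0.2.2', port: 9000 }, function () {
  client.write('hello over visible light\n');
});
client.on('data', function (data) {
  console.log('received: ' + data.toString());
  client.end();
});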

BeagleBone Black (BBB)

OpenVLC cape plugged into the BBB.

OpenVLC has already been used for research and teaching in more than 20 top universities and institutes, including ETH, EPFL, and Rice University, among others. Several companies, distinguished professors, and researchers are using OpenVLC in their product prototyping, research labs, and courses. Additionally, it received a very high level of attention in recent demonstration sessions at ACM MobiCom 2015 and IEEE/ACM IPSN 2016, and it won first prize in the 1st China (Shenzhen) Innovation Competition of International Talents—Munich Division (1/155).

OpenVLC can be harnessed in vehicle communications for traffic control and safety applications, both in vehicle-to-vehicle and vehicle-to-road frameworks. It can be easily integrated into vehicles through the onboard computer and the front and rear lights that are permanently on in most new vehicles. This can help with traffic management, inform drivers about relevant road conditions, and increase safety. For instance, a public administration could know whether a car has passed a vehicle inspection, know the vehicle's velocity and its distance to other vehicles, and verify that the insurance is in order, among other application scenarios. These applications could also take advantage of sensory data integrated into the vehicles, such as onboard diagnostics (OBD) technology. Currently there is an open call to make OpenVLC available to selected applicants with an exciting project idea. PE

About the research group The OpenVLC project is co-led by Prof. Domenico Giustiniano at the IMDEA Networks Institute, and by Dr. Qing Wang of the TU Delft.


OpenVLC driver implementation in the software stack of an embedded Linux system.

An application illustration: vehicle-to-vehicle communication using OpenVLC


Internet of Things

Hello, World Wide Web of Things

By Dominique Guinard and Vlad Trifa

The Internet of Things—IoT for short—is here to stay and to change our world for the better. This grand vision depicts a world where people, buildings, and physical objects are connected to a single and common network. In this world, bottles of soda, lighting systems, cars, and everything in between will provide services and exchange data with each other. You might have noticed that the Internet of Things feels very much like an Intranet of Things: to interact with 10 different devices from your phone, you have to install 10 different apps. The problem is that there's not a single "lingua franca" spoken by each and every object—there are literally hundreds!

An Architecture for the Web of Things



The worst part is that most of these IoT protocols and standards aren’t compatible with each other, and for this reason the IoT hasn’t (yet!) delivered on its promises. Connecting every thing to the internet and giving them each an IP address is only the first step toward realizing the Internet of Things. Things might easily exchange data with one another, but they won’t necessarily understand what that data means. This is what web protocols like HTTP brought to the internet: a universal way to describe images, text, and other media elements so that machines could “understand” each. The Web of Things—or WoT—is simply the next stage in this evolution: using and


The IoT versus the WoT.



The Web of Things—or WoT—is simply the next stage in this evolution: using and adapting web protocols to connect anything in the physical world and give it a presence on the World Wide Web. Just as the OSI layered architecture organizes the many protocols and standards of the internet, the WoT architecture is an attempt to structure the galaxy of web protocols and tools into a useful framework for connecting any device or object to the web. The WoT architecture stack is not composed of layers in the strict sense, but rather of levels that add extra functionality, as shown in the accompanying figure. Each layer helps to integrate Things into the web even more intimately, hence making those devices more accessible for applications and humans alike.

To illustrate what these layers bring to the IoT table, let us introduce the WoT Pi, a Raspberry Pi-based device running at EVRYTHNG in London. The WoT Pi is connected with a bunch of sensors (e.g., temperature, humidity) and actuators (e.g., an LCD screen, LEDs) that you can interact with across the internet. An internet-connected camera allows you to see the setup live. Check it out here: http://devices.webofthings.io/camera/sensors/picture/.

Layer 1: Access
This layer is responsible for turning any Thing into a Web Thing that can be interacted with using HTTP requests, just like any other resource on the web. In other words, a Web Thing exposes a REST API that allows interaction with something in the real world, like opening a door or reading a temperature sensor located across the planet.


The WoT architecture and its four application layers on top of the network layer.



To illustrate this, the sensors of our Pi can be accessed via a simple HTTP request to the following URL: http://devices.webofthings.io/pi/sensors/. Go ahead and try this in your browser. You'll get a human-friendly HTML representation with links to the sensors. Click on "temperature" and you'll get the temperature. What you are doing here is navigating the RESTful API of our Pi, just as you would when browsing a web page. IoT Things can be mapped to REST resources quite easily, as shown in the accompanying figure.

HTML is great for humans, but not always for machines, which prefer the JSON notation. Our Pi provides both. Run the following command in your terminal using cURL, a tool for communicating with HTTP APIs:

curl -X GET -H "Accept: application/json" "http://devices.webofthings.io/pi/sensors/humidity/"

You will see the humidity level in our London office, as JSON, in your terminal. This is the ideal first step to building your first application that expands the web into the real world!




REST resources of our connected Raspberry Pi.

This is all good, but many IoT scenarios are real-time and/or event-driven. Instead of your application continuously asking for data from our Pi, you want it to be notified when something happens in the real world—for example, when the humidity reaches a certain threshold or a noise is detected when something goes bump in the night. This is where another web protocol can help: WebSocket. The JavaScript code below is enough for a web page to automatically get temperature updates from the WoT Pi. Paste it into the console of your web browser, and you will see our Pi pushing the temperature every second to your browser.

var socket = new WebSocket('ws://devices.webofthings.io/pi/sensors/temperature/');
socket.onmessage = function (event) { // Called when a message is received
  var result = JSON.parse(event.data);
  console.log(result);
};

Layer 2: Find
Making things accessible via an HTTP and WebSocket API is great, but it doesn't mean applications can really "understand" what the Thing is, what data or services it offers, and so on.




This is where the second layer, Find, becomes interesting. This layer ensures that your Thing can not only be easily used by other HTTP clients, but can also be found and used automatically by other WoT applications. The approach here is to reuse web semantic standards to describe Things and their services. This enables searching for Things through search engines and other web indexes, as well as the automatic generation of user interfaces or tools to interact with Things. At this level, technologies such as JSON-LD come into use: a language for semantically annotating JSON. This is also where standards such as the Web Things Model and the work of the W3C WoT group help: they define an abstract set of REST resources that Things should offer.
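For a flavor of what such an annotation looks like, here is a minimal, illustrative JSON-LD snippet using the public schema.org vocabulary. The values describe the WoT Pi from earlier in this article, but the markup is our sketch, not the Web Things Model's normative format:

{
  "@context": "http://schema.org/",
  "@type": "Thing",
  "name": "WoT Pi",
  "description": "Raspberry Pi with temperature and humidity sensors",
  "url": "http://devices.webofthings.io/pi/"
}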

Layer 3: Share
The Internet of Things will only blossom if Things have a way to securely share data across services. This is the responsibility of the Share layer, which specifies how the data generated by Things can be shared in an efficient and secure manner over the web. At this level, another batch of web protocols helps. First, there's TLS, the protocol that makes transactions on the web secure. Then, delegated web authentication mechanisms such as OAuth can be integrated into our Things' APIs. Finally, we can also use social networks to share Things and their resources to create a Social Web of Things!
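In practice, sharing often comes down to attaching credentials to the same REST calls used at the Access layer. A minimal sketch, assuming the Thing's API is served over HTTPS and protected by an OAuth-style bearer token (both the token and the protected endpoint are hypothetical here):

curl -X GET \
  -H "Accept: application/json" \
  -H "Authorization: Bearer <access-token>" \
  "https://devices.webofthings.io/pi/sensors/humidity/"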

A physical mashup with Node-RED. When an intruder is detected via the PIR sensor, a picture is taken and sent to Twitter.

Layer 4: Compose
Finally, once Things are on the web (layer 1), where they can be found by humans and machines (layer 2), and their resources can be shared securely with others (layer 3), it's time to look at how to build large-scale, meaningful applications for the Web of Things. In other words, we need to understand the integration of data and services from heterogeneous Things into an immense ecosystem of web tools such as analytics software and mashup platforms. Web tools at the Compose layer range from web toolkits—for example, JavaScript SDKs offering higher-level abstractions—to dashboards with programmable widgets, and finally to physical mashup tools such as Node-RED, shown in the accompanying figure. Inspired by Web 2.0 participatory services, and in particular web mashups, physical mashups offer a unified view of the classical web and the Web of Things, and they empower people to build applications using data and services from Web Things without requiring programming skills.
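For a taste of what such a mashup looks like in code, here is a small browser JavaScript sketch that polls a motion sensor and reports a camera snapshot when motion is detected. The /pi/sensors/pir/ endpoint and its value field are our assumptions for illustration; only the camera URL appears elsewhere in this article:

// Poll a (hypothetical) PIR motion endpoint once per second and log a
// camera snapshot URL whenever motion is reported.
setInterval(function () {
  fetch('http://devices.webofthings.io/pi/sensors/pir/', {
    headers: { 'Accept': 'application/json' }
  })
    .then(function (res) { return res.json(); })
    .then(function (pir) {
      if (pir.value === true) { // field name assumed for this sketch
        console.log('Motion detected! Snapshot:',
          'http://devices.webofthings.io/camera/sensors/picture/');
      }
    });
}, 1000);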

Conclusion
The Web of Things is a high-level application protocol designed to maximize interoperability in the IoT. We hope this short introduction gave you a taste of its potential. Web technologies are widely popular and offer all the flexibility and features needed for the majority of future IoT applications, including discovery, security, and real-time messaging. While we have only flown over the ideas of the WoT, we hope this has sparked your interest in making IoT Things more accessible, thanks to the web. The Web of Things architecture is fully described in our book, Building the Web of Things. The book is also packed with code examples for the Raspberry Pi using Node.js. The code is open source and freely available from: https://github.com/webofthings/wot-book. PE




Internet of Things

ST Provides Tools Needed to Implement Recommendations of FBI and Other Agencies

The Internet of Things represents the next stage of the web's evolution. It is rapidly bringing to familiar devices—kitchen appliances, home cameras, residential security systems, and the like—the same interconnectivity that computer users have been enjoying for many years. This connectivity brings an exciting new level of convenience and efficiency: imagine being able to look in on the kids via your smartphone from the office, check the contents of the fridge before you leave work to see what you might need from the store, or adjust the temperature of your home to suit your comfort.




Unfortunately, with its increasing adoption, we're learning that the Internet of Things also represents the next generation of security threats. By definition, every IoT device needs some amount of computing capability; it also needs to be connected to the net. Hackers have been able to take advantage of these two attributes to turn some IoT devices into foot soldiers in the latest wave of malware attacks.

The most dramatic example of this occurred in October 2016, when the sites for CNN, Netflix, and Twitter, among many others, went offline. It emerged that they had been the subject of a Denial of Service attack on a scale never before seen. This occurred when hackers took advantage of the fact that many IoT devices come equipped with default passwords that most users never change. Find such a device, which is easy to do with automated web scanning tools, and you can log in, take control, and get the device to do almost anything you want it to. In the case of the October attack, the hackers installed a piece of malevolent software known as "Mirai" that took control of each device and added it to a collection of tens of thousands of similarly infected units, all part of a "botnet" that could be controlled by a single user. The Mirai botnet flooded the web with terabytes of useless data.


It was so much data that many sites were unable to maintain normal operations, at least not until engineers concluded a massive clean-up operation. Webcams are ideal for this sort of data-intensive tsunami because of the high bit rates associated with video.

The October attack, while inconveniencing countless users, had two useful side effects. First, it demonstrated to IoT vendors that they could no longer consider security an afterthought and had to be mindful of it from the moment they first begin to design their products, in the same way software makers learned to do long ago. Second, it attracted the attention of law enforcement, which quickly sprang into action with guidelines intended to prevent a repeat of this new breed of IoT Denial of Service outages. After Mirai, the FBI, the Department of Homeland Security, and the National Institute of Standards and Technology (NIST) all weighed in with checklists of best practices to keep the IoT safe. The variety and breadth of IoT applications and devices, and their range of constraints, suggest several possible implementations of those best practices; the guidance recommends weighing security threats when designing an IoT device and using the most relevant and cost-effective approach.



These recommendations can be implemented by vendors via the suite of security-related products from STMicroelectronics and others. For example, ST's new STSAFE-A100 is essentially a design plug-in that secures communications and provides built-in authentication, key provisioning, and CC EAL5+-certified security for IoT devices.

The responses from government agencies to the recent attacks were aimed at increasing vendors' and IoT developers' awareness of the nature of the IoT security threat. The FBI, for example, issued a Private Industry Notification that brought readers up to date on the crisis. It noted that computer security researchers have recently identified "dozens of new malware variants targeting Linux operating systems," those systems being a target because of their popularity in the IoT world. "Most of the Linux malware variants scan the Internet for IoT devices that accept Telnet, which is used to log into a device remotely, and try to connect to vulnerable devices by using brute force attacks with common default login credentials." Once that is accomplished, the FBI said, the computer can be added to a "botnet" collection of similarly hacked devices, which troublemakers can then use to target websites.


The attacks are likely to continue, the Bureau said, "due to the open availability of the malware source codes for targeting IoT devices and insufficient IoT device security." There are no confirmed suspects in the attacks, the FBI reported.

The FBI issued a number of recommendations, including that websites develop a mitigation strategy ahead of time. For IoT device owners, it recommended disabling some of the features used by the malware for the initial infection, especially the "Universal Plug and Play" and "remote management" options in a router. It also suggested that the ports used as vectors of the recent attacks—23 and 2323—be either filtered or blocked by the system's firewall.
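On a Linux-based router or gateway, that last recommendation can be expressed directly as firewall rules. A minimal sketch using iptables, assuming no legitimate service listens on these Telnet ports:

# Drop inbound TCP connections to the Telnet ports abused by Mirai
iptables -A INPUT -p tcp --dport 23 -j DROP
iptables -A INPUT -p tcp --dport 2323 -j DROP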


For its part, the Department of Homeland Security issued a set of Six Strategic Principles for Securing the Internet of Things. The DHS paper told much the same story as the FBI's and was aimed mainly at vendors. The first principle was for IoT vendors to incorporate security at the design phase, and it suggested including hardware-based security in the design to strengthen the protection and integrity of the device. That is exactly what the STSAFE-TPM Trusted Platform Module (TPM) from STMicroelectronics does. The STSAFE-TPM is a secure chip compliant with the ISO/IEC 11889:2015 international open standard developed by the Trusted Computing Group (TCG). The notion that security needs to be made paramount was a theme in the other five principles of the DHS paper as well. They included recommendations that companies build on proven security practices, and that connections be made to the internet "carefully and deliberately."

The NIST paper, "Systems Security Engineering: Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems," was, at 242 pages, the longest and most technical of the lot. While released at the time of the Mirai news, the document had actually been in the works at the agency for four years. It describes some of the specific steps that engineers need to take to make sure that their IoT devices are as secure as possible and thus less susceptible to malware attacks. Or, in the words of the report, it's a guide to "the engineering-driven perspective and actions necessary to develop more defensible and survivable systems, inclusive of the machine, physical, and human components that compose the systems and the capabilities and services delivered by those systems."

A common theme in all three reports is that IoT vendors need to be vastly more mindful of security-related issues as they design their products. But owing to skill-set issues, this has been easier said than done. For all of the engineering talent in-house at IoT companies selling, for example, webcams, little of it is typically in the domain of security. That is where vendors like ST have begun to jump into the fray, playing an important role by providing turnkey security-related products like its STSAFE lines that allow OEMs to tackle the user features that represent their actual strength. And with a broad range of useful development tools built around the powerful STM32 Nucleo development platform, prototyping, testing, and even vetting new applications for security vulnerabilities is easier than ever. These efforts should help ensure a safer and more secure IoT. PE


Electronics 101

By Dr. Saleh Faruque

Modulation is a technique that changes the characteristics of the carrier frequency in accordance with the input signal [1, 2]. Figure 1 shows the conceptual block diagram of a modern wireless communication system. As shown in the figure, modulation is performed at the transmit side and demodulation is performed at the receive side.



Figure 1. Block diagram of a modern full-duplex communication system.


This is the final stage of any radio communication system. The preceding two stages have been discussed elaborately in my previous books in this series [3, 4]. The output signal of the modulator, referred to as the modulated signal, is fed into the antenna for propagation.

The antenna is a reciprocal device that transmits and receives the modulated carrier frequency. The size of the antenna depends on the wavelength (λ) of the sinusoidal wave, where λ = c/f meters, c is the velocity of light (3×10^8 m/s), and f is the frequency of the sinusoidal wave, also known as the "carrier frequency." Therefore, a carrier frequency much higher than the input signal is required to keep the size of the antenna within an acceptable limit; for example, at f = 100 MHz, λ = 3 m, so even a quarter-wavelength antenna is a manageable 75 cm. For these reasons, a high-frequency carrier signal is used in the modulation process. In this process, the low-frequency input signal changes the characteristics of the high-frequency carrier in a certain manner, depending on the modulation technique.

Furthermore, as the size and speed of digital data networks continue to expand, bandwidth efficiency becomes increasingly important. This is especially true for broadband communication, where digital signal processing is done keeping in mind the available bandwidth resources. Hence, modulation is a very important step in the transmission of information. The information can be either analog or digital, where the carrier is a high-frequency sinusoidal waveform. As stated earlier, the input signal (analog or digital) changes the characteristics of the carrier waveform. There are two basic modulation schemes:
• Modulation by analog signals
• Modulation by digital signals

Modulation by Analog Signals

For analog signals, there are three well-known modulation techniques:




• Amplitude Modulation (AM)
• Frequency Modulation (FM)
• Phase Modulation (PM)
Each of these modulation techniques is briefly presented below with illustrations.

Amplitude Modulation (AM)


In Amplitude Modulation (AM), as shown in Figure 2, the audio waveform changes the amplitude of the carrier to determine the envelope of the modulated carrier. This enables the receiver to extract the audio signal by demodulation. Notice that the amplitude of the carrier changes in accordance with the input signal, while the frequency of the carrier does not change after modulation. It can be shown that the modulated carrier S(t) contains several spectral components, requiring frequency-domain analysis. It may be noted that AM is vulnerable to signal amplitude fading.

Figure 2. Modulation by analog signal: amplitude, frequency, and phase modulation.
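As a standard textbook supplement (not taken from the article itself), a single-tone AM signal can be expanded to show those spectral components explicitly:

$$S(t) = A_c\left[1 + m\cos(2\pi f_m t)\right]\cos(2\pi f_c t) = A_c\cos(2\pi f_c t) + \frac{mA_c}{2}\cos\left[2\pi(f_c - f_m)t\right] + \frac{mA_c}{2}\cos\left[2\pi(f_c + f_m)t\right]$$

Here $m$ is the modulation index, $f_c$ the carrier frequency, and $f_m$ the modulating frequency; the last two terms are the lower and upper sidebands discussed in the bandwidth section below.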



Frequency Modulation (FM)
In Frequency Modulation (FM), the frequency of the carrier changes in accordance with the input modulating signal, as shown in Figure 2 [1, 7]. Notice that in FM, only the frequency changes while the amplitude remains the same. Unlike AM, FM is more robust against signal amplitude fading. For this reason, FM is more attractive for commercial FM radio. It can be shown that in FM, the modulated carrier contains an infinite number of sidebands due to modulation [1]. For this reason, FM is also bandwidth inefficient.

Phase Modulation (PM)
Similarly, in Phase Modulation (PM), the phase of the carrier changes in accordance with the input modulating signal, while the amplitude of the carrier does not change. PM is closely related to FM. In fact, FM is derived from the rate of change of the phase of the carrier frequency. Both FM and PM belong to the same mathematical family [1].

AM and FM Bandwidth at a Glance
The bandwidth occupied by the modulated signal depends on the bandwidth of the input signal and the modulation method, as shown in Figure 3. Note that the unmodulated carrier itself has zero bandwidth.

In AM:
• The modulated carrier has two sidebands (upper and lower).
• Total bandwidth = 2 × baseband.

In FM:
• The carrier frequency shifts back and forth from the nominal frequency by Δf, where Δf is the frequency deviation.
• During this process, the modulated carrier creates an infinite number of spectral components, where the higher-order spectral components are negligible.
• The approximate FM bandwidth is given by Carson's Rule: FM BW = 2f(1 + β), where f = baseband frequency, β = modulation index = Δf/f, and Δf = frequency deviation.
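As a quick worked example (standard commercial FM broadcast numbers, not drawn from the article): with a frequency deviation Δf = 75 kHz and a maximum baseband frequency f = 15 kHz, the modulation index is β = 75/15 = 5, and Carson's Rule gives BW = 2 × 15 kHz × (1 + 5) = 180 kHz, consistent with the 200-kHz channel spacing of the FM broadcast band.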




Figure 3. Bandwidth occupancy in AM, FM, and PM signals.

Modulation by Digital Signal [1]
For digital signals, there are several modulation techniques available. The three main digital modulation techniques are:
• Amplitude Shift Keying (ASK)
• Frequency Shift Keying (FSK)
• Phase Shift Keying (PSK)
Figure 4 illustrates the modulated waveforms for an input modulating digital signal. Brief descriptions of each of these digital modulation techniques, along with their respective spectral responses and bandwidths, are presented below.



Figure 4. Modulation by digital signal: ASK, FSK, and PSK.


Amplitude Shift Keying (ASK) Modulation
Amplitude shift keying (ASK), also known as on-off keying (OOK), is a method of digital modulation in which the amplitude of the carrier is shifted in accordance with the input signal [1, 8, 10]. The signal to be modulated and transmitted is binary; this is referred to as binary ASK, where the amplitude of the carrier changes in discrete levels, in accordance with the input signal, as shown below:
• Binary 0 (Bit 0): Amplitude = Low
• Binary 1 (Bit 1): Amplitude = High
Figure 4 shows the ASK-modulated waveform, where:




• The input digital signal is the information we want to transmit.
• The carrier is the radio frequency without modulation.
• The output is the ASK-modulated carrier, which has two amplitudes corresponding to the binary input signal. For binary 1, the carrier is ON; for binary 0, the carrier is OFF (although a small residual signal may remain due to noise, interference, etc.).

Frequency Shift Keying (FSK) Modulation
Frequency shift keying (FSK) is a method of digital modulation in which the frequency of the carrier is shifted in accordance with the input signal [1, 8, 10]. The signal to be modulated and transmitted is binary; this is referred to as binary FSK (BFSK), where the carrier frequency changes in discrete levels, in accordance with the input signal given below:
• Binary 0 (Bit 0): Frequency = f + Δf
• Binary 1 (Bit 1): Frequency = f – Δf
Figure 4 shows the FSK-modulated waveform, where:
• The input digital signal is the information we want to transmit.
• The carrier is the radio frequency without modulation.
• The output is the FSK-modulated carrier, which has two frequencies, ω1 and ω2, corresponding to binary 0 and binary 1, respectively.

Phase Shift Keying (PSK) Modulation
Phase shift keying (PSK) is a method of digital modulation in which the phase of the carrier represents the digital signal [1, 8, 10]. The signal to be modulated and transmitted is binary; this is referred to as binary PSK (BPSK), where the phase of the carrier changes in discrete levels, in accordance with the input signal as shown below:
• Binary 0 (Bit 0): Phase1 = 0 deg.
• Binary 1 (Bit 1): Phase2 = 180 deg.
Figure 4 shows the modulated waveform, where:
• The input digital signal is the information we want to transmit.
• The carrier is the radio frequency without modulation.
• The output is the BPSK-modulated carrier, which has two phases, φ1 and φ2, corresponding to the two information bits.
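To make the three keying schemes concrete, here is a short JavaScript sketch (ours, not the book's) that generates sampled ASK, FSK, and BPSK waveforms from a bit array; the sample rate, carrier frequency, and deviation in the usage line are arbitrary illustration values:

// Generate sampled ASK, FSK, or BPSK waveforms for a bit array.
function modulate(bits, scheme, fs, fc, df, bitRate) {
  var samplesPerBit = Math.floor(fs / bitRate);
  var out = [];
  for (var i = 0; i < bits.length; i++) {
    for (var n = 0; n < samplesPerBit; n++) {
      var t = (i * samplesPerBit + n) / fs;
      if (scheme === 'ASK') {        // carrier ON for bit 1, OFF for bit 0
        out.push(bits[i] ? Math.sin(2 * Math.PI * fc * t) : 0);
      } else if (scheme === 'FSK') { // f + df for bit 0, f - df for bit 1
        var f = bits[i] ? fc - df : fc + df;
        out.push(Math.sin(2 * Math.PI * f * t));
      } else {                       // BPSK: 0 deg for bit 0, 180 deg for bit 1
        out.push(Math.sin(2 * Math.PI * fc * t + (bits[i] ? Math.PI : 0)));
      }
    }
  }
  return out;
}

// Example: 1 kHz carrier sampled at 8 kHz, 100 bit/s, 200 Hz FSK deviation
var ask = modulate([1, 0, 1, 1], 'ASK', 8000, 1000, 200, 100);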




Bandwidth Occupancy in Digital Modulation

ASK Bandwidth at a Glance
In ASK, the amplitude of the carrier changes in discrete levels, in accordance with the input signal. The input is a digital signal, and it contains an infinite number of harmonically related sinusoidal waveforms (according to the Fourier transform); since we keep the fundamental and filter out the higher-order components, the bandwidth is determined by the spectral response of the input modulating signal, centered around the carrier frequency. This is shown in Figure 5. Notice that the spectral response after ASK modulation is the shifted version of the NRZ data spectrum. The bandwidth is given by BW = 2Rb Hz, where Rb is the input bit rate; a 10-kbit/s input stream, for example, occupies about 20 kHz centered on the carrier.

Figure 5. ASK bandwidth at a glance. (a) Spectral response of NRZ data before modulation. (b) Spectral response of the carrier before modulation. (c) Spectral response of the carrier after modulation. The transmission bandwidth is 2fb, where fb is the bit rate and T = 1/fb is the bit duration for NRZ data.


FSK Bandwidth at a Glance
In FSK, the frequency of the carrier changes in two discrete levels, in accordance with the input signal. This is depicted in Figure 6. Notice that the carrier frequency after FSK modulation varies back and forth from the nominal frequency fc by ±Δfc, where Δfc is the frequency deviation. The FSK bandwidth is given by BW = 2fb(1 + Δfc/fb) = 2fb(1 + β) Hz, where β = Δfc/fb is known as the modulation index and fb is the coded bit frequency (bit rate Rb).

Figure 6. FSK bandwidth at a glance. (a) Spectral response of NRZ data before modulation. (b) Spectral response of the carrier before modulation. (c) Spectral response of the carrier after modulation. The transmission bandwidth is 2(fb + Δfc), where fb is the bit rate, Δfc is the frequency deviation, and T = 1/fb is the bit duration for NRZ data.

BPSK Bandwidth at a Glance

In BPSK, the phase of the carrier changes in two discrete levels, in accordance with the input signal. Figure 7 shows the spectral response of the BPSK modulator. Since there are two phases, the carrier phase changes in two discrete levels, one bit per phase, as shown in the figure. Notice that the spectral response after BPSK modulation is the shifted version of the NRZ data spectrum, centered on the carrier frequency fc. The transmission bandwidth is given by BW(BPSK) = 2Rb / (bits per phase) = 2Rb/1 = 2Rb Hz, where Rb is the bit rate (bit frequency); BPSK has two phases and carries one bit per phase. Also notice that the BPSK bandwidth is the same as in ASK modulation. This is due to the fact that the phase of the carrier changes in two discrete levels, while the frequency remains the same.


Figure 7. BPSK bandwidth at a glance. (a) Spectral response of NRZ data before modulation. (b) Spectral response of the carrier before modulation. (c) Spectral response of the carrier after modulation.


Conclusions

This article provides a brief overview of the modulation techniques described in Radio Frequency Modulation Made Easy (Springer Briefs in Electrical and Computer Engineering) [1]. Numerous illustrations are used to bring readers up to date on key concepts, underlying principles, and practical applications of various analog and digital modulation techniques. In particular, the following topics are briefly presented in this article:
• Amplitude Modulation (AM)
• Frequency Modulation (FM)
• Amplitude Shift Keying (ASK)
• Frequency Shift Keying (FSK)
• Phase Shift Keying (PSK)
• Bandwidth occupancy of each of these modulation techniques

References
1. Faruque, Saleh, Radio Frequency Modulation Made Easy (Springer Briefs in Electrical and Computer Engineering), 2016, 1st Ed. ISBN-13: 978-3319412009, ISBN-10: 3319412000.
2. Faruque, Saleh, Radio Frequency Propagation Made Easy (Springer Briefs in Electrical and Computer Engineering), 1st Ed. ISBN-13: 978-3319113937, ISBN-10: 3319113933.
3. Faruque, Saleh, Radio Frequency Source Coding Made Easy (Springer Briefs in Electrical and Computer Engineering), 2015, 1st Ed. ISBN-13: 978-3319156088, ISBN-10: 331915608X.
4. Faruque, Saleh, Radio Frequency Channel Coding Made Easy (Springer Briefs in Electrical and Computer Engineering), 2016, 1st Ed. ISBN-13: 978-3319211695, ISBN-10: 3319211692.
5. Marconi, Guglielmo, British patent No. 12,039 (1897), Improvements in Transmitting Electrical Impulses and Signals, and in Apparatus Therefor. Date of application 2 June 1896; complete specification left 2 March 1897; accepted 2 July 1897.
6. Marconi, Guglielmo, British patent No. 7,777 (1900), Improvements in Apparatus for Wireless Telegraphy. Date of application 26 April 1900; complete specification left 25 February 1901; accepted 13 April 1901.
7. Armstrong, E. H. (May 1936), "A Method of Reducing Disturbances in Radio Signaling by a System of Frequency Modulation," Proceedings of the IRE 24 (5): 689–740. doi:10.1109/JRPROC.1936.227383.
8. Smith, David R., Digital Transmission Systems, Van Nostrand Reinhold Co., 1985. ISBN: 0442009178.
9. Couch, Leon W. II, Digital and Analog Communication Systems, 7th Ed., Prentice-Hall, Inc., Englewood Cliffs, NJ, 2001. ISBN: 0-13-142492-0.
10. Sklar, Bernard, Digital Communications: Fundamentals and Applications, Prentice Hall, 1988.
11. Fourier, J. B. Joseph; Freeman, Alexander, translator (1878), The Analytical Theory of Heat, The University Press.

Author’s note: This article is a brief overview of various modulation techniques covered in the book Radio Frequency Modulation Made Easy. If my readers are benefited by this contribution, I shall be amply rewarded. PE

About the Author



Saleh Faruque is a professor in the Department of Electrical Engineering at the University of North Dakota. This article draws from his book Radio Frequency Modulation Made Easy (Springer Briefs in Electrical and Computer Engineering), 2016, 1st Ed. ISBN-13: 978-3319412009, ISBN-10: 3319412000. This is the fourth volume in a six-book series on wireless communications. These books are derived from Dr. Faruque's class lectures in communications engineering (EE411) and wireless communications (EE512), and are intended as supplements to recommended texts for engineering students and professionals in wireless communications. The book includes numerous illustrations to bring readers up to date on key concepts, underlying principles, and practical applications of various analog and digital modulation techniques.



Electronics 101

Publisher's Note: To complement Dr. Saleh Faruque's feature story providing an introduction to RF modulation, as well as Casey Spencer's feature, Teslapathic, which employs pulse position modulation (PPM), we dug into our archives at Popular Electronics and discovered this terrific introduction to pulse modulation, of which PPM is a variant. In this article, author Herbert Kondo addresses the basics of each type of pulse modulation, as well as their applications. The context of his time, October 1960, adds a fascinating spin.

Pulse Modulation
October 1960
BY HERBERT KONDO

This Exciting Method of Communication Is Reaching Out Beyond the Frontiers of Space

From Satellite 1959-delta the message came loud and clear: a huge belt of electrons circles the planet Earth thousands of miles out in space. Our 1959-delta had further jolting news: the outer Van Allen radiation belt, once thought to expand after a solar eruption, actually shrinks. Even more striking was the news that there is a huge interplanetary "atom smasher" centered about the sun. Satellite 1959-delta, commonly known as "Explorer VI," had a lot more to say. But how it said it is just as interesting as what it said. A great deal of Explorer VI's information was sent by a five-watt transmitter that used pulse modulation, the most sophisticated modulation system known today. So important is this new communications system that it is already used for telegraphy, radar, multi-channel microwave transmission, and telemetry, as well as space communications.



Basic Theory

Figure 1. Original signal amplitude of pulses (A) is affected by noise in transmission (B). Electronic clipping restores original signal (C).


The idea of pulse modulation has been around a long time. In telegraphy, the familiar "dots and dashes" of the Morse code are pulses produced with a switch or key. Ham operators have long been using a form of pulse modulation when they key their high-frequency transmitters to send out pulses of electromagnetic energy in code. Television servicemen come across a form of pulse modulation in the gated-beam tube.

The principle behind the pulse modulation system is actually ridiculously simple: information is impressed on a train of pulses instead of directly on a continuous-wave carrier. But if it's as simple as that, why all the excitement about it? What does pulse modulation have that more familiar forms of modulation—AM and FM—don't have?

For one thing, pulse modulation offers practically noise-free transmission and reception—even more so than FM. To visualize this concept, let's consider a train of ideal pulses—pulses with vertical sides, as shown in (A) of Figure 1. Noise is picked up during transmission, resulting in the waveshape shown in (B). With suitable clipping and limiting circuits, we can reproduce only that part of the pulse signal between the dotted lines, as shown in (C). Having done this, we can then re-transmit this new signal free of noise.

Pulse modulation has another outstanding advantage. It uses transmitter energy more efficiently than either AM or FM because of the simple "on-off" nature of the pulses. This means that a pulse transmitter will have a longer range than an AM transmitter of the same power.

All pulse modulation systems boil down to two basic principles: (1) a message signal modulates a train of pulses, which are applied to a subcarrier; (2) the subcarrier then modulates a high-frequency carrier. The relation of a subcarrier to a carrier can be made clear by an analogy. Let's suppose that there are five messenger boys on the same subway train in New York City. Each boy is carrying a message to a different destination (receiver).




If we think of the subway as the carrier, then each messenger boy is a subcarrier. The message each boy carries is the modulated signal.

Sampling
The most important idea in pulse modulation is sampling, a concept which we come across almost every day. For example, if you've never heard a stereophonic recording, you can listen to a "stereo sample" record and get a good idea of what stereo is like. Another widely known use of sampling is the public-opinion poll, which bases its findings on selective sampling techniques. If we want to transmit a conversation by pulse modulation, we take samples of the conversation—thousands of samples each second—and then transmit them in the same order in which they were spoken. Each pulse is actually a single sample; its height, width, or position indicates the instantaneous value of the sound sent. For good reproduction, it has been shown that the number of samples per second must be greater than twice the highest frequency of the signal we wish to send. Thus, if the highest frequency in a telephone conversation is 4000 cps, we must take at least 8000 samples each second.

Figure 2. Information contained in the modulating signal in (A) is shown as it would appear using the various pulse transmission methods (B through F). Binary numbers corresponding to signal amplitudes can be transmitted in the PCM system.


Types of Modulation
Another basic concept in pulse modulation is the modulation itself. When we modulate a carrier wave, we ordinarily alter its amplitude (AM), its frequency (FM), or its phase (PM). The nice thing about a pulse is that there's another characteristic we can use for modulation, namely, time. If we alter the timing of the pulses, we are effectively changing their position relative to one another—this is actually done in pulse position modulation (PPM). In pulse width modulation (PWM), we alter the width of the pulses; in pulse frequency modulation (PFM), the frequency of the pulses changes. We can also alter the amplitude of the pulses to produce pulse amplitude modulation (PAM). And we can even code the pulses, as is done in pulse code modulation (PCM).




Let's take a closer look at all of these pulse modulation techniques and find out how a sine wave—see Figure 2(A)—is transmitted in each system. Later, we'll see how pulse width modulation and pulse code modulation are used in transmissions from satellites and in multi-channel telephone communications.

PPM
Pulse position modulation, widely used in radar and in microwave relays, depends on a modulating signal varying the position of the pulses. A separate generator produces a series of marker pulses which act as reference points. With PPM, the relative positions of the signal pulse and the marker pulse are important, as shown in Figure 2(B).

PWM
In pulse width modulation, the width or duration of the pulses varies directly in accordance with the modulating signal, as shown in Figure 2(C). Also known as pulse duration modulation (PDM), PWM varies either the leading or the trailing edges, or perhaps even both edges, of the pulses. For example, if the leading edges of the pulses were spaced at equal time intervals, the trailing edges could then be varied (displaced in time) in accordance with the amplitude of the modulating signal. Since pulse width modulation requires relatively simple circuitry, it is the ideal type of pulse modulation for use in outer space vehicles.
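As a present-day illustration (ours, not the original article's), trailing-edge PWM of the kind described above can be sketched in a few lines of JavaScript: each sample maps to a pulse whose width is proportional to its amplitude.

// Encode samples (normalized 0..1) as pulse widths within fixed-length slots.
// The pulse's leading edge is fixed at the slot start, and its width
// (the trailing edge position) tracks the sample amplitude.
function pwmEncode(samples, slotLen) {
  var out = [];
  samples.forEach(function (s) {
    var width = Math.round(s * slotLen);
    for (var n = 0; n < slotLen; n++) {
      out.push(n < width ? 1 : 0);
    }
  });
  return out;
}

console.log(pwmEncode([0.25, 0.75], 8).join(''));
// -> "1100000011111100"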

PFM
Pulse frequency modulation is somewhat similar to ordinary FM, except that the basic carrier consists of equally spaced pulses rather than a sine wave. The rate of occurrence of the pulses varies with the amplitude of the modulating signal, as in Figure 2(D).

PAM
In pulse amplitude modulation, the height of the pulses varies directly in accordance with the modulating signal, much like the amplitude modulation of a continuous-wave (c.w.) carrier. In Figure 2(E), the positive-going portion of a sine wave increases the height of the pulse train, while the negative-going portion of the signal decreases the height.




PCM
Pulse code modulation uses the presence or absence of a pulse to convey information. In the sample shown in Figure 2(F), the code makes use of a group of four positions, which may be "filled" with either a pulse or a space (absence of a pulse).

PWM in Outer Space

Figure 3. Satellites can send a number of messages over a single transmitter by sampling each signal with a rotating commutator, then converting the sampled information to PWM signals for transmission to Earth.


If we were to make a block diagram of the telemetry system used in the Vanguard rocket, it would break down into the five simple blocks shown in Figure 3. (See "Telemetering—Vital Link to the Stars," in the November 1959 issue of Popular Electronics for a complete discussion of telemetry.) In Figure 3, a rotating sampling switch—called a commutator—samples a number of contacts which are connected to devices that measure outer space data (cosmic and ultraviolet rays, X-rays, etc.). Information from the contacts is then sent to the keyer, which triggers a one-shot multivibrator (itself a special type of PWM generator). With this arrangement, the multivibrator produces pulse signals whose width varies in accordance with the information (voltage) supplied to it by the commutator and keyer. The PWM signals are fed to the oscillator, which modulates the transmitter that sends satellite performance information to earthbound receiving stations.

"Explorer I," which discovered the Van Allen radiation belt, also used pulse width modulation. The initial output of the cosmic ray channel, which carried the Van Allen radiation information, was a pulse width signal which then frequency-modulated a subcarrier oscillator. The subcarrier, in turn, phase-modulated the carrier of the satellite's transmitter. This rather complex sequence of modulation techniques also occurred on the cosmic dust transmissions from Explorer I.

PCM in Communications
Of all forms of pulse modulation, the most exciting is pulse code modulation.




Says a one-time Bell Telephone Laboratories scientist: "It's the most sophisticated communication technique around. It has the advantage of an extremely high signal-to-noise ratio, plus the added element of secrecy. PCM is statistical in nature, and it's hard to jam any statistical communication system—the less predictable the system, the harder it is to design electronic countermeasures against it."

Suppose you bought a VTVM kit for $29.17, tax included. If a friend asked you how much you paid for it, you might tell him that it cost $30.00. Would you be lying? Not at all—you are perfectly justified in rounding off the numbers to the nearest easily remembered figure. People are doing this sort of thing all the time. The same technique is used in pulse code modulation. For example, if the amplitude of the signal we wish to send is 4.7 volts, PCM would send it through as 5 volts; if the signal amplitude is 2.37 volts, PCM would transmit it as 2 volts. This simplification is necessary because the signal has to be coded, and the code uses only whole numbers.

Let's suppose we want to send the signal shown in Figure 4(A). Sampling pulses sense the amplitude of the signal to be transmitted. Pulse A, which has a value of 3.2 volts, is changed to an amplitude of 3 volts, as shown in Figure 4(B). Pulse B, which has a value of 3.8 volts, is changed to an amplitude of 4 volts. This process of simplifying the original signal in terms of whole numbers is called quantizing the signal; the result is known as a quantized signal—see Figure 4(B).

Once the signal is quantized, it must be coded for transmission (hence the name, pulse code modulation). For this, the binary code is used (see "The Language of Digital Computers," Popular Electronics, January 1958, p. 68). Each quantized pulse representing the amplitude of the signal at a given point must be changed into a group of pulses in the PCM binary code. Always keep in mind this distinction between the quantized pulse and the pulse group: the quantized pulse is a sampling pulse, whose value is determined by its amplitude; the pulse group represents the original signal in binary language.

In a binary pulse group, only the presence or absence of a pulse has meaning. If the code is a three-pulse group, as shown in Figure 4(C), then the far-right position has a value of 1 if a pulse is present, or 0 if the pulse is absent. The middle position would have double the first position's value, or 2, if a pulse were present, but would again have a value of 0 if there were no pulse.




The far-left position would have double the value of the middle position, or 4, if a pulse were present, but a value of 0 if no pulse were there. Suppose our quantized pulse has a value of 3. Then, in a three-pulse binary code, there would be a pulse in the far-right (1) and middle (2) positions only (1 + 2 = 3). If the quantized pulse has a value of 7, then all three pulses in the group would be needed (1 + 2 + 4 = 7). With a three-pulse binary group, we can send out the waveshape shown in Figure 4(B) using any of seven values. For greater "fidelity" in reproducing the waveshape, we would need a larger number of samples, and larger binary pulse groups would be required. A five-pulse group, for example, gives 32 different amplitudes; a seven-pulse group gives 128 different amplitudes. The binary-coded signal is ultimately fed to an r.f. transmitter, which is turned alternately on and off by the binary pulses.
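The quantize-then-code procedure maps directly to a few lines of JavaScript (a modern sketch of the scheme the article describes, using a 3-bit group as in Figure 4):

// Quantize analog sample values to whole numbers, then emit each as a
// fixed-width binary pulse group (most significant pulse on the left).
function pcmEncode(samples, bitsPerGroup) {
  return samples.map(function (v) {
    var q = Math.round(v);                            // quantize: 3.2 -> 3, 3.8 -> 4
    return q.toString(2).padStart(bitsPerGroup, '0'); // code: 3 -> "011"
  });
}

console.log(pcmEncode([3.2, 3.8, 7.0], 3)); // -> [ '011', '100', '111' ]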

Figure 4. In the PCM system, the amplitude of the actual signal (A) is sampled at regular intervals. The samples are rounded off to whole-number pulse amplitudes—a quantized signal (B)—and then converted to binary numbers. The binary code chart (C) gives the decimal value of the binary numbers.

Multiplexing and PCM
Bell Telephone Laboratories has many plans for pulse code modulation. For example, they envision a 24-voice-channel PCM telephone system which would allow 24 people to talk at the same time over a single line. If you've had any experience with present-day "party lines," you know it's impossible for two people to talk over the same line at the same time. How, then, can 24 people do it? The answer is multiplexing, a kind of sampling technique. The type used in telephony is time-division multiplexing.

Let's consider a case where six people are sharing a single telephone line. Three of them are talking in city A, and three are listening in city B. By means of a rotating commutator in city A, each speaker is rapidly hooked up to the line in succession. At the same time, a second commutator in city B, synchronized with the commutator in city A, samples the line and distributes each speaker's voice to the intended listener in city B. It's possible to have as many as 176 simultaneous conversations over a single line using PCM.

Multiplexing, incidentally, is the method used by earth satellites to transmit different types of information back to Earth. Instead of hooking up 24 talkers in sequence, we can hook up 24 transducers which give information about temperature, cosmic ray density, magnetic field strength, etc. Each transducer modulates a subcarrier oscillator, which in turn modulates the regular high-frequency carrier. Both time-multiplexing and PCM were used in Explorer VI.

PCM offers great possibilities as a television transmission system, and Bell Labs is actively at work on this idea also. In microwave radio, PCM promises practically interference-free transmission. And since a PCM signal is easily applied to magnetic tape, it is ideal for missile and satellite telemetering as well. Compared to other forms of pulse modulation, PCM has the sole disadvantage of a wider bandwidth requirement. But as telemetry systems move from the lower megacycle bands to the 2200-mc. region, this disadvantage becomes less and less important.
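The rotating-commutator idea is, in modern terms, sample interleaving. A compact JavaScript sketch of time-division multiplexing (our restatement, not the article's): samples from several channels are interleaved onto one line and pulled apart again by a synchronized "commutator" at the far end.

// Interleave one sample from each channel per revolution of the commutator.
function tdmMultiplex(channels) {
  var line = [];
  var len = channels[0].length;
  for (var t = 0; t < len; t++) {
    channels.forEach(function (ch) { line.push(ch[t]); });
  }
  return line;
}

// The receiving commutator, synchronized to the same channel count,
// distributes samples back to the intended listeners.
function tdmDemultiplex(line, numChannels) {
  var channels = [];
  for (var c = 0; c < numChannels; c++) channels.push([]);
  line.forEach(function (sample, i) {
    channels[i % numChannels].push(sample);
  });
  return channels;
}

var line = tdmMultiplex([['a1', 'a2'], ['b1', 'b2'], ['c1', 'c2']]);
// line = ['a1','b1','c1','a2','b2','c2']
console.log(tdmDemultiplex(line, 3)); // -> [['a1','a2'],['b1','b2'],['c1','c2']]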

An Exciting Future
Pulse modulation is no longer just theory—it is a reality. Young as it is, pulse modulation is the giant behind the front-page news of space exploration. As we explore the frontiers of outer space, and as we search for ways to improve and increase the information-handling capacity of our existing communications systems, it becomes increasingly evident that pulse modulation is one of the most exciting developments of modern electronics. PE

About the Author Herbert Kondo, 1924 – 2012, served four years in the Army during World War II, graduated from Florida State College, majoring in physics, and later earned a master’s degree in history at the University of Chicago. He worked as Senior Science Editor for MacMillan Company and then was editor-in-chief, Science Department, at Grolier/Scholastic Publishing. His books include Adventures in Space & Time—The Story of Relativity (Holiday House, New York, 1966), Albert Einstein and the Theory of Relativity (Franklin Watts, Inc., New York, 1969), and The Moon (Franklin Watts, Inc., New York, 1971). In addition to his contributions to Popular Electronics, his articles on the history of science were published in Scientific American and other scientific journals.




Programmable Logic

Introducing viciLogic
By Fearghal Morgan

Initiatives such as Codecademy, Khan Academy, CoderDojo, Microsoft TEALS, Arduino, Raspberry Pi, and micro:bit have stimulated a great deal of interest and activity in computer science and embedded systems application programming. Engaging and enabling a global community in coding and processor applications is helping to meet the growing demand for more programmers and engineers.

viciLogic Online Learning and Prototyping Platform
Without digital silicon chip technology, application programming would not be possible. The viciLogic team posed the question: "Is it possible to interest and engage the community in how their amazing silicon chips are designed, and how the digital logic hardware 'inside' processors executes program instructions? Is it possible to do so online, and is it possible to even enable hardware prototyping online?"




The team then developed viciLogic, an online directed-learning and hardware prototyping platform for digital logic chip design.

viciLogic learning—how does it work?
To use viciLogic, just register and log on at vicilogic.com. viciLogic transparently connects users to real, state-of-the-art Xilinx and Intel integrated circuit devices in the cloud. The user's internet browser reaches across the web to control and sense signals inside the connected silicon chip in real time, and uses this signal-state information to animate and visualize real chip behavior in the browser. viciLogic online course lessons direct users to learn by doing with real hardware interaction. viciLogic is a very effective complement to traditional digital systems learning approaches.

viciLab Remote Hardware Prototyping and Graphical UI Creation. Watch the video here.

viciLogic prototyping in the cloud—how does it work?
viciLogic enables users to prototype their own hardware in the cloud and to create an interactive visual console that connects to the operating hardware design. The prototyping application uses free industry-standard design and implementation tools provided by Xilinx and Intel.



The learning and prototyping platform focuses on practical digital chip design, using hardware description language coding to describe the chip logic, with conversion to hardware performed by modern industry-level tools. viciLogic reduces the focus on older approaches that are no longer applied in modern chip design.

Communities who can benefit from looking inside chips with viciLogic
Making the learning of digital logic chip design accessible, with practical hands-on activity, can help develop logic and programming skills, as well as problem-solving skills. viciLogic can support university and industry-level courses, ranging from fundamental to advanced digital systems design, all with related project prototyping. viciLogic can also provide high school teachers with online structured mastery of fundamental digital logic design concepts, enabling and supporting their introduction in the classroom with learn-by-doing practice. Knowledge of digital logic chip design and applications can better inform high school students in potential engineering or computer science career choices.

Some benefits of looking inside chips with viciLogic. Watch the video here.

Processors: The whole is greater than the sum of its components
At the core of computers are digital processors, which can contain millions of electronic circuit logic components working together to perform amazingly fast logical operations on digital data. Each fundamental component has a purpose, e.g., to count, add, transfer data to/from memory, etc.



The synergy resulting from combining components to perform a significant task is common in many areas. viciLogic users learn practically about the behavior of each of the fundamental digital logic building block components and the synergy resulting from connecting components to build more powerful and amazing systems such as processors and applications. An analogy to the processor is a group of connected individual musicians comprising an orchestra, each contributing a vital component to something significantly more amazing than their individual performances. Illustrating the analogy further, a processor (orchestra) enables a programmer (conductor) to instruct a network of digital components (musicians) using a sequence of programming instructions (music score) to execute a significant task (performance). With a new set of instructions (score), the processor (orchestra) can execute (perform) a wide range of programs (music compositions).

Can we benefit from making knowledge of digital chip technology more accessible?
Improving knowledge of "what's inside" a silicon chip can benefit players in the programming and engineering communities. Some argue that it is not necessary to understand the detail of how a processor works in order to become an accomplished programmer.


Others argue that the significant focus on computer science and programming (using the instruction as the fundamental building block) is obscuring the amazing world of chip design and the technology "inside." The viciLogic team believes that potential opportunities and benefits can arise for those who have a better understanding of the underlying processor technology. The viciLogic online learning and prototyping platform, with hardware in the cloud, offers programmers, engineers, and students the choice of getting that look inside.

Getting started with viciLogic
To find out more, register on vicilogic.com and select GET STARTED, or get in touch at info@vicilogic.com. PE

ABOUT THE AUTHOR
Fearghal Morgan is a lecturer/researcher in Electrical & Electronic Engineering at the National University of Ireland, Galway (NUI Galway). He spent seven years as a design engineer at Digital Equipment Corporation. Fearghal's research and teaching interests include online learning and prototyping of reconfigurable computing (FPGA and System on Chip) applications, and hardware neural network applications. In 2009, he was awarded the NUI Galway President's Award for Excellence in Teaching and Learning.


Prototyping

The next wave of electronics manufacturing is here, and it involves using specialty 3D printers and advanced materials to build fully functional electronic parts.

By Simon Fried




Advances being made today in this burgeoning new category of printers will eventually lead to capabilities for the 3D printing of complete functional devices that contain embedded electronics.

In a state-of-the-art facility tucked amid a technology-rich suburb just south of Tel Aviv, Nano Dimension has created the next wave of 3D-printing technologies. The company's electrical engineering and design teams share the building with its lab-coated nano-particle ink specialists and 3D software engineers, and together they have created the world's first 3D printer for professional printed electronics.

3D printing, also known as additive manufacturing, has been available in various forms for nearly 30 years. During that time, but most particularly in the last decade, it has revolutionized product design and manufacturing methodology by using 3D-printing solutions to produce prototypes and custom parts in a fraction of the time required by traditional subtractive manufacturing. This innovative printing format has introduced new levels of efficiency and productivity to support increased innovation and generate new revenue.

The benefits of 3D printing:
■ Improved time to market by avoiding outsourced manufacturing delays.
■ Reduced product development cost by enabling all teams to work simultaneously on all design projects, maximizing resource utilization and improving overall productivity.
■ Improved design quality by enabling more iterative design processes, and so avoiding costly changes to designs.

With the ability to turn out a prototype or end-use part in a matter of hours, 3D printing is becoming an essential product development technology and is designated as the next industrial revolution. But what traditional 3D printing hasn't been able to do is make fully functional electronic parts. The breakthrough at Nano Dimension, the DragonFly 2020 3D Printer, has created significant buzz among electronics engineers because, by using this high-resolution inkjet 3D printer and advanced nano materials, developers can create—layer by layer, in one system—electrically functional objects. In these early days, additive manufacturing for printed electronics is primarily used for prototyping printed circuit boards (PCBs), antennas, and experimental electronic circuits. Designers and engineers use additive manufacturing solutions to print and test their designs, make modifications, and reprint again as needed.


This new genre of 3D printers is aimed squarely at the growing demand for electronic devices that require increasingly sophisticated features and rely on multilayer printed circuit boards. Demand for circuitry, including PCBs—which are at the heart of every electronic device—covers a diverse range of industries, including consumer electronics, medical devices, defense, aerospace, automotive, IoT, and telecom. These sectors can all benefit from Nano Dimension’s 3D-printed electronics solutions for rapid prototyping and short-run manufacturing.

Top Challenges in Electronics Design

One of the reasons the DragonFly 3D Printer is resonating with manufacturers is that it addresses the primary challenge of designers, engineers, prototyping teams, and CEOs: transforming a concept into a market-ready functional product quickly and cost-effectively.

The electronics industry is exceedingly competitive and has been for years. Meanwhile, product complexity is continuously rising in line with the trend toward smaller packages, thinner devices, and higher functionality. To remain competitive, companies must develop more innovative products while also understanding that time to market and product lifetimes are constantly shrinking.

In a survey conducted by the Aberdeen Group in October 2015, increased product complexity was the main concern of PCB designers, while improved time to market was identified as a primary business objective, ahead of the need to reduce product cost and improve product quality. Time to market includes the time spent sending a PCB out for concept validation and rapid prototyping and waiting for it to come back. The Aberdeen Group survey showed that electronics design and development companies need product development cycles that are shorter, more agile, and efficient. In addition, decisions need to be made in the preliminary stages of the product development cycle to avoid costly mistakes and rework further down the path.

The DragonFly 2020 3D Printer for the production of professional PCBs works with conductive and dielectric materials.


Eliminating Risks

The DragonFly 2020 3D Printer was purpose-built to replace the time-consuming PCB development process and to reduce the risk of sending designs out-of-house to prototyping facilities. PCB prototypes are critical to the development of almost all electronic products because they permit physical boards to be tested for conductivity, shape, and functionality under actual operating conditions.

In the world of electronics, making professional PCB prototypes using traditional manufacturing techniques and outsourcing is time-consuming. Couple that with increasing demands for more advanced printed circuit boards, including higher layer counts and smaller, lighter boards for applications such as IoT, smartphones, medical devices, and military equipment, and the problems become clear. But there is also a very long list of things that must be done right to get from concept to shipping a hardware product. As technical complexity increases, so do prototyping costs and delays.

Producing a multilayer circuit is a tedious, multistage process. It involves milling, drilling, film transfer, and plating machines; copper etching baths; and a press. Assembly adds setup time, and any added complexity or higher layer count brings significant cost increases and delays. This long manufacturing process may be efficient for producing large numbers of boards. However, when producing a single prototype board or a small-volume order, it is very expensive, since the entire process typically must be carried out from start to finish.

Delays and IP Risks

With today’s electronics, most PCB prototypes are produced by traditional subtractive manufacturing methods, often by overseas vendors (most of which are in Asia) that typically require several weeks of lead time. The original equipment provider contracts with the prototype manufacturer and then waits for the board to be produced, shipped, and cleared through customs before it is delivered for testing.


In this typical scenario, standard turnaround times are generally two weeks, although multilayer PCB prototypes can often be produced in less time for a substantial “rush fee.” Costs to build a prototype of a multilayer board are high, and many prototype board manufacturers specify a minimum order quantity of 10, which often exceeds the number of boards needed for testing. Adding together shipping costs, logistical delays, and import duties, the cost of getting one multilayer PCB prototype from an external service provider may equal the cost of 100 units, and delivery can take anywhere from a few days to three weeks, depending on the location of the supplier. Moreover, when design complexity rises, the PCB manufacturing time grows exponentially. That’s because the higher process complexity requires more stages and machinery, often stretching the lead time to months.

It’s easy to see why these traditional techniques make iterative design and testing very challenging. In many cases, in the race to develop new product designs quickly, the first PCB design doesn’t work perfectly. That translates into several iterations of the board. But with time and money at a premium, the odds of failure increase because there are not enough time and resources to adequately test more than a few design iterations when a PCB is outsourced. So even after the PCB prototype has been produced and tested, problems might be discovered that require design revisions. When that happens, the PCB is refabricated, further increasing the lead time and cost for each product. Even a tiny mistake in design or a poor circuit can lead to product recalls and other quality problems that can seriously hurt a product’s sales and a vendor’s reputation.

Another thing that can’t be overlooked is the concern companies have about sending valuable intellectual property (IP) to an overseas supplier, which is especially acute for companies that work in sensitive industries like defense. To assuage that worry, companies that make products with national security considerations often minimize the risk of confidential IP falling into the wrong hands by limiting the work to independent service providers with the required level of security clearance. Not surprisingly, this “security-cleared vendor” process typically adds considerably to both the cost and the delivery time of producing prototypes.

3D-printed object combining dielectric and conductive materials, replacing traditional MID processes.


The In-House Printing Advantage

Printing prototypes in-house in a matter of hours helps companies cut PCB design and test cycles from months or weeks to days or even hours. Moreover, the freedom of printing on demand gives development teams more leeway to introduce new and innovative development processes at every prototyping stage, from concept verification to design validation to functional performance—all of which helps reduce time to market and increase innovation.

Because it is an advanced printing platform, the DragonFly allows designers and engineers to develop different sections of a circuit in parallel—and they can test, fail fast, and iterate on the fly to accelerate creativity and improve quality. The result is reduced overall development time, improved innovation, fewer development risks, and less rework. Further, companies can bring better electronics to market faster without exceeding budget, while keeping sensitive design information in-house.

Beyond PCBs, the DragonFly 2020 can also produce non-planar 3D objects that contain 3D circuitry, moving 3D printing beyond the traditional electronics format of multilayer flat circuits to electrically functional structures for a wide range of development, custom, and small-scale production projects. The application possibilities are broad, including flexible and rigid PCBs and embedded components.

What this all means is that additive manufacturing offers designers more freedom in their designs, accelerates the design and manufacturing process, and increases the efficiency of producing customized products. That said, volume production will likely continue through traditional channels for the foreseeable future.

How it Works

Integrating high-resolution inkjet technology and advanced nano inks, the DragonFly 3D printer features X, Y, and Z axes—for width, depth, and height.

PCB Design Layer by Nano Dimension.

The printer’s extremely precise deposition system has advanced inkjet print heads with hundreds of small nozzles that allow for the simultaneous 3D printing of silver nanoparticle conductive inks (metals) and insulating inks (polymers). Critically, the printer sets new standards for accuracy, complexity, and speed in the fields of 3D printing and professional electronics development. After 3D printing on the DragonFly, there is no need for post-processing.

The printer works with Nano Dimension’s dedicated proprietary Switch software, which converts complex 2D renderings and schematics, such as Gerber files, into layer-by-layer print instructions for 3D printing. The Gerber file is loaded into the printer’s interface, and the system then automatically calculates the ink drop placement. The software enables a full range of PCBs, including interconnections, through-holes, and complex geometries, to be printed—without etching, drilling, plating, or waste—to produce professional multilayer PCBs within hours. The breakthrough of multi-material 3D printing also allows designers and engineers to print polymers and metals together for a specific functional goal, such as meeting demands for compact, stronger, and smarter electronics.
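To make the design-to-droplets idea concrete, here is a purely conceptual sketch of expanding per-layer board geometry into an ordered list of print passes. This is not Nano Dimension’s Switch software; every type, field, and number below is invented for illustration.

```python
# Conceptual sketch only (not Switch): expand a board stackup into
# ordered conductive/dielectric print passes. All values are invented.
from dataclasses import dataclass

@dataclass
class CopperLayer:
    traces: list          # 2D conductor geometry from a Gerber-style file
    dielectric_mm: float  # insulating build height above this layer

def to_print_passes(stackup, drop_mm=0.01):
    """Turn each PCB layer into one conductive pass plus the dielectric
    fill passes needed to reach that layer's build height."""
    passes = []
    for z, layer in enumerate(stackup):
        passes.append({"z": z, "ink": "silver", "geometry": layer.traces})
        for _ in range(round(layer.dielectric_mm / drop_mm)):
            passes.append({"z": z, "ink": "dielectric", "geometry": "fill"})
    return passes

board = [CopperLayer(traces=["net1", "net2"], dielectric_mm=0.1)] * 4
print(len(to_print_passes(board)))  # 4 x (1 conductive + 10 dielectric) = 44
```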

About the Inks

The DragonFly uses Nano Dimension’s AgCite™ Conductive Inks, advanced silver nanoparticle inks designed specifically for the printer. The inks are customizable for various applications. The nanoparticle sizes and distributions of the silver particles are optimized for the printing of highly conductive PCB traces. The dielectric ink used in conjunction with the silver inks mimics industry-standard FR4 and creates insulating barriers that support the optimal performance of the conductive inks. Importantly, these dielectric inks are a key enabler of printing the entire PCB structure.

Multilayer PCB printed on the DragonFly 2020 3D Printer.


The Printing Process

The DragonFly 2020 3D Printer deposits AgCite silver conductive ink and the FR4-like dielectric material to build a complete circuit or multilayer PCB layer by layer. Starting from a sacrificial substrate, the materials are built up from the underside conductive traces to finish with the topside conductors or pads. Vias are built up, drop by drop, as blind, open, or complete vias, with conductive traces that are about 100 microns (3.94 mil) in width. Because the process is additive, vias are easily printed as filled vias. The DragonFly makes several passes to create the width and thickness required for one line. An integrated process cures and sinters the inks during the build to achieve the desired shape, conductivity, and adhesion. Plated and non-plated through-holes are created by repeatedly leaving a space at a particular XY coordinate, thereby building the surrounding materials up around a void. The dielectric ends up as a solid piece within which the conductive traces are positioned at the precise XYZ coordinates specified. Upon completion, a multilayer PCB 3D printed on a DragonFly 2020 3D Printer can be soldered using low-temperature solder.

There is no limit to the layer count that can be printed on the DragonFly beyond the mechanical height of the printer’s Z axis. The speed of a print depends on the number of layers, the complexity and conductivity of the circuits, and the board size. For example, a large, complex 10-layer board can be 3D printed overnight.

[Figure: Nano Dimension PCB production flow: Gerber files → Switch → print job → print driver → DragonFly.]

The ability to 3D print from Gerber files enables designers and engineers to prepare PCB jobs for printing on the DragonFly. Nano Dimension’s Switch software (job editor) accepts the most common file formats in the electronics industry, Gerber and Excellon, thus streamlining the journey from design to functional object. Once the design is set for printing, the user simply loads the design file straight to the printer. Multilayer 3D files can be edited and prepared from standard files. Parameters including layer order and thickness can be adjusted through the software as well.

What the DragonFly 2020 Enables

Proof-of-Concept • Early-stage proof-of-concept circuits are the first step in demonstrating feasibility and reaching the product design solution faster. With the DragonFly 2020 3D Printer, engineers can create a proof-of-concept, fully functional prototype overnight rather than in weeks, and even have the time to conduct multiple iterations of other prototype designs. This is ideal for presenting new ideas to key stakeholders, for quickly determining whether the concept is technically feasible, and for discovering areas for improvement very early in the design process.

Design Validation • Early and accurate design validation is also important to see whether a design performs to its goals and specifications. It’s also crucial for pinpointing critical issues early in the design cycle. The DragonFly 2020 3D Printer creates functional circuits that incorporate conductive and dielectric materials with very fast turnaround, making it possible for engineers to compare several circuit designs quickly and work efficiently and accurately. In addition, 3D-printed electronics foster a design and engineering culture of fast and frequent design iterations, resulting in better products. In general, the more complex the board, the higher the cost benefit of 3D printing: a typical 4”x4” 10-layer board costs around $450 through traditional manufacturing vs. $150–$250 on the DragonFly 3D Printer.

Agile Hardware Development • Agile hardware development means that design teams can get a comprehensive and intuitive review of the product at the end of each phase, saving time and money and improving the end product. This is especially important in hardware electronics development, where traditional manufacturing techniques have meant that lead times are too long and too expensive to permit numerous design iterations.

The ability to create multiple iterations shortens development cycles and helps reveal potential flaws earlier. With the DragonFly 2020 3D Printer, the turnaround time for producing circuit prototypes at the end of each phase is quick (depending on the design, size, and complexity), and it is independent of internal bureaucracy and external factors.

Low-Volume Production • Additive manufacturing processes can be used to produce PCBs at low cost and with ultra-quick turnaround for small production volumes. This enables print-on-demand and customized circuits for small, flexible production lines and end-user products.

What’s Next?

Over the coming years, users should expect a variety of advancements and increasing affordability in traditional 3D printing, but the more exciting space to watch will be 3D printing for industry. What’s on the horizon? Faster printing speeds, higher-resolution printing, larger print sizes, and the capability to incorporate multiple types of materials, including the ability to mix more materials while printing a single object. We can expect to see a broader range of advanced materials, and potentially higher-quality materials, to provide additional functions to the final parts. We should also anticipate advances in combining materials: 3D printers that use metals and polymers, or metals and ceramics, in the same print job, for example, will make it possible to build electrical capabilities into mechanical objects.

We’ll also see a greater convergence of 3D printing and electronics, eventually leading to 3D printers that produce fully functional objects incorporating many modules, such as embedded electronics and sensors, disrupting, reshaping, and redefining the future of how electronics are made. PE

About the Author Simon Fried is Co-founder and Chief Business Officer at Nano Dimension where he leads business development, marketing, and product management. He holds an MBA from SDA Bocconi & HEC Paris, an MSc in Behavioral Economics from Oxford University, and a BSc in Risk & Choice from University College London.



Machine Learning

Are you a citizen data scientist, or do you plan to become one? If so, you have quite an adventure ahead of you. Like all your data science peers, you will discover a few challenges as you begin to practice your art. Two of the foremost difficulties involve creating accurate machine learning models, and then being able to explain those models’ prediction results. By Gang Luo and John Schroeter


These are the two big issues we take on in this article, as we present two recent advancements that make the machine learning process automatic and transparent. But first, what exactly is a machine learning model? Simply stated, a model is an algorithmic construct that produces an output when given an input. A model is the product of training a machine learning algorithm on a training dataset.

[Figure: A high-level view of a typical machine learning architecture (“Who does what?”): Examples feed a Learner that sets the Model’s Parameters; the Model turns an Input into a Prediction, which is compared against the Truth. Schematic concept courtesy of Google.]

The training data are the key, as they contain the “right answers” that the machine learning model will reference as it makes its predictions when presented with new data. Now here is the rub: building an accurate model is a time-consuming, iterative process involving a rather tedious sequence of
steps. The folks at Amazon Web Services summarize these steps as follows:

1 ■ Frame the core machine learning problem in terms of what is observed and what answer one wants the model to predict.
2 ■ Collect, clean, and prepare data to make them suitable for consumption by model training algorithms. Visualize and analyze the data to conduct sanity checks of data quality and to understand the data.
3 ■ Often, the raw data (input variables) are not represented in a way that can be used to train an accurate model. In this case, one should try to construct more predictive input representations or features from the raw input variables.
4 ■ Test many model configurations iteratively. Each configuration involves a machine learning algorithm and a specific setting of its tuning knobs. For each configuration, feed the features and training data to the learning algorithm to train a model. Evaluate model quality on data held out from model training. Select the best model configuration from the many tested ones. The corresponding model becomes the final model.
5 ■ Use the final model to make predictions on new data instances.

Step 4 involves model selection, a fundamental task of scientific inquiry. Not all model configurations are created equal. Given the myriad mechanisms and processes of data generation, how can one select a good model configuration that can result in accurate predictions on incoming new data? As it stands today, comparing different algorithms and settings of their many tuning “knobs” is a trial-and-error process. The knobs one tunes on a machine learning algorithm are called “hyperparameters.”

Let us put this in the context of a deep neural network. The network designer must make many decisions on hyperparameter values prior to model training. In a convolutional neural network, examples of such decisions include: for each convolutional layer, how many features will be used, and how many pixels are contained in each feature? For each pooling layer, what window size and stride will the model use in traversing an input image? For each layer type, e.g., pooling layer, how many layers of this type will the model include? In what order will the layers be arranged? Keep in mind that some networks can have hundreds or even more than one thousand layers.
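Before digging further into those hyperparameter choices, here is a minimal sketch of steps 1 through 5 on a toy problem. It assumes Python with scikit-learn and its bundled breast cancer dataset; neither the library, the dataset, nor the model choice comes from the article.

```python
# Steps 1-5 on a toy problem (illustrative assumptions throughout).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Steps 1-2: frame a binary classification problem and load prepared data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Step 3: build a more predictive input representation (here, scaling).
# Step 4: train one candidate configuration, then evaluate it on data
# held out from model training.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Step 5: use the (selected) final model on new data instances.
print(model.predict(X_test[:3]))
```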


Other examples of hyperparameters include the choice of kernel used in a support vector machine and the number of neighbors k in a k-nearest neighbor classifier. With several dozen commonly used machine learning algorithms, so many hyperparameters, and numerous possible values of those hyperparameters, one can reasonably expect to test only a small fraction of all possible configurations. In other words, unless you are very lucky, you will likely be settling for less than the best possible model configuration and accuracy. What is more, each time you train an algorithm on a training dataset with different settings of hyperparameter values, you obtain a different model. Moreover, good combinations of algorithms and hyperparameter values vary by the specific modeling problem and are unknown beforehand. The combination must be specified before model training starts and needs to be iteratively refined to find one producing an accurate model. Indeed, hyperparameters control many aspects of an algorithm’s behavior and also impact use of computing resources.
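To get a feel for how quickly the space grows, here is a back-of-the-envelope count over a small, invented menu of algorithms and knob values; every name and value below is illustrative, not a recommendation.

```python
from itertools import product

# A small, invented configuration space: 3 algorithms, 2 knobs each.
search_space = {
    "svm":           {"kernel": ["linear", "rbf", "poly"],
                      "C": [0.1, 1, 10, 100]},
    "knn":           {"n_neighbors": [1, 3, 5, 11, 25],
                      "weights": ["uniform", "distance"]},
    "random_forest": {"n_estimators": [100, 300, 1000],
                      "max_depth": [None, 5, 20]},
}

total = 0
for algo, knobs in search_space.items():
    configs = list(product(*knobs.values()))  # cross product of knob values
    print(f"{algo}: {len(configs)} configurations")
    total += len(configs)
print("total:", total)  # 12 + 10 + 9 = 31 already, with only 2 knobs each
```

With dozens of algorithms and knobs that take continuous values, the full space is effectively unbounded, which is why only a small fraction of it can ever be tested.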

[Figure: Typical deep neural network layers: repeated convolution + nonlinearity and max pooling stages, followed by fully connected layers (N× binary classification) producing class scores such as (0.94) (0.01) (0.03) (0.02).]

The Model Building Process

Given a specific modeling problem, the user first selects a machine learning algorithm and sets a value for each of its hyperparameters. Then the user trains the model on a training dataset. The model’s accuracy will likely be low at this stage. This brings us to an important question: what kind of error rate can your application tolerate? For most applications, accuracy matters a great deal.


To improve results, the user changes the algorithm or its hyperparameter values and re-trains the model. The user can expect to repeat this process for several hundred or even several thousand iterations to obtain a model with satisfactory accuracy. By now, you should have gotten the idea that model selection is a big problem. The selection process is not only labor intensive, but also requires a level of machine learning expertise generally beyond that of a lay user. In fact, the selection process can be difficult even for machine learning experts. So what to do?
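Here is a hedged sketch of that trial-and-error loop, again assuming scikit-learn (which the article does not prescribe); the knob values are arbitrary, and a real search would run far more trials.

```python
# Manual model selection compressed into a loop: pick hyperparameter
# values, train, evaluate, keep the best so far, repeat.
import random
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
best_score, best_config = 0.0, None
for trial in range(25):  # a real search may take hundreds or thousands
    config = {
        "n_estimators": random.choice([50, 100, 300]),
        "max_depth": random.choice([None, 3, 10]),
        "max_features": random.choice(["sqrt", "log2"]),
    }
    score = cross_val_score(RandomForestClassifier(**config), X, y, cv=3).mean()
    if score > best_score:
        best_score, best_config = score, config
print(f"best so far: {best_score:.3f} with {best_config}")
```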

Efficiently Automating Machine Learning Model Selection

To overcome this difficulty, we recently developed an automatic method to quickly find a good combination of a machine learning algorithm, feature selection technique, and hyperparameter values for a given modeling problem when many algorithms and techniques are considered. Model selection is fundamentally a search problem. Our key idea is to use system techniques like sampling and caching to improve search efficiency. We notice that it is time-consuming to test a combination on the whole training dataset. In one example, it took two days on a modern computer to train an award-winning model just once on 9,948 patients with 133 features. Yet, having rough estimates of multiple combinations’ potentials is sufficient for guiding the search direction. To estimate a combination’s potential, there is no need to train a model on the whole dataset to completion with full precision. Instead, we can process a sample of the dataset, conduct lower-precision computations, and perform fewer iterations of going through the training set, without waiting for all parameter values of the model to fully converge.

To this end, and to expedite the search process, we perform progressive sampling, filtering, and fine-tuning to quickly reduce the search space. We perform fast trials on a small sample of the dataset to eliminate unpromising combinations. We then expand the sample, test and fine-tune combinations, and progressively shrink the search space over several rounds. In the last round, we narrow down to a small number of combinations and choose the final combination from them.

[Figure: The relationship between the training sample and search space in our automatic model selection method: the training sample grows each round as the search space shrinks.]
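The following sketch captures the flavor of this progressive sampling and filtering; it is not the authors’ actual implementation. The helper quick_score(config, n_rows) is hypothetical: it is assumed to train a candidate cheaply on an n_rows-row sample (lower precision, fewer passes) and return a rough accuracy estimate.

```python
# A rough sketch of progressive sampling + filtering, under the stated
# assumptions. `candidates` is a list of model configurations.
def progressive_search(candidates, quick_score, total_rows, rounds=4):
    sample = max(1, total_rows // (2 ** rounds))
    survivors = list(candidates)
    for _ in range(rounds):
        # Fast, low-precision trials at the current sample size.
        survivors.sort(key=lambda c: quick_score(c, sample), reverse=True)
        survivors = survivors[: max(1, len(survivors) // 2)]  # drop weak half
        sample = min(total_rows, sample * 2)  # expand the sample each round
    # Last round: only a few combinations remain; pick the final one.
    return survivors[0]
```

Halving the candidate pool while doubling the sample each round keeps the total work bounded, which is the essence of why such a search stays cheap.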

How effective is our method? Compared to a state-of-the-art automatic model selection method on 27 benchmark datasets, on average our method cut search time by 28-fold and classification error rate by 11%. On each of these datasets, our method can finish the search process in 12 hours or less on a single computer. Not bad!

Next, let us move on to the second conundrum in machine learning: understanding models’ prediction results. People have raised many concerns regarding machine learning and its impact on society. A particularly serious concern involves most machine learning models’ lack of transparency. A model can be inscrutable. If the means by which a model makes a prediction is concealed, how can the prediction result be trusted? One cannot simply look under the hood of a deep neural network model to see how the model operates. The model’s decision-making process is hidden in an indecipherable network of interconnected layers, each with possibly thousands of nodes. In other words, the model is a black box.

Predictive modeling is a key component of solutions to many healthcare problems. Among all predictive modeling approaches, machine learning methods often achieve the highest prediction accuracy. But, as most machine learning models give no explanation for their prediction results, their deployment is hindered. For a model to be adopted in typical healthcare settings, interpretability of its prediction results is essential. Giving explanations can help identify root causes for bad outcomes and guide targeted preventive interventions.


This concern over interpretability is not limited to healthcare. The lack of transparency of machine learning models raises new challenges for myriad social issues, for example, in ensuring non-discrimination, due process, and understandability in decision-making. FAT/ML—the Fairness, Accountability, and Transparency in Machine Learning organization—has pointed out that “... policymakers, regulators, and advocates have expressed fears about the potentially discriminatory impact of machine learning, with many calling for further technical research into the dangers of inadvertently encoding bias into automated decisions.” For all these reasons, the European Union has adopted a law that grants citizens the right to obtain an explanation for algorithmic decisions that significantly affect them. When this law becomes effective in May 2018, significant penalties for noncompliance could ensue for large internet, credit card, and insurance companies that routinely use black-box machine learning models for purposes such as personalized recommendations, computational advertising, and credit and insurance risk assessments.

Automatically Explaining Machine Learning Models’ Prediction Results

To overcome this difficulty of non-transparency, we recently developed a method to automatically explain the prediction results of any machine learning model with no accuracy loss. Our method uses a data mining technique to give explanations. We observe that prediction accuracy and giving explanations of prediction results are frequently conflicting objectives. Typically, a model achieving high accuracy is a complex black box, whereas a model that is easy to understand, such as a decision tree, achieves low accuracy. It is difficult to obtain a model that achieves high accuracy and is also easy to understand.

Our key innovation is to separate explanation from prediction by using two models concurrently, each for a different purpose. The first model makes predictions to maximize accuracy. The second uses class-based association rules mined from historical data to explain the first model’s results. Everybody understands rules. For each data instance whose dependent variable is predicted by the first model to take an interesting value, the second model shows zero or more rules. Each rule gives a reason why the data instance’s dependent variable is predicted to have that value. In many applications, the first model’s end users do not need to understand the model’s internal workings. Rather, the end users only see the first model’s predictions and only need to know the reasons for those predictions. The rules provided by the second model supply those explanations.

How does this work in practice? We demonstrated our method on predicting type 2 diabetes diagnoses in adults. An example rule: if, in the past three years, the patient had ≥5 diagnoses of hypertension AND prescriptions of statins AND ≥11 doctor visits, then the patient is likely to have a type 2 diabetes diagnosis in the next year. Our method explained the prediction results for 87% of the patients whom the first model correctly predicted to have a type 2 diabetes diagnosis in the next year.
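A minimal sketch of the two-model idea follows. The opaque predictor, the hand-written rules, and the synthetic features (hypertension diagnosis count, statin flag, visit count) are all illustrative; the authors mine their class-based association rules automatically from historical data rather than writing them by hand.

```python
# Two models: an accurate black box for prediction, readable rules for
# explanation. Data, features, and rules are synthetic/illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

HTN_DX, ON_STATINS, VISITS = 0, 1, 2  # hypothetical feature columns
rng = np.random.default_rng(0)
X = rng.integers(0, 15, size=(500, 3)).astype(float)
X[:, ON_STATINS] = rng.integers(0, 2, size=500)               # binary flag
y = ((X[:, HTN_DX] >= 5) & (X[:, VISITS] >= 11)).astype(int)  # toy labels

# Model 1: accurate but opaque predictor.
predictor = GradientBoostingClassifier().fit(X, y)

# Model 2: human-readable class-based rules (hand-written here).
rules = [
    ("past 3 yrs: >=5 hypertension dx AND statins AND >=11 visits",
     lambda x: x[HTN_DX] >= 5 and x[ON_STATINS] == 1 and x[VISITS] >= 11),
    ("past 3 yrs: >=5 hypertension dx AND >=11 visits",
     lambda x: x[HTN_DX] >= 5 and x[VISITS] >= 11),
]

def explain(x):
    # Return every rule that fires for one data instance.
    return [name for name, cond in rules if cond(x)]

for x in X[:20]:
    if predictor.predict(x.reshape(1, -1))[0] == 1:  # "interesting" value
        print("predicted positive ->", explain(x) or ["no rule fired"])
```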

Summary

We have surveyed two of the foremost difficulties faced by machine learning model developers: selecting a machine learning algorithm and its hyperparameter values, and bringing transparency to the resulting model’s prediction results. We have also presented novel solutions to both challenges. Taken together, our methods bring high efficiency to machine learning model selection and provide automatically generated explanations for any model’s prediction results. These innovations make machine learning more accessible for critical application development, particularly where machine learning expertise is scarce, and help fulfill upcoming regulatory requirements. PE

To Learn More

See the two full-text articles by Gang Luo et al. at:
- http://www.researchprotocols.org/2017/8/e175/
- https://link.springer.com/article/10.1186/s13755-016-0015-4

About the Authors

GANG LUO is an associate professor in the Department of Biomedical Informatics and Medical Education at the University of Washington, Seattle, WA. He previously worked at IBM T.J. Watson Research Center and the University of Utah.


He received the BSc degree in computer science from Shanghai Jiaotong University, Shanghai, China, and a Ph.D. degree in computer science from the University of Wisconsin-Madison. To learn more about his work, visit his homepage at http://pages.cs.wisc.edu/~gangluo/.

JOHN SCHROETER is publisher at TechnicaCuriosa.com, the home of Popular Electronics, Mechanix Illustrated, and Popular Astronomy magazines. He also consults in the field of high-performance computing for deep learning applications.

To learn more about machine learning, convolutional networks, and other applications of artificial intelligence, check out the Machine Learning Channel at TechnicaCuriosa.com for the following free resources:

INTRODUCTION TO CONVOLUTIONAL NEURAL NETWORKS By Brandon Rohrer

AN INTRODUCTION TO COMPUTER VISION A Five-part Tutorial on Deep Learning By Andrej Karpathy

MACHINE LEARNING PROJECTS: A STEP BY STEP APPROACH By Rodolfo Bonnin

MACHINE LEARNING WITH TENSORFLOW Linear Regression and Beyond By Nishant Shukla

A DARPA PERSPECTIVE ON ARTIFICIAL INTELLIGENCE By Dr. John Launchbury

HUMAN-MACHINE INTERACTION GETS EMOTIONAL By John Schroeter



We hope you enjoyed this special edition of Popular Electronics.

We’d love to hear your thoughts and learn more about what you’d like to see in future issues. Contact us here. In the meantime, please subscribe—it’s FREE. And while you’re at it, check out our other titles.

Stay up to date with fresh articles, events, and new eBooks by visiting and bookmarking www.TechnicaCuriosa.com. Please share this news with your friends, associates, and fellow electronics enthusiasts.


