Technophilic Magazine -- Winter 2012



[ Contributors ]
Vitalijs Arkulinskis, Surabhi Joshi, Alexander Kunev, Denis Maniti, Manosij Majumdar, Michael Spivack, Joseph Vybihal

[ Interviewees ]
Dr. Eric D. Green, Dr. Tony Chan Carusone, Veronika Zlatkina

[ Center Spread ]
Photography: Steven Daivasagaya
HDR Processing: David Daivasagaya

Contents
ENG. + BIOLOGY: Don't Collaborate; Work Together (p. 3)
Q&A: Dr. Eric D. Green (p. 4)
Evolution is Just a Theory (p. 6)
Microprocessors: Intel vs. ARM (p. 7)
HTC Raider 4G Review (p. 8)
HTML Matures Again (p. 10)
McGill Energy Dashboard (p. 12)
Q&A: Dr. Tony Chan Carusone (p. 14)
Q&A: Veronika Zlatkina (p. 16)
In Praise of Resonators (p. 18)
How Apple and Steve Jobs Designed the Future (p. 19)
The Russian Engineer (p. 20)
Comics (p. 23)

[ Advisory Board ]
Dr. David Lowther

[ Image Credits ]
flickr.com/lastquest/1408755246
flickr.com/kellbailey/1816125890
flickr.com/melalouise/4095616672
w3.org/html/logo

[ Sponsors ]
Engineering Undergraduate Society

[ Technophilic Magazine Inc. ]
Technophilic is published by Technophilic Magazine Inc. every semester for McGill's Engineering Undergraduate Society.
Robert Aboukhalil / Editor-in-Chief
Daisy Daivasagaya / Executive Editor
Jimmy E. Chan / Business Development
The opinions expressed herein reflect the opinions of their respective authors and may not reflect those of Technophilic Magazine Inc., McGill University or our advertisers.
ISSN 1925-816X



ENGINEERING + BIOLOGY

Don't Collaborate; Work Together
by Robert Aboukhalil

In their spare time, many engineers and biologists enjoy writing philosophical letters to editors of scientific journals, claiming that the age of collaboration and multidisciplinary research is upon us, even though we've been hearing this for much of the past 20 years.

In those letters, the authors express their concerns about the state of computational biology: "Engineers and biologists should collaborate more", they urge. And collaborate they must: the current state of affairs is such that many experiments in biology generate so much data that we aren't able to process it with any ease. Supposedly, there are oodles of science that we could learn from that data if only we had enough engineers and time, or computer cycles, to analyze it all. One example that comes to mind is sequencing DNA. Currently, sequencing your complete genome would cost you around $10,000 and several hundred gigabytes of disk space. That said, companies like 23andMe.com will gladly sequence a small chunk of your genome. For roughly $200, they will mail you a kit that you spit in and then mail back. They extract the DNA from the sample, analyze it and send you the results. However, they only look at certain regions of your genome known as Single Nucleotide Polymorphisms (SNPs, pronounced "snips"). These SNPs are locations in your genome where you find a mutated letter in the DNA sequence; such variations can be thought of as typos in your DNA.

While the price of sequencing a genome keeps decreasing (see our Q&A with NHGRI director Dr. Eric D. Green on the next page), the problems encountered with storage and data processing remain a big issue.

« Gather biologists and computer scientists in one building, supply unreasonable amounts of caffeine and have them collaborate to solve all our problems. »

Beyond that, the other issues that creep up include whether sequencing every human on the planet is feasible from the point of view of technology, whether it is useful in terms of the medical information we can reasonably extract from looking at sequence data, and whether it is even desirable in the first place, from the point of view of privacy.

But let's go back to our multidisciplinary collaborators. On the face of it, it seems like such a simple problem: Gather biologists and computer scientists in one building, supply unreasonable amounts of caffeine and have them collaborate to solve all our problems. However, a quick thought experiment would reveal that scenario to be ineffective. The much thrown-around idea that all we need to do is to build research facilities with floors that have both engineering and biology labs to increase interactions is wishful thinking. If the problem originates from differences in research culture, increasing the number of chance encounters in the building will get us nowhere.

Collaboration ≠ Working together

Working together does not mean having biologists conduct experiments and asking computer engineers to analyze the data later. It has been tried for years and has been the cause of much frustration and wasted time.

Biologists who want to plan an experiment correctly ought to have discussions with computer engineers before the experiment to have an idea of which experimental parameters will ensure significant results. Conversely, computer engineers cannot develop data analysis tools without understanding the biology behind the experiment. Otherwise, how could they possibly know about the caveats of an experiment and how those show up in the data? This fantasy world where biologists and computer scientists need only be near each other to foster an atmosphere of collaboration is becoming increasingly absurd ■

Photo: FLICKR.COM/MIKOLSKI/3269906279


ERIC D. GREEN

PHOTOGRAPH BY MAGGIE BARTLETT, NHGRI



Q&A

Eric D. Green

Dr. Green is the Director of the NIH's National Human Genome Research Institute (NHGRI). We caught up with Dr. Green at McGill's 2011 Human Genetics Graduate Student Research Day, where he gave the keynote presentation.

Scientists often speak of "sequencing the human genome". Is there really a single genome we can use as reference?

You used the key word 'reference': the Human Genome Project (HGP) was said to have sequenced the human genome. Really, the more accurate phrase that should've been used was: we created a reference sequence of the human genome. You shouldn't think of the product of the HGP as the sequence of a human being, because that's not actually true. In fact, what the HGP produced was a sequence of all human chromosomes—roughly 3 billion letters in total. But any given human being has 6 billion letters: 3 billion from mom, 3 billion from dad. So we have to distinguish a reference sequence, which is the hypothetical representation of the sequence of each human chromosome, from a personal genome sequence, which is the full representation of the two copies of each of your chromosomes.

Is the reference genome very different from our genomes?

In some ways, it is. A reference sequence only differs from the genome you got from your parents by about 1 in a 1000 bases, which means that the reference sequence represents what you'll find in the human species at about 99.9%. On the one hand, that's incredibly similar. But on the other hand, the richness of what we want to learn is in that 0.1%; that's what we're most interested in if we think about health and disease. Those differences are called genetic variants and they can confer risk for disease or give protective characteristics. So generating the human genome sequence is the HGP's attempt at providing a framework—a reference or starting point—for being able to understand sequence differences and correlate those to health, disease, drug response and so forth.

So we can't simply compare the reference to someone's genome and look for differences?

That's right. You don't want to ask the question "does my genome differ from the reference?" That's too simple of a question. The question you want to ask is "given those variants at this particular place in the genome, have they ever been seen before?" If so, how often have they been seen? We now have databases that not only list all the variants that exist but also tell us the frequency with which we see them. So just because you differ from the reference sequence doesn't mean anything.

« I don't think that much about the cost of genome sequencing [...] The challenge now really lies not in data generation but data analysis »

Is all this research happening because it's becoming cheaper to sequence?

A very important aspect is indeed the cost of sequencing that is dropping precipitously. The first human genome sequence cost us about 3 billion dollars—best $3 billion ever spent. Now the cost of sequencing your entire genome is on the order of $10,000, so we've gone from a billion to $10,000 in about 8 years. That's pretty good. But we're motivated to do this primarily because we know we have to: We can't just have 1 human genome sequence, we need a whole lot more.

Can we go down to $100?

We proposed $1,000 in 2003 and we thought we were crazy. We would love it to be cheaper and cheaper but the truth of the matter is that we shouldn't lose sight of where we are now. The $1,000 is very cheap compared to the cost of understanding what it actually means. I don't think that much about the cost of genome sequencing because I think we will eventually coast to the $1,000, or even $100, genome. That's not where the burden is. Right now, the grand challenge is understanding that sequence: if I handed you your genome sequence—the perfect complete 6 billion letters—you would have to invest a lot of money to understand it. As of now, we don't yet know how to interpret all that data, so the challenge now really lies not in data generation but data analysis.

How different are two genomes?

You and I roughly differ by 3 to 5 million single nucleotides, and the great majority of those are completely innocent—they have no phenotypic consequences. A small subset of those do, but we have very little knowledge about how to sift through them. We can make the list of variants but we're not yet at the point where we can identify which ones we need to focus on and what their effect on human health is. We now need to take these catalogues of variants and start attributing biological and clinical relevance to them—that's the next decade. Maybe people like you will help us figure it out.

Have we spotted some variants that are responsible for, say, cancer?

Sure, but we're still at the very tip of the iceberg. Cancer is a great example because there's a lot of action: It's very clear that we need to sequence (and are now sequencing) cancer genomes and cataloging things that come in, and here's why: You can take 100 tumor samples and analyze them under the microscope: They'll all look the same. But when you sequence their genomes, you might see that 50 of them tend to have a certain set of variants (i.e. a signature) and maybe the remaining 50 will have another signature. We might even correlate a signature with groups of people that respond poorly to therapy. It would be great if we knew this upfront because it means we wouldn't have to put poor responders through chemotherapy. Instead, we'd look for more appropriate treatments ■
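Dr. Green's point about asking "has this variant been seen before, and how often?" rather than "do I differ from the reference?" amounts to a frequency lookup. The sketch below is a toy illustration only; the variant IDs and frequencies are invented sample data, not drawn from a real catalogue such as dbSNP.

// A toy illustration of "has this variant been seen before, and how often?"
// The variant IDs and frequencies below are invented sample data.
const variantFrequencies = {
  'chr7:117559590:G>A': 0.21,    // common: seen in roughly 21% of genomes
  'chr17:43045711:T>C': 0.0001   // rare: seen in roughly 0.01% of genomes
};

function interpretVariant(id) {
  const freq = variantFrequencies[id];
  if (freq === undefined) {
    return id + ': never catalogued; flag for further study';
  }
  if (freq > 0.01) {
    return id + ': common (frequency ' + freq + '); unlikely to explain disease on its own';
  }
  return id + ': rare (frequency ' + freq + '); worth a closer look';
}

console.log(interpretVariant('chr7:117559590:G>A'));
console.log(interpretVariant('chr1:123456:A>T'));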



SCIENCE

Evolution Is Just a Theory
by Manosij Majumdar

As you read this, I urge you to conduct a little experiment. Acquire a bowl of M&Ms and get munching — with one condition. Do not eat the blue ones. Pretend they're healthy.

The Oxford English Dictionary presently defines about 600,000 words. It aspires to be a list of every single word in the English language, from chutney to kowtow to cromulent and from noob, tweet, and lol to verily, egads, and forsooth. Many of these words have dozens of meanings (‘set’ has 464). Complex fields of inquiry, like science, law and philosophy, invariably need extremely specific words to express their ideas, and they do so by picking up words that seem likely candidates and parsing fine meanings into them. The flipside is that as they develop these obscure lingos of their own, they become increasingly unrecognisable to the common man, and this gap in the meaning and implication of words ends up becoming a gap in communication and what is worse, a gap in understanding. This misunderstanding and miscommunication is more than a mere inconvenience. It is a danger. A danger to the integrity of science, a danger to the faithful dissemination of knowledge, and a catalyst of ignorance. Words matter. Meanings matter. Nuance matters. Context matters. I’d like to limit the discussion here to the definition of three words, which have all suffered by being lost in translation from jargon to common.

The first is 'science' itself. Science is not merely a body of knowledge. The body of knowledge is a consequence of science. "The earth goes 'round the sun" — is that sort of thing all there is to science? No. It's a scientific fact, but merely knowing that the earth goes 'round the sun doesn't make you an astronomer, any more than knowing that this animal is called a tapir or that fluffy bit of mist is called a cloud makes you a biologist or a meteorologist. Science is not only an assemblage of facts; it is a process. The facts are the output of that process. The only way one may claim to be a scientist is to do science. Science isn't the book; it's the act of writing it. It is a performance art.

The second is 'theory', and this is the most misunderstood, miscommunicated, misused, and dangerously muddled word in the modern-day discussion of science. The fact that so many people (especially on this continent) do not seem to know what the word means is a serious cause for anxiety. This endangers the Enlightenment and all that we have built in the last four hundred years. Either we will move forward or we will slide back into the superstition and despair of the Dark Ages. The battle for humanity and for truth will be won or lost on this one word. This is Helm's Deep.

I said before that science is a process. The scientific process or method is the following: observation, of a phenomenon, often natural, and involving the unbiased collection of data; hypothesis, an educated guess as to how the observed process or phenomenon is made possible; prediction, of how a system ought to behave, assuming the hypothesis; experimentation, an attempt to reproduce the observation in a controlled environment; theory, a working model of the phenomenon; and finally, refinement of the theory.

« A guess is not a theory; a guess is a hypothesis. You might say, in daily speech, "I have a theory" when you're guessing how something works but you aren't sure. Well, stop it. You don't have a theory. What you have is a hypothesis. You wish you had a theory. »

Now read that again. A guess is not a theory; a guess is a hypothesis. You might say, in daily speech, "I have a theory" when you're guessing how something works but you aren't sure. Well, stop it. You don't have a theory. What you have is a hypothesis. You wish you had a theory. A theory is a proven, working model, not a guess. The theory of relativity is not a guess (your GPS depends on it). The Big Bang, while impossible to observe or experiment on directly, is supported by an immense body of confirming evidence, including the expansion of the universe and cosmic background radiation patterns. The theory of evolution by natural selection is not a guess, or bacteria and insects would not become resistant to antibiotics and pesticides. PCB-resistant fishes were observed in the United States last year, and a Totally Drug Resistant strain of tuberculosis was reported in India only this January. When somebody says "evolution is just a theory", I say, "yes, and your point is...?"

And for the sake of completeness, theories are falsifiable. They wouldn't be scientific if this weren't the case. But a scientific theory can only be superseded by another scientific theory — a better explanation of how stuff works. A theory that is not falsifiable is dogma, and there is no place in science for dogma. (Biology does have something called the Central Dogma, a tongue-in-cheek name that vindicated itself when exceptions were discovered. Bon temps.)

The final word is 'selection'. When we speak of evolution by natural selection, the natural question is "who did the selecting?" 'Selection' in everyday speech is a conscious, purposeful, biased action. We might say with a cynical sneer, "he didn't get elected; he got selected". Except … not here. 'Selection' in the context of evolution is not a conscious act of picking winners or playing favourites. Selection happens as a result of not-dying. 'Natural selection' can be summed up as 'making it alive', and 'evolution by natural selection' could very well be called 'evolution by managing to breed before dying in a painful and decidedly unfunny manner', because nature really is a jerk. Those lissome gazelles weren't made to order. The podgy slow ones just got et by big cats.

« When somebody says 'evolution is just a theory', I say, 'yes, and your point is...?' »

Sharks aren't sleek and streamlined because someone selected them out of a catalogue; they're sleek because their food sources are fast-moving and scattered. A slow shark is a dead shark. The dark-winged variant of the peppered moth became dominant in England during the Industrial Revolution, when it was better able to camouflage against sooty trees, and thus escape its predators; the lighter-winged variant began doing better once clean air laws were passed and the trees stopped being as dark, and their situations were reversed. There are PCB-resistant fishes and drug-resistant strains of TB because the ones that weren't resistant died before they could reproduce and pass on their genes.

*

With all that in mind, look at your bowl of M&Ms. If you've followed instructions, all that's left are the blue ones. The blue ones are the ones that you, the predator, didn't hunt, because they were camouflaged, or toxic, or disgusting. The other colours were slow or visible or weak, and thus easily culled from the population. If the remaining candy could reproduce, the next generation would be all or mostly blue. That was a simulation of natural selection, a demonstration of an essential scientific phenomenon, and an excuse to eat a bowl of M&Ms. Truly, life is beautiful ■

MICROPROCESSORS
Intel vs. ARM
by Vitalijs Arkulinskis

Looking at an average McGill classroom, it seems that we are living in an age of paper cup coffee and shiny logo laptops. Mobility is key for our generation, and big semiconductor companies such as Intel and ARM know that. We are at a point where there is a conflict between two ideologies, productivity and efficiency, and analysts struggle to make a confident bet on either due to the variety of factors involved. The market used to be dominated by desktops at both the home and corporate level; however, in the last year desktop sales fell and notebooks took over at 69%. We have also seen an explosion in smartphone sales, which surpassed PC sales last year. The industries on the rise are all powered by ARM architectures. Building on its success, ARM is now trying to get into the server business, boasting ideal solutions for social networking websites and similar clients that require energy-efficient rather than performance-heavy computations. At the same time, Intel is not ready to give up the crown as the king of the semiconductor business and is working hard to optimize its solutions and penetrate the mobile market.

Intel chips power most of our servers, desktops and laptops. ARM designs power most of our smartphones, microwaves, ebook readers, and now tablets. The tablets we see today came as an evolution of the smartphone, and if they were to grow a keyboard they would pose a threat to netbooks. The simplicity of the ARM architecture allows it to add more cores to improve performance, while Intel needs to employ much trickier designs such as 3D semiconductor manufacturing, a technology that can benefit ARM in the future too. ARM is taking a bottom-up approach that leaves a lot of room for improvement and makes things easier, unlike Intel, whose top-down approach means stripping down and simplifying its excellent designs to the point of crippling them. As we have seen with Atom, people don't like investing in products they see as hobbled. To fight this, Intel is creating a new category of products called Ultraportables and is planning to invest around one billion dollars in advertising to create an image of ultra thin and ultra cool Intel-based notebooks. However, a recent internet scandal shook consumer confidence: during a demonstration, a pre-recorded video was used with a clear intent to mislead users about the machines' performance. The machines are in fact capable of the claimed performance, and the reason for the lie was simply a fear of something going wrong. Intel is clearly more vulnerable than it looks, but it is still strong, with cash it can invest in research and bold claims of being two years ahead of the competition.

ARM is a British design company headquartered in Cambridge that deals with designing and licensing semiconductor technologies. Operating with 1700 employees and a revenue of $622.2 million, the company appears small compared to Intel with its $43.6 billion revenue. Nevertheless, it poses a serious threat as it gains momentum. The next iteration of Windows is going to be available for both platforms, and there are also rumors of an ARM-based MacBook Air. Windows has done a platform shift before, but at that time the new platform allowed a significant jump in performance, making emulation of old programs feasible. ARM has equal performance at best. This makes emulation unfeasible and leaves Windows on ARM without its biggest strength: its application portfolio. This might mean that we are going to see a rise of mobile operating systems capable of desktop applications and a phasing out of conventional computer technologies. As an engineering student I require high performance machines for CAD, but in my day to day life I do not need anything more than a Minerva-and-Facebook machine, and this is what defines the market today: less need for high-performance machines, and more need for a long-lasting battery. Either way the competition is fierce and it can only be good for the consumer, and I hope to see more innovations come to pass and hopefully some software to go along with them ■

TRIUMF, Canada’s national particle and nuclear physics lab, has exciting opportunities for students and graduates in the following fields: Physics • Computer Science • Engineering • Chemistry • Nuclear Medicine www.triumf.ca/home/careers-at-triumf/student-programs Follow us on Twitter @TRIUMFlab



HTC Raider 4G Review
by Denis Maniti

All of the recent talk about the new 4G networks means little without devices to harness the power of these new, powerful networks. In comes the HTC Raider, a 4.5-inch, dual-core powered phone that is the first phone in Canada to harness both 4G LTE and a dual-core processor.

While HTC is well respected by the tech community for having some of the most solid handsets available, that's not to say they haven't gotten their fair share of criticism. The primary complaint from many Android enthusiasts is that HTC phones, albeit named almost all completely differently, all seem to look "similar". To some, this is an annoyance as handsets won't stand out from each other, but I personally see it as a good thing as it provides HTC with an identity, something it seemed to lack before the Nexus One. With the HTC Raider, however, HTC clearly took a step away from the usual unibody aluminum shell and instead went for a very Motorola-esque design. In fact, many times when taking a quick glance at the phone on a desk it really gave off the aura of the original Motorola DROID/Milestone. The mixed use of metal and plastic also reminded us a lot of the construction of the Samsung Captivate. Many have criticized the design and others have been even harsher, calling it plain ugly. Although I didn't hate the design, I certainly felt like it was a bit of a step back compared to other HTC offerings like the Amaze 4G. At 11.2mm and 176 grams, the Raider is thinner and lighter than its brethren, the Amaze 4G. To accommodate the behemoth 4.5-inch screen, the Raider is slightly wider than its 4G sibling but just about as tall. The weight can be a bit much for some but it adds to the premium feel that is somewhat lost with the use of plastics. In terms of ergonomics the Raider felt very well balanced but didn't feel as reassuring to hold as, say, the Incredible S. The glossy plastic sides and metal back simply do not provide enough friction between one's hand and the phone. The tapers on the side also felt quite exaggerated and could have been toned down slightly given that it is already slightly thinner than the Amaze 4G, which was respectably thin in its own right.

In terms of overall build quality, the Raider is a bit of a miss for an HTC built phone but still feels more substantial and solidly built than many competitor phones. I do have to commend HTC for stepping out of their comfort zone in terms of design and, while some may not like its looks, it is neutral enough to please most.

Ratings

Hardware
Appearance: 8/10
Screen: 9/10
Buttons: 8.5/10
Internal Hardware: 9.5/10
Speaker + Mic.: 8/10
Camera: 9/10

Software
UI Changes: 8/10
Apps/Bloatware: 7.5/10

Final Score: 8.5/10

Screen

If we only take into consideration HTC-made Android devices, the Incredible S would be the gold standard for the Super LCD screens used by the company. With the Raider, we've finally come across another Super LCD screen that is truly good enough to compare to the IPS screens and Super AMOLED screens of the mobile space. When reviewing the Sensation 4G, I expected to see a screen as good if not better than the Incredible S, but sadly this wasn't the case. The same train of thought was given to the Amaze 4G and again, it was a good effort, but fell short. The Raider has, fortunately enough, broken the cycle of disappointment. The Raider has the best SLCD screen I've seen on an HTC device to date; coupled with a very crisp qHD resolution in a 4.5-inch form factor, you get a fantastic reading, gaming and multimedia experience. HTC really brought the A-game when it comes to the Raider's display. Like almost any high end or flagship Android device you think of today, touch responsiveness, gestures and overall accuracy were as good as it gets.

Buttons and Keyboard

Pretty status quo as far as non-touchscreen user inputs to report. The power button and volume buttons weren't the most raised buttons, but were good enough to be noticed without looking at the phone. The dedicated camera button will certainly be missed, however, and with a camera as good as the one on the Raider, it really is unfortunate.

Battery Life

While the Raider was able to get us through a day's worth of normal usage, LTE is still a huge battery hog. To stress-test the battery, I enabled the hotspot and tethered 2-3 devices simultaneously while still doing benchmarks on the phone, and LTE really took its toll. Fortunately, should the situation arise where one would need to tether for that long, it's assumed that there would be a USB port available to keep the phone charged, and this is highly recommended should you go down this route.

Internal Hardware

Both Rogers and Bell had a nice lineup of existing dual-core phones but the Raider is an amazing addition to both their lineups. Being one of the first LTE phones on both their respective networks, it's obvious that this phone was going to bring some killer specs:
- 1.2 GHz dual-core third-generation Snapdragon processor
- 1GB of RAM
- 16GB of internal storage
- Adreno 220 graphics processor
- 75Mbps LTE-capable chipset
- 8MP auto-focus camera (dual LED flash + 1080p HD video)

LTE 4G Capabilities

Not too long ago, LTE was simply a dream: a future of ultra high speed data that could allow a true, full-featured, multimedia internet experience on mobile devices. Today, that dream is a reality. LTE coverage in Canada is increasing by the day and becoming available to more and more Canadians. "4G" has been a standard that has been used (and abused) over the past few months to designate any network that has significant improvements over 3G networks. Now, depending who you ask, HSPA+ may or may not fall under the 4G moniker, but ask those same people about LTE and they will most likely agree that it is 4G, or at least the closest thing to 4G we currently have. In our tests on the Rogers LTE 4G network in Montreal, LTE speeds were absurdly fast. With speeds that never dropped below 15 Mbps and reached just about 30 Mbps, we got the same speeds as earlier tests with the Rogers data stick.

Speaker and Microphone

Like most HTC phones, the external speaker and earpiece aren't of pristine quality but really are just good enough. While I certainly would like to see HTC make immediate strides in regards to sound, we should expect things to get better as HTC uses the synergy made from their majority acquisition of Beats by Dr. Dre.

Camera

One aspect where HTC traditionally hasn't been a powerhouse has been cameras. But this changed in Canada with the arrival of the HTC Amaze 4G. It brought the first HTC phone equipped with what they claim is one of the best lenses and some of the best camera software on any smartphone. This new wave of ultra high end camera experiences on HTC phones is continued with the HTC Raider. Stills and video came out absolutely crisp, sharp and with immense amounts of detail. Unlike the Amaze 4G, the Raider is missing some of the camera features that the Amaze 4G had in its camera app, but it still produces quality shots. What I miss the most from the Amaze 4G, however, is the dedicated camera and camcorder shutter buttons.

The Last Word

The Raider is certainly a solid phone that is in some ways built on the solid base of the Sensation 4G/EVO 3D, with the addition of 4G LTE connectivity and a vastly improved camera. This overall package is just about as good as the Amaze 4G but with slight differences that make it a slightly inferior phone. That being said, if you're a user that absolutely needs the fastest data speeds, look no further than the HTC Raider. LTE is such a vast improvement over existing 3G technologies that it most certainly deserves the "4G" moniker and makes the Raider one of the best phones available in Canada right now ■

Solving the world’s most important problems One solar cell at a time Graduate student Audrey Kertesz loves solar energy, but she also realizes that most solar panels in cities aren’t efficient. When urban shadows fall, so does efficiency. So Audrey is designing a distributed control system that thinks about the system at a cell level. It generates peak power, even when a few cells go dim. Her work won her the 2010 NSERC André Hamer Postgraduate Prize. Her future? Bright. Got something big to solve? Our Engineering graduate programs can get you closer. MEng: Professional master’s degree with specializations in • Entrepreneurship, Leadership, Innovation & Technology in Engineering (ELITE) • Engineering & Public Policy • Globalization • Robotics & Mechatronics • Computational Mechanics in Design • Energy Studies • Healthcare Engineering MASc: Traditional, research-intensive master’s degree PhD: Highest degree in Engineering

Applications now open. Visit engineering.utoronto.ca



<!-- GUEST ARTICLE -->
HTML Matures Again
by Joseph Vybihal

In the 1980's, when people began thinking about making the Internet more accessible, Apple Corporation launched a product called HyperCard in 1987 that popularized words like hyper-text and hyper-links. HyperCard was the first commercial product that combined a text editor with a database to create note cards that hyper-linked to other cards or files on the computer. Each hyper-word was underlined and linked through a hyper-link (a database entry) to a media file. If you clicked on the underlined word then the other media file would be opened. This product was a huge success and foreshadowed HTML, which was introduced in 1991 by Berners-Lee in a document titled "HTML Tags".

In the 1990's, HTML was simply a Hyper Text Markup Language. Meaning it combined the ideas that Apple popularized in HyperCard with another popular idea, a printer scripting language called PostScript (which Adobe uses so successfully in their PDF products). As you know, PostScript was created in 1982 as an advanced text publishing language for printers. Book publishers needing professional fonts and graphics used PostScript to ensure a consistent high quality printout on any printer or software supporting PostScript. HTML was the next logical evolution of these ideas as it applied to the Internet. Internet pages could now be formatted in a pretty way and hyper-linked to other web pages or media stored on the Internet. This helped popularize the Internet in 1995, making it accessible to the general public.

The original HTML was like the original HyperCard program, a simple, easy to use scripting language. It was not professional like PostScript. This reality had plagued HTML for many years. In 2000, CSS (Cascading Style Sheets) was introduced, addressing many of the shortfalls in HTML text formatting and layout. CSS permits professional page layout control much like PostScript did for printers. JavaScript together with DHTML permits true programmable elements integrated with the HTML document tags, unlike Java which is less connected to the HTML document. Recently, HTML5 has further advanced HTML scripting by providing more powerful programming elements using the new tag <canvas>, media support with the new tags <video> and <audio>, and layout tags that complement the generalized <div> tag. These new layout tags contextualized divisions so that they can be understood by the browser. We will look into this in some detail in this article. I won't be going over all of HTML5 but only the more important additions. Tags like video and audio existed in various forms within IE, Microsoft's Internet Explorer, but they were not adopted by other browser companies and remained proprietary to IE. HTML5 brings these ideas into standardization.

Contextualized Divisions

The HTML <div> tag is great. It is a generalized tag. Developers are free to use it in any way they see fit. It gives the developer the ability to isolate sections of a document and define it as being different in format and style, but the shortfall is that it has no meaning to the browser. This can be partly overcome by adding an id attribute, <div id="footer">. In this case the developer now knows why this <div> tag is present. It defines the "footer" section of the web page (in this example). The developer can even write CSS and JavaScript code that targets that id attribute, but the browser is still ignorant of its meaning. We cannot pass responsibilities to the browser since it cannot read human languages and therefore cannot figure out the meaning of id="footer".

Google "Pave the cow paths" and you will see the W3C (World Wide Web Consortium) rationale for HTML5. If you look at a hill with cows grazing, you will discover that they have worn down the ground in some areas, creating paths. Humans do this as well. Go to any campus or park and you will see paved areas and worn down areas that have turned into paths. W3C data-mined the Internet to see which <div> tag id attributes were most commonly used. Like the cow paths, they found the following id attributes to be most common and turned them into HTML5 tags to contextualize divisions: <header>, <footer>, <hgroup>, <article>, <aside>, <figure>, <figcaption>, <mark>, <nav>, <section>, and <time>.

Table 1: HTML5 layout division tags
<header>: Defines the top constant portion on all pages
<footer>: Defines the bottom constant portion on all pages
<section>: Primary text area of the current page
<article>: Defines topics/paragraphs that reside within a <section>
<aside>: Secondary section-like area with its own articles
<figure>: Area on the page that contains an image
<figcaption>: The caption that goes with the <figure>
<mark>: Highlights a part of the text
<hgroup>: Helps group the headings of a section
<time>: Helps define machine readable date/times

To help us understand these contextualized divisions, look at Figure 1. This figure represents a web page divided into sections as defined by HTML5. Standard web pages look like Figure 1. They commonly have a header and a footer. They have menus, represented here by <nav></nav>. A primary writing area, represented by the <section></section> tags, is divided into one or more topics, using the <article></article> tags. There is often a secondary writing area, represented here by <aside></aside>. Multiple articles can also be placed within the aside. Business, personal and educational web sites all have this common layout. There are exceptions to this format but HTML5's layout tags do not address them.

Figure 1: HTML5 layout division tags (a page skeleton showing a <header> at the top, a <nav> menu, a <section> containing two <article> elements, an <aside>, and a <footer> at the bottom)
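To make Figure 1 and Table 1 concrete, here is a minimal page skeleton using the new layout tags. This is a sketch added for illustration (it is not from the article's own samples), and all headings, links and file names in it are placeholder content:

Code
<!DOCTYPE html>
<html>
<body>
  <header>
    <hgroup>
      <h1>Technophilic</h1>
      <h2>Winter 2012</h2>
    </hgroup>
  </header>
  <nav>
    <a href="index.html">Home</a>
    <a href="articles.html">Articles</a>
  </nav>
  <section>
    <article>
      <h2>First topic</h2>
      <p>Published <time datetime="2012-01-16">January 16, 2012</time>.</p>
    </article>
    <article>
      <h2>Second topic</h2>
      <p>More primary content, with a <mark>highlighted</mark> phrase.</p>
    </article>
  </section>
  <aside>
    <figure>
      <img src="photo.jpg" alt="A campus photo">
      <figcaption>A captioned image inside the aside</figcaption>
    </figure>
  </aside>
  <footer>ISSN 1925-816X</footer>
</body>
</html>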


Table 1 summarizes the layout division tags. Of course, all other HTML tags and CSS can be nested within these new HTML5 tags. HTML5 is not as new as it used to be, but you still need to be careful with browser support. For example, you are probably safe with IE9 (Internet Explorer 9) and Google's Chrome. Also make sure your editor supports HTML5 syntax.

The Canvas Tag

The <canvas> tag is simply a container that gives JavaScript, or any scripting language, the ability to directly write and draw in the defined area. If the browser does not support this tag, then any text written between <canvas> and </canvas> will be displayed on the browser page instead of the canvas area. Using the width and height attributes the developer can specify the size of the canvas. You should understand that the canvas tag should be viewed as an actual blank painter's canvas that can be drawn on in any way you see fit. Here is example code of drawing a square box in the canvas area:

Code
<html>
<body>
<!-- Canvas defined as a 500 x 500 pixel box -->
<canvas id="myCanvas" height="500" width="500">
Your browser does not support the canvas tag
</canvas>
<script type="text/javascript">
// Use JavaScript to connect with the
// "myCanvas" ID and get a drawing context
var canvas = document.getElementById('myCanvas');
var ctx = canvas.getContext('2d');
// Draw blue square
ctx.fillStyle = '#6666FF';
ctx.fillRect(0, 0, 200, 200);
</script>
</body>
</html>

How does <canvas> compare with <svg>? Canvas is direct and immediate pixel drawing via JavaScript programming. SVG is an XML scripting language that assumes browser support. The developer uses XML to define the shapes they want and the browser then renders the image. This means that the browser maintains an internal memory model of the shape. What's good about this is that the developer can use a programming language to manipulate the shape as an object. What's bad about this is that the browser must support SVG and must use up resources to maintain this model in memory.
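For comparison, here is roughly what the same blue square would look like in SVG (a sketch added for illustration, not from the original article). Because the browser retains the <rect> as an object in the document, scripts or CSS can later move or restyle it without redrawing pixels by hand:

Code
<svg xmlns="http://www.w3.org/2000/svg" width="500" height="500">
  <!-- The browser keeps this rectangle around as a DOM object -->
  <rect id="blueSquare" x="0" y="0" width="200" height="200" fill="#6666FF" />
</svg>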

Audio and Video Support

The new HTML5 tags <audio> and <video> provide codec-based audio and video support. No need for Flash or Silverlight. Check out http://www.diveintohtml5.net/video.html for an in-depth explanation of codecs. Since this is based on codec technology and different browsers support different ones, HTML5 supports a graceful fall-back procedure. More than one codec can be specified and, similar to CSS font downgrading, the tag will try out each codec one at a time in the order presented. Any text that appears between the <audio></audio> and <video></video> tags will be displayed on browsers that do not support these tags. For example:

Code
<!-- Notice the multiple <source> tags. The mp3 codec will be tried first.
     If that does not work then the ogg codec will be tried. -->
<audio controls="controls">
  <source src="boo.mp3" type="audio/mp3" />
  <source src="boo.ogg" type="audio/ogg" />
  Your browser does not support the audio element.
</audio>

Here is an example of the <video> tag:

Code
<!-- Similar to <audio>, you see the graceful fall-back from mp4 to ogg to webm -->
<video width="320" height="240" controls="controls">
  <source src="jump.mp4" type="video/mp4" />
  <source src="jump.ogg" type="video/ogg" />
  <source src="jump.webm" type="video/webm" />
  Your browser does not support the video tag.
</video>

Browser Support Issues

There are a few issues to keep in mind when writing HTML5:
1. Old browsers don't support it
2. New browsers support only part of the HTML5 standard
3. New browsers do not support the standard in the same way

Solution: Write JavaScript code to determine the supported capabilities of the browser accessing your site, or use some of the free libraries on the Internet such as "Modernizr" ■

Sample code
To download the code listed in this article and for more sample code, visit http://www.technophilicmag.com/html5-tutorial

Want More HTML5?
To dive deeper into HTML5's semantic elements, check out:
• http://w3schools.com/html5
• http://html-5-tutorial.com
• http://w3.org/TR/html5
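As a sketch of the hand-rolled approach (not taken from the article's downloadable samples), the following JavaScript probes for <canvas> and <video> support directly; libraries like Modernizr perform essentially these kinds of checks for you:

Code
// Feature detection without a library: create an element and
// check whether the browser exposes the HTML5 API on it.
function supportsCanvas() {
  var el = document.createElement('canvas');
  return !!(el.getContext && el.getContext('2d'));
}

function supportsVideo(mimeType) {
  var el = document.createElement('video');
  // canPlayType returns "", "maybe" or "probably"
  return !!(el.canPlayType && el.canPlayType(mimeType) !== '');
}

if (supportsCanvas()) {
  console.log('Canvas is available; safe to draw.');
}
if (supportsVideo('video/webm; codecs="vp8, vorbis"')) {
  console.log('WebM video is likely to play.');
}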



McGill Energy Dashboard MY.PULSEENERGY.COM/MCGILL/DASHBOARD

McGill's Energy Dashboard displays graphs of how much energy is consumed in buildings all over campus. Here, we show the consumption of electricity in the Trottier building (HDR picture on the left) during the week of January 16, 2012 (dashboard reading: 308.2 kW).


TONY CHAN CARUSONE



Q&A

Tony Chan Carusone

We met Dr. Chan Carusone during his recent visit to McGill, where he gave a guest lecture about the ongoing research in his lab at UofT. He and his students are designing nanoscale electronic chips for the communication of information. Some of our readers will recognize him as the co-author of the textbook Analog Integrated Circuit Design.

In your talk, you mentioned that optical communication is becoming more practical for shorter distances. What has permitted this reduction?

In the past, optical fibers were ultrafine and very delicate strands of glass that required careful installation by highly trained personnel and tight mechanical tolerances. The cost of such installations could only be justified for transoceanic telecommunication or similar long-haul communication.

Advances in optics have recently allowed thicker and more bendable optical fibers to carry data at rates of 10+ Gb/s. That's fast enough to transmit an entire Blu-ray disc in under 30 seconds. Using a thicker fiber relaxes tolerances everywhere in the system, making fiber optic installation easier, cheaper, and more robust.

« Light is significantly attenuated as it propagates through electronic chips, even after only a few millimeters. Light can travel along optical fiber for kilometers with little or no appreciable attenuation. »

Why is it difficult to do optical communication at very small distances?

The challenge is to make optical links economical and practical for use over very small distances. Currently, inexpensive optics are capable of communicating at data rates up to around 14 Gb/s, with research progressing towards commercial systems at 28 Gb/s. However, the optoelectronics at either end of the links (i.e. the components responsible for converting the data to & from electrical signals) are very similar to those used 20 years ago when optical communication was reserved for long-haul links. The cost of these optoelectronic components is limiting the application of optical communication in areas that are cost-sensitive.

One area of research in your lab is CMOS photodetectors. Why is using CMOS an improvement?

CMOS is clearly the technology of our age. It has given us an ability to mass produce high-performance transistors at such low cost that it has transformed the world. CMOS has not only advanced computer chips. Digital image sensors were, for a long time, manufactured using CCD technology. But what really made image sensors ubiquitous was the discovery that, by embedding a small circuit alongside each pixel of the sensor, CMOS image sensors can have a quality comparable to CCD sensors. Today, CMOS image sensors and digital cameras are everywhere, and creative uses for them continue to emerge. Similarly, CMOS photodetectors with GHz bandwidth will enable a whole new set of applications for optical communication with far-reaching impact on our modern information age. Not only will they make optical communication less expensive, but more importantly they will enable optical links to be mass produced and integrated seamlessly into computing, memory, and wireless technologies using nanoscale CMOS manufacturing technologies.

In your talk, you mentioned that it is difficult to put photodetectors on CMOS. Why is that?

CMOS technology has been refined over decades to facilitate the fabrication of very high performance transistors. Unfortunately, the requirements of high performance transistors conflict with the requirements of high performance photodetectors: Tiny transistors require very thin interfaces between n-type and p-type silicon, whereas photodetectors perform better when these interfaces, called depletion regions, are thick enough to absorb all photons incident on the detector.

When photodetectors are made using narrow depletion regions, many photons penetrate right through the depletion region, resulting in a slow persistent current that can obscure the received data.

What has been done to circumvent these problems?

Several labs are trying to develop new manufacturing technologies that will permit the manufacture of both high performance transistors AND photodetectors. Unfortunately, those approaches imply increased cost. Our approach is to improve the performance of CMOS photodetectors by making clever use of the high performance transistors already available in today's CMOS at no additional cost. This is analogous to the advances that permitted CMOS technology to revolutionize image sensors.

What's the difference between light travelling on a chip and light travelling in an optical fiber?

Light is significantly attenuated as it propagates through electronic chips, even after only a few millimeters. Light can travel along optical fiber for kilometers with little or no appreciable attenuation.

What other projects do you work on?

Another project my lab (the Integrated Systems Laboratory, http://isl.utoronto.ca) is currently working on is to improve the energy-efficiency of distributed supercomputing environments by targeting the interconnections within them. The total energy per year consumed by compute servers is 220 TWh, roughly 10% of which is attributable to I/O. Hence, even research that improves I/O energy efficiency by only 1% in these installations yields a savings equivalent to the average electricity consumption of 20,000 homes. Our research promises improvements far exceeding 1% ■
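A quick back-of-the-envelope check of that last figure (our arithmetic, not Dr. Chan Carusone's, assuming an average household consumption of roughly 11 MWh per year):

\[
220\ \text{TWh} \times 10\% \times 1\% = 0.22\ \text{TWh} = 220\ \text{GWh},
\qquad
\frac{220\ \text{GWh}}{11\ \text{MWh per home}} \approx 20{,}000\ \text{homes}.
\]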


NEUROSCIENCE
Veronika Zlatkina

We met Veronika at the annual Eureka Festival in downtown Montreal. She's a neuroscience researcher at the Montreal Neurological Institute.

How much do we know about the brain?

We know very, very little about the brain, but much more than we knew, say, 200 years ago. I think what we know most about is probably anatomy, because anatomy is something that is seen with the eye. We can study the surface and folds of the brain; if we cut a brain, we can study it under the microscope and see its structures. As for the functions, we don't know much about that.

What tools are there for studying the brain?

One of them is magnetic resonance imaging, which allows us to understand which parts of the brain are involved in the performance of different tasks—for example, which parts of the brain are involved when we see a red square or listen to an analogy. And, what we know is that we can look at the brain or a portion of it (roughly a 1 cm by 1 cm area) and we can tell what this region is doing, but we can't see individual cells and we cannot say what individual cells are doing.

Is it because there are too many neurons in the brain that we can't understand what they are all doing?

There are many neurons and they are very small, so at this level—if we speak of the human brain—we can only record the activity of patches of neurons but not of a single neuron. However, the information processing occurs at the level of the single cell. So, we know what the ensemble of cells is doing but not what the single cell is doing. To complicate things further, even if we were able to see the activity of a single cell, we would not know what to correlate it with: For example, if a person is reading a text and we see a cell fire, what exactly is the cell responding to? Is the cell responding to a picture, a letter or a thought? This is difficult to answer because most tasks (at least those that are remotely interesting) are very complex.

Which animals do scientists most typically use in the lab?

Scientists work a lot with, and study, rats and mice. When people study the activity of individual cells, they work with molluscs (such as snails). But we also study other types of animals such as frogs, which go together with molluscs in terms of individual neuronal activity. Sometimes, scientists also study cats and dogs. Which animal you choose to study depends on what you want to study. If you want to study the brain itself, you study mice, rats and monkeys. But if you study the physiology of the body, then you can work with dogs, pigs and other mammals larger than mice.
We hear a lot that only a certain percentage of our brain is used and the rest is not. So, what does it mean exactly?

People say we use 10% of our brain, but there is nothing that really suggests this is true. You can use an MRI scanner, which allows you to see which parts of the brain are active by showing you how much oxygen is supplied to that part of the brain. So if you place a patient there while they are sleeping or relaxing, and you look at the picture of the brain, every single part of the brain is doing something. When we want to know which part of the brain is working more or harder, we always have to compare our results to the baseline, which is when the subject is doing nothing. Then you ask the person to perform a task such as reading a text, and you can identify the language area of the brain. However, all the brain is working at that moment, because we know of very few areas in the brain where damage to that area has no impact on the patient.

How far away is research for brain regeneration?

People are working on it; I believe it can be done soon. When people work on brain regeneration, they don't just work with humans. What they do is get samples of neurons from rats or mice and put them in a petri dish and give them nutrients to let them grow and divide. After that, they break them at certain points and try to use chemicals or other techniques borrowed from biomedical engineering to fix the broken points. They can see the results in the dish. But how to go from the dish to the actual specimen is something hard to do. I believe we will get there but it'll take time.

Other than neurons, are there other kinds of cells in the brain?

Yes, neurons also have support cells that we call glia, of which we know 3 types. So the brain functions because our neurons are able to conduct the impulses, send and transform information. But the glial cells keep the neurons alive and functioning. They are the cells that help the nutrients get to the neurons from the blood vessels. And to keep neurons together, we need glia, which translates to "glue" from Greek. In other words, glial cells are involved in maintenance, or housekeeping, functions.

What kind of research do you do?

That's my favourite question. I study the anatomy of the parietal lobe, which is one part of the brain. More specifically, I study the sulci, which are the folds on the brain surface. People sometimes think they are just randomly arranged, while others think they aren't random but are just very hard to study. What we know so far in terms of their anatomy dates back to the beginning of the 20th century. What I do is try to go a bit further using MRI scans of many patients. I'll study their sulci and try to subdivide the regions in terms of the patterns or irregularities that are formed in the human brain. The reason for studying that is that we believe the sulci to be like landmarks. If you think of the human brain like the Earth, next to rivers, we usually have towns. In the case of the brain, next to the sulci (the rivers), we have different functional areas. The question I'm trying to answer is: If I subdivide the sulci into basic units, will I also be able to find basic functional units very close? One application of this research is for surgeons who are doing surgery and have a digital image of the brain. This would help them to find out how much of a tumor they can remove safely without affecting the patient's capabilities ■


ENGINEERING

In Praise of Resonators
by Surabhi Joshi

Resonators, as the name suggests, are devices that display the property of resonance: they oscillate with much larger amplitude at their natural frequencies. This means that they can be used to either generate waves that have a certain frequency or to select desired frequencies from a given signal. Therefore, they can act as actuators and/or as sensors! If this definition sounds too abstract, it might be helpful to note that resonators are all around you and can be found in many familiar objects. For example, musical instruments are essentially resonators that are acoustic in nature. Wooden bars of a xylophone, strings, and pipes are all resonators that produce soothing tunes. Resonators are also present in exhaust pipes of automobiles. Here, they work with mufflers and their role is the exact opposite, i.e., to lower any sound or noise! In electrical circuits, mechanical resonators are often used to produce signals of a precise frequency. Finally, these devices are also seen in practically every gadget these days. For example, gyroscopes and accelerometers contain resonators and are used to detect rotation in several smartphones and entertainment units. Yet, very few people are aware of their presence or the significant role played by them. Unfortunately, while iPads bask in their glory, and GPS or satnavs enjoy the limelight, these modest yet powerful devices are not given the same attention.

Even as resonators continue to improve your Wii and iPad experience, they are also used constantly in other microelectromechanical systems (MEMS) devices and applications. For example, they can be found in atomic force microscopes (AFMs) in the form of cantilevers (which happen to be the most commonly used detection methodology). The cantilever is driven at a frequency close to resonance, and the corresponding variations in the amplitude or phase of the cantilever vibration are detected, indicating the force gradients and hence the topology of the surface one is trying to probe. Resonators are also found in evaporation sources (used to deposit metallic films) within clean rooms, where they monitor the rate of deposition. Once the evaporated material deposits on the surface of the crystal oscillator, the resonant frequency is modified and this change is detected by the corresponding circuit. This principle also illustrates the use of a resonator in applications related to sensing.

A quick note on sensors. Sensors are devices that are typically frequency-modulated, that is, they undergo a change in the output frequency which is related to the physical variable that one is trying to measure. One obviously desires a precise measurement from these sensors. In order to obtain such a measurement, the frequency stability of the sensor’s output should be high. This depends on the damping in the resonator which is described by the quality factor. Therefore, frequency and quality factor are two important parameters when describing a resonator which is to be used in a sensor.
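As a rough reminder of the quantities involved (our sketch, not part of the original article): for a simple mass-spring resonator, the natural frequency and the link between damping and quality factor are

\[
f_0 = \frac{1}{2\pi}\sqrt{\frac{k}{m}}, \qquad
Q \approx \frac{1}{2\zeta} = 2\pi \cdot \frac{\text{energy stored}}{\text{energy dissipated per cycle}},
\]

so reading the "damping factor" quoted below as the damping ratio ζ, values of 10⁻⁶ to 10⁻⁴ correspond to quality factors of roughly 5,000 to 500,000.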

As sensors, resonators currently have various applications. For example, they are used for high sensitivity detection of bacteria, measurement of the stoichiometry of surface compounds, evaluation of hydrogen storage capacity, monitoring of air pollution, laser cooling, nanotechnology, and gravitational wave detection. Impressive breakthroughs have been made in terms of the resolution achieved by these resonators. For example, microfabricated silicon resonators at 4.2 K have managed to demonstrate a mass resolution of ~7 zg (10⁻²¹ g)!

Design requirements. Regardless of the application in which beam or plate resonators are being used, which can range anywhere from sensing (physical, biological and chemical signals) and communications (timers, frequency references, filters) to energy harvesting (conversion of ambient mechanical movement into portable electrical power) and fundamental studies of quantum mechanical systems, three fundamental requirements dominate their design.

If these three requirements are met, the possibilities are endless. Progress in this field will ensure that micro- and nano-resonators made from silicon, metals, graphene, nanotubes, and other materials serve many functions and open the door to new devices and applications. Ultrasensitive resonators are already helping the biological sciences achieve single-particle resolution, which is invaluable for detecting and counting biological nanoparticles and individual viruses. Resonators also matter for the environmental sciences, because they can be used to monitor chemical changes in air or water. Sensitive nanomechanical resonators are expected to keep improving probe microscopy and metrology, 'smart' clothes, shoes, cosmetics and other wearable devices will become common, and biosensors in particular will benefit immensely. It is remarkable to see the countless applications that employ these resonators. Their significant role in sensing, detection, and timekeeping truly makes them the unsung heroes of the world of MEMS ■

References

P. Meystre, "Cool Vibrations," Science, vol. 333, pp. 832-833, 2011.
B. Lassagne et al., "Ultrasensitive Mass Sensing with a Nanotube Electromechanical Resonator," Nano Letters, vol. 8, pp. 3735-3738, 2008.
L. He et al., "Detecting single viruses and nanoparticles using whispering gallery microlasers," Nature Nanotechnology, vol. 6, pp. 428-432, 2011.
K. Y. Yasumura et al., "Quality factors in micron- and submicron-thick cantilevers," Journal of Microelectromechanical Systems, vol. 9, pp. 117-125, 2000.


/ 19

How Apple and Steve Jobs Designed the Future by Alexander Kunev

The influence of Apple under the guidance of Steve Jobs, both on the computer world and on the way we view design today, is, without exaggeration, monumental. The man whose name became synonymous with the Apple brand was responsible for a fundamental change in how we think about interacting with computing devices. The influence of the Mac, the PowerBook, the iPod and the iPhone on the world of technology is unmistakable, as they blur the boundaries between computers and accessories.

A successful design is one that makes the mechanism by which an object works disappear, and Apple products of the latest generation fit that definition exactly. It is telling that the iPad is used with ease both by small kids (and even babies) and by grandparents, in a way that lets them fully draw on technological innovations such as the Internet and digital movies. Focusing on the experience itself, rather than on how to 'program' the device to perform the operations you want, is what differentiates Apple's philosophy. Ever since the Macintosh was introduced in 1984, Apple's goal has been to develop user-friendly, differentiated products based on proprietary technology: the idea of a unique product in a box that you can open and start working with right away. It is no coincidence that consumers tend to think of Apple products as ones that "just work". This is all the more apparent when you compare their operating system to Windows; the differences have narrowed over the years, but Apple's OS X is still seen as largely free of viruses and bugs, letting you focus on the task at hand.


And to be sure, Steve Jobs clearly stated that he did not wish for Apple to be the best, but to be among the best, recognizing that other technology companies have their strengths too. What makes Apple stand out, however, is its attention to detail and its customer relations. Jobs was there every step of the way, pushing his employees to perfect each product and radiating a creative energy rarely seen in traditional CEOs.

Apple engineers are not inventors. Strictly speaking, they have not invented a major technology, except perhaps multi-touch (though Steve Jobs had over 300 patents to his name). Rather, they are innovators; in fact, among the most important technology innovators since the era of Ford's Model T. By putting together the first Macintosh, with the graphical user interface and pointing device pioneered at Xerox, they marked the beginning of the modern personal computer. Designing the PowerBook in the early 1990s as the first real laptop with a trackball and palm-rest area was another major industry leap, as all portables until then had placed the keyboard at the front.

« Apple engineers are not inventors [...] Rather, they are innovators. In fact, one of the most important technology innovators since the era of Ford's Model T »

And then came the iPod, the iPhone and the iPad. Though three different products, each with a clearly defined purpose and each bringing a small revolution to its area, they all share a common theme. It was Steve Jobs' personal vision that computers wouldn't be just for work, or just sit on our desks; they would be everywhere, part of our lives, and as such he would need to create a place for our digital life. Apple was the first to achieve widely popular sales of digital songs, simple email on a portable handheld device, and e-books on a tablet. It developed a successful business model where others had failed, making people want to transition into the new digital era by putting all their material on an electronic device. If Bill Gates' vision was to put the same operating system in every computer and open it to outside software that would reshape it in its makers' own image, Jobs' idea of the future was to make the interface as simple and useful as possible, with the main focus on viewing digital content and getting optimal input from the user through multi-touch. Microsoft largely succeeded in its vision, and we are now on track to see Apple bring its own vision closer to reality.

But to make his products as unique as possible, Jobs took a particular interest in design. He wanted his products to have appeal as objects, to be beautiful from the outside too, so as to fit into your home the way a favourite piece of furniture or an engraved book does. He took extraordinary steps to make sure that Apple's design language of the 1980s, named Snow White, was carried through all of its products and was part of each step of manufacturing. Then came Espresso in the 1990s, with a transition to curvier devices, which finally gave way to the current minimalist design of the iDevices. And though Jobs was away from Apple for more than a decade, the company still followed his design principles. Other companies, such as Samsung and HP, are now embracing Apple's design approach, having completely re-branded their products in the last 10 years to appeal to a new generation of Apple followers. Microsoft developed its own mobile OS from the ground up, creating the ultra-minimalist Metro interface design that would also form the basis for its desktop system of the future.

As he said himself in his famous Stanford commencement speech, Steve Jobs was ready to die, as he considered death to be the single best invention of life, its change agent. He had already played his part and was ready to step down, not only from the company but also from life, to make room for the next visionary. If he helped make computers and mobile devices a finished product, there are still other technology problems in the world to be solved, such as energy security and human disease.

The Future of Software. The next big revolution in technology will surely be the complete integration of mobile devices and tablets with the PC and Mac. Apple has already proposed a starting direction by introducing iOS elements into Mac OS X. Microsoft has proposed its own plan by aiming to develop a new universal platform, starting with Windows 8, that would scale to all devices, no matter their size or shape. But these updates are not the real revolution. The real defining factor will come when someone integrates enough processing power and storage for all devices to offer the same power and capabilities. Then we would have to create a completely new user experience for how devices interact with each other. And Apple seems to be the frontrunner to create this paradigm shift and, once more, get the maximum number of people to adopt it ■


20 /

ENGINEERING HISTORY The Russian Engineer by Michael Spivack

The elements of engineering as I understand them now have, to a great extent, come from my experience as an engineering student. Here, I attempt to explain the utility of engineering within the historical context of Imperial Russia, although my analysis is colored by my own practice of engineering. This has led me to focus on historical details that may seem overly specific, but that are important in the larger context of engineering development.

Introduction

In the 19th century, advances in technology helped build railroads across Europe and Asia, drove the construction of many more fortresses and castles, and led to ever greater weapons of war. Compared to the rest of the world, Russia stands out as an empire that built great works across different types of engineering. This essay argues that, despite the deficiencies of the Imperial Russian education system, Russia was still able to benefit from the talent of engineers in order to maintain its place as a great power in Europe.

Great works of engineering occurred throughout many fields in the Russian Empire. Successful engineering was required to build the great edifices of the empire, it changed the way people traveled and communicated, and it aided in the empire's expansion and protection. St Petersburg served as the capital of Imperial Russia, and to demonstrate the power of the state, impressive buildings were often constructed there. Buildings of great size are needed to house a central bureaucracy as large as Russia's. Architects and engineers in Russia took the French structural design methodologies for spanning large wooden roofs and repurposed them for iron. Russian engineers built massive spanning roofs supported by solid iron trusses to create the large open areas needed in buildings like the Winter Palace (Figure 1) and the Aleksandrinsky Theatre (Figure 2).

Figure 1: The Winter Palace in St. Petersburg

Figure 2: A 19th-century photochrome print of the Alexandrinsky Theatre in St Petersburg. (Source: United States Library of Congress)

Engineers also changed the accessibility of St Petersburg for its population. During the 19th century, many bridges were built to improve mobility within the city. Between 1823 and 1826 the civil engineer Wilhelm von Traitteur planned and executed the construction of five iron bridges within the city. These were handsome single-span iron bridges; some of them still survive today.

The utility of bridges for the development of a state should not be underestimated: without a reliable means of mobilizing and transporting agents of state power, it is impossible to run an empire as large as Imperial Russia. The power of successful engineering in Russia was always being used to advance the cause of the state. In the planning and execution of war, competent engineering is especially important; this is a field where the structured nature of the discipline is able to shine. In the 19th century, sieges and the defense of cities became the focal point of many battles, and when faced with the challenge of converting a city into a weapon of war it is essential to understand its structural composition. The military engineer Eduard Totleben led the construction of forts and bastions around the city of Sevastopol during the Crimean War, and these fortifications aided the Russian Army in repelling many allied assaults. After distinguishing himself as a hero of the Crimean War, Totleben went on to successfully direct the Russian siege of Plevna. Russian war engineering was a massive undertaking that required thousands of men to be housed and supplied. Proper fortifications were essential for successful military campaigns in the 19th century, and an understanding of engineering was needed to build these structures. The Russian Empire gained much from the aid of engineers in order to retain its place as a great 19th-century European power.

« It was not possible to maintain the large number of engineers needed to support an empire while systematically remaining hostile to mass education. »

Meanwhile in the rest of Europe...

As the industrial revolution pressed on, the powers of Europe saw advances in technology that fundamentally changed what was possible. In France during the 1880s, Gustave Eiffel began planning the Eiffel Tower. This work of engineering represents a true mastery of iron as a medium for construction. Eiffel relied on his expertise as a builder of railway bridges, applying his understanding of the physical behavior of iron to erect a 324-metre tower that has now stood for over a century. It is interesting to note that the Eiffel Tower was originally intended to be a temporary structure. The Scottish-born engineer James Watt served Britain mightily by developing the steam engine that helped launch the industrial revolution. Watt was well trained in mathematics and used this training to develop mathematical measuring instruments; in homage to him, the unit of power is today called the watt. Under the guidance of the British engineer George Stephenson, the modern railway system was pioneered in Britain. Stephenson was concerned with the speedy transfer of goods and people, and so hundreds of engineers were employed to design a railway system focused on public ubiquity, with little care for aesthetics. Engineers like Eiffel, Watt, and Stephenson created new industries through a strong understanding of mathematical models of physical systems and, in the process, added to the prestige of their respective nations.

The engineer in Imperial Russia

Engineering as a profession really began developing in Russia under Peter the Great, who understood the importance of technology for the future of Russia. He worked ceaselessly to bring Russia out of the middle ages of an agrarian economy and to begin the development of new factories and the fostering of new sciences. By recruiting foreign experts in technology while sending Russian students abroad to Western Europe, he began expanding the scientific understanding of Russian craftsmen. This policy of exchange eventually became critical to the success of Russian engineering as the centuries progressed. Peter understood that lasting institutions were needed to continue the scientific experimentation needed to formalize engineering. In 1723, he decreed that a college of manufacturing should be built. This institution worked to increase the number of manufactures and factories in Russia, and its rigid organization was a precursor of the modern organization of engineering faculties and firms. Additionally, Peter established the Academy of Arts and Sciences, which espoused a utilitarian view of engineering: the scientific achievements of the Academy were intended to aid in the construction of ships, mining operations, and better artillery. Engineering can only develop well in a structured environment in which scientific developments are exchanged and meticulously recorded. Peter the Great pushed for Russian industrialization and in the process laid the groundwork for the professional development of the engineer.

The education system

At the end of the 19th century, only 21 per cent of the Russian population was literate, and the number of schools and universities in Russia experienced little increase. The deficiencies of the Russian educational system are best viewed in comparison to its contemporary European neighbors. Russia had slightly fewer universities, graduating fewer students, than Prussia, which had only a fourth of Russia's population. The deficiencies of Russian elementary education were even more colossal: Russia had only one-fourth of the output of the Prussian elementary education system. Imperial Russia was not fostering the educational environment needed to produce intellectual professionals like engineers. Meanwhile, the few schools that did exist suffered as they attempted to teach about the modern world. Tsar Nicholas I administered harsh control of the universities via Shikhmatov, and this led to major decreases in university attendance. It was not possible to maintain the large number of engineers needed to support an empire while remaining systematically hostile to mass education, and Russia suffered from this policy. Prior to the Crimean War, Russia had only 1,000 kilometers of railroads, compared to the 10,000 kilometers that existed in Germany at the time. One cannot possibly build massive railway systems without hundreds of engineers ready to advise their construction. Russia would need engineers to develop its industries, but it would be a long time before the political climate changed enough to bring this about.

Importing talent

Left without a class of well-trained engineers, Russian achievements in engineering came instead from a select few exceptional individuals. One such individual was Agustin Betancourt. He was recruited to Russia from Spain in the early 19th century after he had helped establish the School of Roads and Canals in Spain. Betancourt advised the construction of many buildings and invented new apparatuses for industry, but his greatest accomplishment was the founding of the first advanced Russian school of engineering, the Institute of the Corps of Engineers of Routes of Communication. Betancourt instilled in this institute the proper values of modern engineering, which greatly helped to develop the identity of the engineer as a profession throughout the 19th century. The institute focused on training engineers who could become productive immediately upon graduation. This required hands-on application of skills as well as a structured understanding of contemporary engineering theory. To help facilitate this, Betancourt continued the Russian tradition of importing experts by bringing experienced French engineers to lecture at the institute. Betancourt's institute helped to facilitate future technological successes in Russia because it understood the responsibilities of engineers. Structured education was needed to ensure that the sophisticated buildings and machines of the 19th century could be operated efficiently and safely over the long term. This focus on safety and performance was a major underpinning of the training of modern engineers.

What's more, the foreign industrialist Charles Gascoigne was brought over from Britain in 1789, against the wishes of British officials, and was put in charge of three major Russian iron foundries. Before Gascoigne's arrival, Russian forging techniques were lacking, and Russian foundries could not produce the high-strength steel that other European foundries were making. Gascoigne used his privileged position as overseer of a major section of Russian iron production to begin modernizing the industry. His expertise brought cast-iron technology to the industrial level in Russia, and this was put to use in building the ever larger and more advanced artillery pieces that supported the Russian army. Gascoigne collaborated with other foreign engineers such as Charles Baird to establish one of the few privately owned foundries in Saint Petersburg. The absence of significant numbers of well-trained Russian engineers created a vacuum that foreign talent was able to monopolize. This early stage in the development of the engineering profession emphasized the economic utility of engineering skills and was concerned with the protection and exclusivity of engineering knowledge. The advantages of having a large number of skilled engineers had not yet been realized by the empire.

The close of an era

As the 19th century came to a close, the engineering profession in Russia had started to advance rapidly. During the revolution of 1905, political upheaval led to an expansion in the membership of engineering organizations and began a movement to organize all engineers in Russia. These organizations focused on the exchange of scientific ideas as well as the practical matters of defining engineering as a profession within society. The emerging and important branch of electrical engineers was particularly politically active in the formation of engineering societies. The new goal was to unite all engineers in Russia in one "All-Russian Congress of Engineers." Throughout the 19th century, the organizing of engineers had been delayed by an imperial bureaucracy that feared the potential threat to power that a unified and intelligent group might pose. Engineering organizations showed interest in engineering as a profession, and sought to expand it in Russia by finally ending the long tradition of importing considerable foreign talent. Russian engineers became more socialistic in response to the waning of imperial power.

A pragmatic view of engineering

The organizations that formed after the 1905 Revolution rapidly accelerated the formalization of the field. Engineers now sought the stringent requirement that all engineering theory be based on experimentally validated science. Young engineers who had just graduated began to put the field of engineering before their own interests. They demanded ethical change within the profession, and the results of their campaign materialized in the early days of the Soviet Union. Russian engineers like Vladimir Shukhov were now making revolutionary contributions to the field. Shukhov designed the world's first hyperboloid structures, and he ensured the legitimacy of his designs using new methods of geometric structural analysis that he pioneered. Bringing structure and organization to the field of engineering elevated it to an art form and facilitated the construction of massive engineering projects.

Conclusion

Engineering in the Russian Empire was complex. Russia was a huge empire, and so educating its people was difficult. This did not mean that Russia would not industrialize. Peter the Great brought Western engineering experts to Russia, and this began a long legacy whereby modern engineering practice emerged in Russia. Great individuals appeared throughout Russian engineering history who changed its course according to what they saw as the best way to practice engineering. Modern engineering practice uses the empirical evidence obtained from scientific experimentation and builds mathematical theories for how the world operates. Based on these theories, designs are prepared in mathematical terms for how to complete the task at hand. These designs only have value if they can be put into practice. To engineer successfully, one must remember to consider the environmental, safety, and economic implications of one's work with respect to a humanistic moral framework ■

References

Balzer, H. D. "1905 and the Growth of Professional Organizations." Russia's Missing Middle Class, 1996.
Baumann, R. F. Reforming the Tsar's Army, 2004.
Chapman, T. Imperial Russia 1801-1905, 2001.
Cracraft, J., ed. Major Problems in the History of Imperial Russia, 1994.
Egorova, O. "Agustin Betancourt and His Contribution to Higher Engineering Education in Russia." 12th IFToMM World Congress.
Paxton, J. Imperial Russia, a Reference Handbook, 2001.
Saint, A. Architect and Engineer: A Study in Sibling Rivalry, 2007.
"Totleben, Eduard Ivanovich." The Modern Encyclopedia of Russian and Soviet History, 1985.

Did you know? We accept articles all year round.

Send your articles to

articles@technophilicmag.com


JOKES / 23

XKCD Comics

by Randall Munroe, XKCD.com, CC License

In-House Comics by Sze Chung Mui

This issue, we’re introducing our very own comics. What do you think? E-mail us your thoughts, suggestions and ideas at ideas@technophilicmag.com.



