System Reboot


System Reboot Ezine Fall 2018

History of Computers Page 8

Will VR Treadmills Run the Future? Page 14

Breaking the Sound Barrier Page 22

A “i, Robot” Page 28



Table of Contents

The History of Computers pgs. 8-11

Major Advancements in Computer Technology Timeline pgs. 12-13


Will VR Treadmills Run the Future? pgs. 14-17

So You Want to Build Your Own Computer? pgs. 18-19


Breaking the Sound Barrier pgs. 22-25

Is Your YouTube Video Accessible to Deaf Viewers? pgs. 26-27

A “i, Robot” pgs. 28-31

Science Fiction in Real Life pgs. 32-33



A LETTER FROM THE EDITORS

DEAR READERS,

It's difficult to write about technology because it's such an expansive and constantly changing topic; under this umbrella falls everything from the latest iPhone to stone tools from Paleolithic times. It spans the beginning of human civilization and has been present in every era, advancing society. To narrow it down, we decided to write about technology that's important to us and our readers. If you're a member of the disabled community, as 48.9 million people in the U.S. are, you know that advancements in technology can change lives and stir controversies. If you use a computer or phone, you're exposed to artificial intelligence every day, and you might be wondering just how it works. If you're a content creator, you could be curious about how to reach a wider audience online. In this magazine, you'll read about all this and more. We hope you put it down feeling optimistic about technology and just where it will take us next.

SINCERELY,



WHO ARE WE?

14-year-old freshman Tina Carter enjoys ancient history, mythology, and forensic, astronomical and psychiatric sciences. On the rare occasion she has free time, she enjoys reading and drawing.

Sarah Appel is a 14-year-old journalist-in-training and is passionate about disability advocacy. She has been hard of hearing since she was four. In her spare time, she enjoys watching The Office on Netflix.

Eddie Vane is a 15-year-old student at LASA who enjoys sleeping and playing video games. He is interested in computers. He is active in robotics and participates in the school’s robotics team.

Pasta enthusiast Pike Murphy is a 14-year-old who enjoys cooking, watching TV, and hanging out with his dog Leska. He likes learning about computers and even built his own computer tower.



The History Of Computers

Computers went from taking up entire rooms to fitting in our pockets in less than a century. How?

By Eddie Vane

Computers are one of the most influential advancements in human history. They allow people from opposite sides of the world to exchange information in the blink of an eye. They let us calculate complex problems faster than ever before. They store immense amounts of information in our pockets. Despite how advanced computers are now, they weren't always this way. A mere 80 years ago, computers could barely store anything, were very slow, and took up entire rooms.

Computers got their start in 1936 in Germany, when a young man named Konrad Zuse developed his Z1, which is now considered the first modern computer. According to Zuse, his computer weighed about 1,000 kilograms (2,205 pounds). The Z1 is recognized as the first modern computer because it was the first computer capable of being programmed. Despite being the first modern computer, it contained comparatively primitive versions of all the parts still used in computing today. Zuse spent the next few years developing a language to code his invention, and in 1943 he completed his project. Plankalkül, as he called it, is considered by some to be the first programming language.

An image of early Plankalkül code. Photo courtesy of TechRepublic.

The reason that Plankalkül's place in history is disputed is that it wasn't fully implemented in a computer until 1993, so engineers debate whether its development or its implementation should count.


The next big step in computing technology came in 1947, when a team of three computer scientists named William Shockley, John Bardeen and Walter Brattain developed the transistor, which is considered fundamental to the development of computers. Transistors are semiconductors that can be used to amplify signals. They allowed computers to be much more powerful. To this day, computers still use transistors based on the original design.

A replica of the first transistor.

After transistors hit the computer technology scene, advancements really started to speed up. In 1949, what is considered the first high-level programming language was completed. John Mauchly developed and released what he called "Short Code," which was much more powerful than Zuse's Plankalkül.

According to Bill Johnson, an engineer at Indeed, the invention of coding languages is one of the most important advancements in computing technology.

Parallel processing, which is also integral to computers, was developed in 1969 by a company called Honeywell. Adi Miller, a recent graduate in electrical engineering and computing from the University of Texas at Austin, said, "I think the most important change was parallelization of processes, meaning multiple processes were able to run on a single CPU. This innovation increased efficiency of a computer and significantly reduced the time it took for a computer to complete a task."

Because of the parallelization of processes, Honeywell's computers were able to perform actions much faster than anything before them, and computers would continue to speed up exponentially over the next few decades. In 1984, Apple released its first operating system. This was followed by Microsoft's first operating system, Windows 1.0, in 1985. The first Windows operating system was heavily influenced by the first Mac OS, and both were 16-bit operating systems. Both companies continued to release new updates regularly. Throughout that time, computers were increasing in power and decreasing in size. Perhaps the most iconic piece of hardware became known on January 9, 2007, when Steve Jobs announced the iPhone. The iPhone was later released and was a resounding success.

Above: Screenshot from Mac OS 1.0. Photo courtesy of Apple. Below: Screenshot of Windows 1. Photo courtesy of Microsoft.



The first generation of the iPhone was a revolutionary device. Photo courtesy of iFixit.

The first-generation iPhone weighed only 135 grams, which is 7,407 times lighter than the Z1 and amazingly more powerful. The iPhone and other phones like it continue to push the boundaries, becoming smaller and more powerful with each generation. While they're focused on going small, others are going big.

Some companies make computers the same size as the computers from the '40s and '50s, but immensely more powerful. In 2016, the most powerful computer ever, called the Sunway TaihuLight, was activated. It has 1.31 petabytes of memory, 20 petabytes of storage, and uses 15 megawatts. It cost $273 million to build.

Despite the many achievements in the history of computers, scientists are still looking to push the envelope. According to Jim Mynatt, a senior manager of AppleCare OS Support Engineering, within the next five to ten years we can expect much more refined code, which will make developing new software much easier. "The line of code that never breaks is the line of code you didn't have to write," Mynatt said. In other words, the fewer lines we have to write to get the same result, the faster we can make better programs.

An example of modern Java code. Photo courtesy of Chegg.


Miller says that most companies are currently "going in the machine learning and artificial intelligence direction," so we can expect smarter machines in the near future. So why have these advancements not come already? With the blistering pace at which computer technicians seem to be improving technology, why don't we have things like true artificial intelligence yet? The main problem computer scientists run into is that one part of computing technology is advancing faster than the others. Miller says that computer software is getting so advanced that the hardware we currently have isn't powerful enough to run it. She said that the main way she overcomes this obstacle is by simply adding more space, power, and GPUs (graphics processing units) to allow the computer to run the programs. The increase in software power also makes it hard for teams to work on a program together.

Bill Johnson said, "I think the biggest problem is how hard it is to create large pieces of software with many people working on them together. It's pretty easy to make a small app where one person knows how every piece of it works, but building a large application like a large website or an operating system is very difficult." He says that the main strategy his team uses to bypass this challenge is "breaking [the code] into smaller pieces, and having the small pieces communicate with each other."

People will look back on the invention of computers in much the same way we look back on the invention of the wheel: a device that irrevocably changed society in such a way that the world would be a much different place without it. Despite the challenges we face, computers continue to get faster and more powerful. Even 80 years after their invention, they continue to revolutionize our lives in ways we won't be able to understand for years.

Below: The TaihuLight supercomputer, the most powerful computer in the world. It has 1.31 petabytes of memory, 20 petabytes of storage, and uses 15 megawatts of power; it cost $273 million to build. Photo courtesy of New Atlas.


Major Advancements in Computer Technology Timeline

1943: The first official programming language, called Plankalkül, is developed by Konrad Zuse. It does not get fully implemented until 1993, so its status as the first computing language is disputed. Photo courtesy of history-computer.


1936-38: The first computer capable of being programmed is invented by Konrad Zuse. He named it the Z1.

1947: The transistor is invented by William Shockley, John Bardeen and Walter Brattain.

Replica of the Z1 built at the German Museum of Technology in Berlin. Photo courtesy of ComputerGeek.


A replica of the first transistor built to commemorate its 50th anniversary. Photo courtesy of Inventing Europe.



Screenshot of the first Apple computer running the original Mac OS. Photo courtesy of Apple.

1969: The first prototype of parallel processing is developed by Honeywell.

1984: The first Mac OS is released.


1976: The first application of parallel processing (computers doing multiple tasks at once).

1985: Windows 1.0 is released on November 20.

Screenshot of Windows 1.0 Photo courtesy of Microsoft.

1949: The first high-level programming language, called Short Code, is developed by John Mauchly. Even though it was revolutionary, it had slow processing speeds.


Will V.R. Treadmills Run the Future?

Virtual reality could be getting an upgrade, but those in the community say it's probably not going to happen anytime soon.

Image courtesy of Infinadeck

The omnidirectional treadmill is a treadmill with a twist. Not only can it go forward and backward, but side to side and diagonally as well. These treadmills, used for virtual reality gaming, have long been regarded as money-wasters. More recently, however, the treadmills have undergone a redesign, putting them back in the mainstream gaming sphere.

The new developments put into these treadmills have been game-changing for the industry. New companies such as Infinadeck have attempted to achieve perfection with these devices, striving not only to make them work smoothly but also to make them readily available.

The HTC Vive virtual reality headset, with controllers that allow interaction with the virtual environment. Image courtesy of HTC

Unfortunately, most of these companies face numerous challenges blocking their path to V.R. glory. One of the first and largest problems these treadmills face is price. As the tech in omnidirectional treadmills increases in quality, the price per product increases. Most treadmills go for upwards of two to four thousand dollars. The hefty price tag is off-putting to many virtual reality users, such as V.R.-owning redditor u/Mr_Boourns, who said, "I'd get one myself if they weren't so … expensive. [T]hey're about $1.5k … out of my budget." While most people who own V.R. devices typically have a fair sum of spending money, most still do not have enough extra cash to pay for a device that has very few uses so far. In addition to price, omnidirectional treadmills still don't have enough public traction to have garnered a satisfying number of compatible games to make them a worthy purchase.


A user playing a game in virtual reality while running on the Virtuix Omni V.R. treadmill. Image courtesy of Virtuix

In the realm of game development for omnidirectional treadmills, many developers are not able to make games for such devices because of their price and low probability of profit. This problem particularly repels indie game developers and small companies because of the high investment price and the fact that profit could determine how long a company can stay in business. Because of the relative rarity of the treadmills, the selection of games containing support for the device is very limited. This obstacle of limited selection is the same one that occurs in console gaming; when a new console is released, it doesn't have many games available in comparison to the PS4 or Xbox, for example. The game restriction makes the treadmills less appealing at the moment, but as they go up in popularity, the treadmills should collect a larger following, meaning more games to come.

Like most regular treadmills, omnidirectional treadmills are fairly large. The size is a problem for consumers because the average person doesn't have room to fit a device of its size comfortably in their house. The issue of size makes the treadmills hard to find and purchase, but if the tech is developed to make a consumer-sized platform, it may boost commercial sales and availability.

As mentioned, the tech in these treadmills is not quite at the flawless levels of widely available computers like smartphones, which have been polished to nearly bug-less levels. Some of the problems they face include a slow reaction time (because the device has to predict where the user is going to move and how fast it's going to perform said action); fitting good enough motors and computers into a reasonably sized platform so that it functions properly; and making such tech compatible with existing devices.

An early version of a Virtuix Omni being stood on. Image courtesy of Virtuix



The KatVR omnidirectional treadmill in use with full body tracking. Image courtesy of KatVR

Youtuber SmarterEveryDay using the Infinadeck omnidirectional treadmill at the Infinadeck headquarters along with foot tracking hardware and an HTC Vive headset. Image courtesy of SmarterEveryDay


Daniel, a game developer at a small studio (his last name has been omitted for privacy), describes how his company could not hope to use one anytime soon, explaining, "The cost of a [virtual reality treadmill] is way too high, and we could not expect any of our consumers to have one." He further elaborates, saying, "A company would have to receive an investment for the full cost of developing a game for those platforms in order to be feasible."

At the moment there are very few add-ons to virtual reality besides extra distance cameras and full body tracking, so any new innovative accessory will most likely sell quite well as long as it’s actually attainable.

On the other hand, however, if these treadmills overcome these obstacles, many think they could be the next major advancement in virtual reality.

As PlayStation Virtual Reality owner Donovan put it, "It has the possibility to become the next big advancement if [the companies] do [marketing and improvements] properly," but there is some worry as to how big brands' involvement with the devices will play out.

It is expected that major brands such as Google, Apple and Microsoft will most likely not get involved with omnidirectional treadmills because of their very niche commercial community. The iMac, on the other hand, has a wide audience of potential users. While anything is possible, big brands have a greater incentive to focus on products from a sales point of view, so the chance is quite low: only people who have thousands of dollars to spare, have enough room, and are interested enough will buy the treadmills, whereas a mobile phone will sell hundreds of thousands of units almost instantly.

All in all, it's believed by most in the V.R. community that the omnidirectional treadmill has lots of potential but a small market that is further hindered by the numerous obstacles that stand in its way. The treadmills risk losing interest the longer much-needed improvements take to develop.

Still, some think that the treadmills have a fighting chance. If companies are devoted to improving their products while working to maintain public interest, then V.R. treadmills could just be the next must-have device. As we have seen with past V.R. innovations, it often takes time and lots of redesigning before a product is affordable and catches on with consumers, but other V.R. technology has become popular in the past half decade, so it stands to reason that this will too.

"I feel like if V.R. treadmills catch on, people will have more freedom and people will say it's worth the one, or more, thousand dollar price tag. Seriously underrated technology." —Redditor u/Mr_Boourns



SO YOU WANT TO BUILD A COMPUTER?

Here's what parts to get.

The first thing you'll want to pick out when building your computer is the case. The case will keep the more sensitive parts of the computer safe. Plus, it will be the part everyone sees, so pick a good one.

The cooling system is something else you'll need, whether it be fans or liquid cooling. It keeps your computer from overheating.

The power supply is the next most important item to get, as it powers your whole computer tower. Power Supplies typically don’t need to cost more than $750 for a typical home computer, but of course quality increases with a higher cost.

The graphics card will determine the quality of games and videos. It will also dictate which of the new games you can play, so I recommend getting the second or third newest tier of quality.




The motherboard holds and accesses most of your other parts, save for the power supply and cooling system. Most motherboards look pretty similar, but if you have some extra cash, you can shoot for ones that have customizable lights.

The CPU will act as the brain of your PC, sending control signals to everything in and attached to the motherboard.

RAM acts as the memory of a computer, determining how much you can do at a time.

Getting a Wi-Fi chip will be very helpful, as you can't download any programs or games on your computer without an internet connection.

Photos courtesy of (a) Corsair, (b) Nvidia, (c) Intel, and (d) Edimax



Hearing devices such as cochlear implants have stirred controversy in recent years. Photo by Jennifer Binford.



BREAKING THE SOUND BARRIER

New technologies advance accessibility while raising questions about identity and community.

BY SARAH APPEL



WORDS FOR THE WISE

Deaf Culture: The term "deaf culture" refers to the customs, language, and societal norms of those who identify as part of the deaf community.

ASL: ASL stands for American Sign Language, and it is used in the United States. It is not a signed version of English; it has a separate grammar structure and syntax.

Assistive Technology vs. Accessible Technology: The term "assistive technology" refers to technology developed specifically to help a disabled person complete a task (for example, hearing aids to help someone hear certain pitches that they couldn't otherwise). "Accessible technology" refers to devices that are designed so that a lot of different users can access them (for example, the iPhone).

Sitting at her dining room table, audiologist Sara Morton adjusts the silver cylinder that hangs around her neck. A blinking green light near its edge pulses rhythmically. Satisfied, she rests her hands on the table, ready to field questions. To many, this would be a strange sight. To Morton, it's all part of the job. What she's wearing is the Roger Pen, a microphone that transmits her voice directly to my hearing device.

Advancements in accessible and assistive technology for the deaf and hard of hearing, such as the Roger Pen, have been exponential in recent years, from the vibration feature on Apple iPhones to the latest models of cochlear implants. As the disabled community in America grows more visible and vocal, companies are scrambling to meet the demand. For those with hearing problems in particular, accessible and assistive technologies have stirred controversies and ignited conversations about the intersection of identity and technology.

The Roger Pen isn't the only advancement in technology for the deaf and hard of hearing that's been made in recent years. Many deaf and hard of hearing individuals use everyday technologies like the iPhone to adapt to a hearing world. "I have patients... that use their iPhones as an alarm clock because they have that vibration feature," Morton said. Referring to the earlier days of technology, she noted that "there was no texting [then]—texting is huge."

That's a sentiment that Alisa Nakano, a deaf teacher in the Austin Independent School District, can relate to. She's been deaf since birth, and says that texting was a major development. Instead of using TTY services (an arduous communication service for the deaf), people can now use messaging apps and FaceTime, making it easier for those who can't hear over phones or those who use sign language rather than a spoken language.

In terms of technologies made specifically for the deaf and hard of hearing, it's only gotten better over the past few years. High-powered hearing aids, advanced microphones and cochlear implants have been among the standout technological advancements in recent years, said Morton.

So how do deaf and hard of hearing individuals benefit from these technologies? "The one thing we know for certain is when people start to experience hearing loss, they retreat from their social interactions. They retreat even from their families," Morton explained. "If they were very active in the community, they've stopped, because [at] some of those meetings they can't hear... they become very isolated." She used the example of those who sit on the board of their homeowners association.


She says that these situations can be very hard to hear in because multiple people are talking at once, often around a long table that makes it difficult to read lips. Assistive technology, like the Roger Pen she's wearing at the moment, can help people hear in situations that would otherwise be much more challenging. "They become more social again, and more active in their communities, and [do] the things that make them happy," Morton said.

Technological advancements aren't useful only for adults in the workplace or on neighborhood boards; they're prevalent in classrooms, too. For students, reliance on assistive or accessible technology is a common, and frustrating, necessity. Static-prone FM systems, finicky microphones and irritatingly inaccurate automatic captions are among the hallmarks of the school experience for deaf and hard of hearing students, especially those in mainstream public schools.

Stephanie Cawthon, a professor at the University of Texas at Austin and the director of the National Deaf Center on Postsecondary Education, said that students and teachers shouldn't rely solely on technology to help them succeed in the classroom. To get the balance between technology and teacher awareness right, she noted, there are a variety of strategies, and she stressed the importance of having information represented visually. Posters and photos are good examples of learning materials that help students synthesize information visually rather than auditorily. She warned that "[we] need to use the technology and common sense we already have before [we] care too much about new fancy technologies."

However true that may be, Morton said that technology for the classroom is improving. Tech companies are realizing the importance of students being able to hear not just teachers but also peers as more group work is introduced to the classroom.

"The newest microphone right now on the market is the Roger Select, which has taken everything about the Roger Pen and enhanced it," she said. "When [students] are in a group situation that's very noisy, their microphone is still directional." The Roger Select has the ability to pick up six different voices at once while cutting down on background noise, a factor that could mean the difference between a successful group project and one full of misunderstandings and roadblocks.

Of course, not all accessible or assistive technology has been greeted positively by the deaf and hard of hearing community. In an article titled "Why Sign-Language Gloves Don't Help Deaf People," Michael Erard, a writer in residence for the Atlantic, addressed the hubbub that's been surrounding so-called "ASL-translating gloves."

The Roger Pen is one of the many devices made for the deaf and hard of hearing community. Photo by Sarah A.



Those in the deaf or hard of hearing community use many devices to adapt to a hearing world. Photo by Sarah A.

"For people in the [d]eaf community... the sign language glove is rooted in the preoccupations of the hearing world, not the needs of [d]eaf signers," he wrote. Because the gloves only focus on one part of ASL (what the hands do), they "misconstrue the nature of ASL (and other sign languages)." Equally important to understanding sign language are facial expressions, the turn of the torso, and the shape of the mouth. Moreover, many deaf activists argue that "though the gloves are often presented as devices to improve accessibility for the [d]eaf, it's the signers, not the hearing people, who must wear the gloves, carry the computers, or modify their rate of signing." Many people say that this demonstrates why the deaf and hard of hearing community needs to be involved in the design of new technology.


"It's important for deaf people to be involved because it's their culture; it's their language," Nakano said. She used the example of an English speaker making a Spanish app. Of course the developers would want to consult with native Spanish speakers and get their input, she said. That's just common sense, so why should it be any different for sign language? "They're the ones using the technology," Morton agreed. "Users of the technology... would be able to speak to what works and what doesn't work." The heart of the problem, Nakano said, is that too often, devices for the deaf and hard of hearing are thought up without the input of these communities.

It's clear that there has to be a shift in how developers perceive disabled communities: not as objects of pity or as unable to assist in the design process, but rather as valuable sources of input involved at every step of the process.

On my left ear, I wear a pink hearing aid with a swirling green and white mold. Most people are familiar with this technology; many have family members who use them. Lesser known is the metallic blue device I wear on my right ear; it looks like something out of a sci-fi movie, with a circle seemingly stuck on my head. The cochlear implant is another type of hearing device, and it works differently than a hearing aid. It takes sound and turns it into electrical signals, which then travel down an electrode to the cochlea. The electrode in the cochlea sends electrical signals to the auditory nerve, and the nerve carries the signals to the brain.


The cochlear implant requires surgery and intensive brain-training for one to be able to recognize electrical signals as sound. However miraculous this technology may seem, it's important to understand a few key points. First, it doesn't work for all deaf and hard of hearing people, and when it does, it's not an all-encompassing fix. It can take years, particularly for deaf adults like Nakano, who had cochlear implant surgery last year, for people to learn how to translate the electrical signals into sound.

Secondly, and perhaps most important to understand, many deaf people see the cochlear implant as a potential threat to deaf culture. ASL has always been seen as inferior to spoken language; at different points in history, it's been banned altogether. Many deaf people today feel that cochlear implants are a product of hearing people trying to "fix" deaf people, just as they have in the past with the use of oralism, a technique that emphasizes spoken language and lip-reading over sign language.

It's also necessary to realize that more than ninety percent of deaf children are born to hearing parents. For this reason, many parents feel that a cochlear implant is the only choice they have, and they feel pressured to have the surgery early on so that a child's spoken language skills develop at a normal pace. "A lot of children have implants, and that's not the children's choice, it's the parents' choice," Nakano said. Before parents make the decision for their child to get a cochlear implant, Nakano urged them to understand the "deaf side of it." In an article for the Atlantic, Allegra Ringo wrote, "Because [d]eaf children who receive cochlear implants at a young age will likely be educated in the oralist method, they are less likely to learn ASL during their early years, which are the most critical years of language acquisition. What may seem to a hearing person like an opportunity may be seen by some [d]eaf people as a loss."

Some deaf parents are weighing the pros and cons of their children getting implanted. Morton hears many concerns from parents. "[Parents are worried] that the identity of the deaf child... is being changed or forced in a different direction," she said. That's not the case, she reiterated. "We're not changing a person's identity. That's never the goal. The goal is to provide the person with whatever they need: if they need additional cues, if they want it for safety."

"There has to be a shift in how people perceive those who have hearing loss; if that shift can happen, the accessibility part will follow."

For deaf adults, the decision to get a cochlear implant can be a long and emotionally charged process. Morton has a few patients who grew up a part of the deaf community and eventually decided to get implants. Their choice involved sometimes years of indecision, research, and turmoil before “they really came to the point [where they realized], ‘I can still be a part of the deaf community and choose to have implants,’” Morton said. Setting aside cochlear implants, Morton thinks that “there has to be a shift in how people perceive those who have hearing loss; if that shift can happen, the accessibility part will follow.” However much technology is advancing, it’s essential for people’s mindsets to evolve with it.



Is Your YouTube Video Accessible to Deaf Viewers?

1. Do you have professional (not automatic) captions on?

The most important part of making a video accessible is adding captions, and not automatic ones, either! The captions should reflect exactly what is being said in the moment.

2. Is your face angled toward the camera?

Many deaf and hard-of-hearing individuals rely on lip-reading to understand what's being said. Make sure that you're facing the camera so they can see your mouth.

3. Is there minimal background distraction?

Deaf and hard-of-hearing viewers need to focus on lip-reading, the captions, and any auditory information they can pick up. This is made easier by the absence of visual and auditory distractions, so try to cut the background chatter and choose a wallpaper that doesn't distract from your face.

4. Is there a transcript?

If, for some reason, you can’t put captions on a video, a transcript in the description box is the next best thing. Don’t link the transcript or lyrics to a different web page—this completely takes your viewer away from the lip-reading they need to do.

Remember...

You should strive to make your videos as accessible as possible in order to reach a larger audience. This guide is only the beginning! How will you make your videos understandable to blind individuals? Those who have auditory processing disorders? It’s time to get to work!



Illustration: a mock YouTube video page with the four tips called out. Image courtesy of ClipartXtras.

10,000,000 people in the U.S. are hard-of-hearing, and close to 1,000,000 are functionally deaf. So it’s important to make sure that platforms like Instagram, Snapchat, and YouTube are accessible to those who need extra help understanding what’s being said. Use this guide to judge whether or not your videos are deaf-friendly!





A “i, robot”

The bumpy history of AI, from science fiction to a real doctorate thesis option. By Tina Carter

Humanity, it seems, is a fundamentally lonely species. After all, the oldest stories are about people finding other species or creating other sentient beings. Those stories are coming closer to reality with the refinement of AI (computer intelligence), which makes non-human sentience seem more possible with each passing day.

Just five years after the end of WWII, Alan Turing concluded that if humans use reasoning and sensory perception to solve problems and make decisions, then we could create something that could do the same. Despite his paper on the subject, however, computers just couldn't keep up with the imaginative ideas of computer researchers. From the invention of the computer until the mid-20th century, computers didn't have any storage, and thus no way to continuously execute even the shortest programs, much less one complicated enough to mimic human thought, Hans Moravec, a computer science graduate student in the 1960s and '70s, wrote in his thesis. They executed commands as soon as they were put in, not developing reliable memory and instead relying on human memory, until the end of the decade; even then, the memory was limited. Unfortunately, memory (or the lack thereof) wasn't the only problem the idea faced; since computers were big, expensive, and maintenance-heavy, large universities were among the few groups that could own and care for them.


Photos (above and previous page) of the computer in the IBM cloud research facility. Researchers often zip-tie the wires to keep them contained, and name tests after Marvel characters (personal favorites are adjacent computer blocks labeled "Thor III" and "Loki's Back"). Photo by Tina Carter.

From the late 1950s to the mid-1970s, the idea of AI really took off. Computers became cheaper and more efficient, allowing programs for statistical decision making to be stored and executed. Eventually, however, the hype died down and the lack of computational power caught up with it. Computers just didn't have the storage or the processing power needed to communicate. Humans know many words and combinations of them, but "computers were still millions of times too weak to exhibit intelligence," Hans Moravec said. The initial breakthroughs slowed and so did funding, leaving AI largely under-researched for the remainder of the decade.

Finally, in the early 1980s, a new breakthrough in computer software allowed for the computing of larger and more complicated algorithms, bringing about a new boom in AI, this time with the revolutionary new concept of the "expert system."

An expert system took a human expert's decisions on certain topics and transferred them into a computer. Experts from the designated field would be asked to make decisions on specific subjects through binary options, i.e., IF (an action) THEN (response), IF NOT (initial action) THEN (move to next statement). This worked to a certain extent: computers could mimic the decisions and actions of humans, but the process of programming all of the options was quite tedious, and if the system was introduced to a task it couldn't fit into its program, the machine was effectively useless. After the novelty of expert systems died off, researchers once again hit a wall, and AI development was, for the second time, left largely untouched for another two decades.
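To make the IF-THEN idea concrete, here is a minimal sketch of an expert-system-style rule chain in Python. The scenario (troubleshooting a PC), every rule, and every name in it are invented purely for illustration; real systems of the era encoded thousands of hand-written rules from specialists, but they failed in the same way this toy does when no rule matches.

```python
# A minimal, illustrative sketch of the IF-THEN style described above.
# The rules and the troubleshooting scenario are made up for this example.

def expert_system(symptoms):
    # Each rule is checked in order: IF (condition) THEN (response);
    # IF NOT, fall through to the next statement.
    if "no power" in symptoms:
        return "Check the power supply."
    if "overheating" in symptoms:
        return "Check the cooling system."
    if "no display" in symptoms:
        return "Reseat the graphics card."
    # A situation the experts never wrote a rule for: the system is stuck.
    return "No rule matches; the machine is effectively useless here."

print(expert_system({"overheating"}))      # -> Check the cooling system.
print(expert_system({"strange beeping"}))  # -> No rule matches; ...
```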

The concept of "machine learning," originally thought up in the 1950s with the initial rise of AI research but not put into action until computer software caught up in the last decade, is now one of the most popular computer science research topics. Now that AI can, with reasonably high reliability, tackle unfamiliar problems, it has an almost endless number of uses. "From photo identification to filling in missing information to mass data summarization, really anything that researchers think of is fair play," an IBM computer scientist said. In sharp contrast to expert systems, machine learning programs don't actually contain much information. "[AI] used to have quite a bit of intelligence—now though, it's kind of dumb," he continued. Unlike the IF-THEN-ELSE statements gathered from experts, machine learning uses statistics to put "weights" on certain outcomes for a given input.

Computer vision object detection on an image of a road. This example shows detected people, cars, traffic lights, and even handbags. Photo courtesy of Udacity.


Using visual inputs as an example: researchers start with a huge database of different photos of cars and a huge database of different photos of trucks. Each photo has a set answer of "car" or "truck." All the photos are run through the program, and the computer gives an answer, "car" or "truck." At first it only has a 50/50 chance of getting it right, but as it runs through the system, each correct answer is given a "weight," causing the program to "remember" what the answer is. It isn't really known what the weights mean, just that they capture the statistics of the photos, such that if a photo of a car the program has never seen is run through, it will recognize it as a car.

Due to the uncertainty of what the "weights" are, some people are skeptical of AI's abilities. "Some people just aren't ready to trust [them]... [they] think that something could go wrong, and people won't know why," said David DeCaprio, CTO of ClosedLoop, an AI healthcare company. "Frankly, they aren't wrong, I just think that [AI] are reliable more often than not." Even without knowing exactly what's going on, he said, the results he has seen are enough to convince him.
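As a rough illustration of those "weights," here is a minimal sketch of a single-neuron, perceptron-style classifier in Python. The features (vehicle length and height), all the numbers, and the training loop are invented for this example; it is not the software used by any of the researchers quoted here, but it shows how repeated nudges from labeled examples produce weights that also classify inputs the program has never seen.

```python
# A toy sketch of the "weights" idea: a single-neuron, perceptron-style
# classifier. Real image classifiers learn millions of weights from raw
# pixels; here we invent two numeric features (length and height in meters)
# so the example stays tiny and runnable.

# Labeled examples: (length, height) -> 1 for truck, 0 for car.
examples = [
    ((4.2, 1.4), 0), ((4.5, 1.5), 0), ((3.9, 1.4), 0),   # cars
    ((7.5, 2.8), 1), ((6.9, 2.6), 1), ((8.1, 3.0), 1),   # trucks
]

# Center each feature around its average so the weights start from a fair baseline.
n = len(examples)
mean = [sum(f[i] for f, _ in examples) / n for i in range(2)]

def features(raw):
    return [raw[0] - mean[0], raw[1] - mean[1]]

weights = [0.0, 0.0]   # one weight per feature; the program starts knowing nothing
bias = 0.0
rate = 0.1             # how strongly each mistake nudges the weights

def predict(raw):
    x = features(raw)
    score = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if score > 0 else 0

# Run the labeled examples through repeatedly; every wrong answer nudges the
# weights toward the correct one -- the "remembering" described above.
for _ in range(20):
    for raw, answer in examples:
        error = answer - predict(raw)
        x = features(raw)
        weights[0] += rate * error * x[0]
        weights[1] += rate * error * x[1]
        bias += rate * error

# Vehicles the program has never seen: the learned weights still classify them.
print(predict((7.2, 2.7)))   # prints 1 (truck)
print(predict((4.1, 1.5)))   # prints 0 (car)
```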

The more people look into it, the more they see: new uses for machine learning are cropping up every day. AI certainly has changed since the idea was first conceived, and even its goals have, from an online human intelligence to doing busywork.

Katherine Grauman, a professor of computer vision research at the University of Texas, says she is really excited about video summarization. It already has a foundation from computer vision. "[Computer vision] starts with a database of thousands of pictures… 1000 pictures of cats, 1000 pictures of cars, etc. all with a 'correct' answer… then the computer [trains itself]" to recognize the photos.

Video summarization isn't exactly new, either. In fact, the idea has been around almost as long as videos; people will always be lazy and not want to watch them. The basic concept is pretty simple: two people watch "the same video." One watches the 45-minute original, and the other watches the summarized 10-minute version, but both get the same information, one just with 35 minutes of their life back. The idea's shortcoming, however, is that summarizing videos is time-consuming and tedious, so humans don't want to do it. If a computer could work on it, though, videos could be summarized en masse without anyone getting bored or slacking off.

In the end, no one quite knows where AI is going, but so far it's been an up-and-down rollercoaster, so maybe AI will lose popularity again for a few decades, but maybe not. Humanity may just be one turn away from being uploaded to the web. It's entirely possible, however, that researchers will hit a wall, but it doesn't look to be coming too soon. New programs and ideas are being tested every day, so look out for AI in your future.

Computer researchers are great at testing programs, but not so great at making neat wiring, as seen by the hodgepodge of velcro and zip ties holding the wires in place in the IBM computer research lab. Photo by Tina Carter.




Science Fiction In Real Life

Mary Shelley's Frankenstein:

-----------------------------------------------------------------------------------------------------------

It's a concept as old as life itself: immortality. From the Holy Grail to demigods in mythology to Dracula and Camele, immortality was and still is a major motif in literature. Mary Shelley took this to a whole new level: instead of making some miraculous cure-all for death, her protagonist, the brilliant, if slightly mad, Victor Frankenstein creates life from... dead body parts. We aren't Victor Frankenstein, so the ability to reanimate the dead is still out of our reach, but we can take parts of recently dead bodies to save the dying. It's called organ donation. It's a relatively new idea that couldn't successfully be accomplished until anesthesia, blood typing, and organ surgery were all fully understood.

Jules Verne's 'Captain Nemo':

-------------------------------------------------------------------------------------------------------

In the vision of Jules Verne, people of the future live under the waves; they can spend months, years, a lifetime in the big blue. Nemo's submarine, the Nautilus, was named after Robert Fulton's real-life submarine Nautilus (1800), but its design was more closely inspired by the French Navy submarine Plongeur. These days, we still don't live under the sea, but the futuristic design of Verne's Nautilus is very similar to the modern submarine, a ship that crews spend, on average, six months at a time living in. Jules Verne may not have gotten all the details quite spot on, but he hit the nail pretty close to the head on what the "future's" submarine would be. Even the dimensions are similar to those of some U.S. naval submarines: the elliptical shape, a tank that fills with air or water to change her depth, and the 40-or-so-meter-long interior tube all resemble the stats of long-voyage submarines today.


Drawings by Tina Carter, using Adobe Illustrator.


The Magic of CGI:

-----------------------------------------------------------------------------------------------------------------------------

When watching Jurassic Park, Lord of the Rings, Avatar, or any other modern movie with CGI elements, your first thought probably isn't, "Look at all those lumpy computer generations." But that's what the dinosaurs in Jurassic Park, Gollum in the Lord of the Rings trilogy, the creatures and people of Pandora in Avatar, and all the animals in the 2016 Jungle Book are, on the simplest level at least. Their wonderfully realistic design and movement are the art of CGI and motion capture in action. "In 1993... Jurassic Park was the first 'physically-textured' CGI film, meaning [the] dinosaurs appeared incredibly realistic on screen," as opposed to smooth-textured characters like those in Toy Story. CGI isn't a one-step process; it's a long and complicated endeavor. First prosthetics are made, then they are digitally scanned; animators manipulate hand, feet and other movements to make movement as real as possible before texture and minute details are digitally painted and then painstakingly grafted onto the animated objects. In the early 2000s, "a fully rendered CGI character appeared on screen... face to face with other actors" in the Lord of the Rings movies. With the magic of a motion capture suit, animators were able to combine "traditional animation... with artificial intelligence software to replace actor Andy Serkis' movements with those of Gollum's." From the basis of motion capture, facial capture was used eight years later in Avatar, making all of the Pandora natives seem like real people despite their not-so-natural faces.

Star Trek: The Original Series:

---------------------------------------------------------------------------------------------------------------

So many things about Star Trek are still far beyond our reach, and make entertaining visions of a possible future with seemingly magical technology and beyond-awesome alien encounters. However improbable it seems, Star Trek did give us some tech, one piece of which seems outdated in today's world: the flip phone. In the original series, all the members of the USS Enterprise carry a hand-held communication device called a communicator, with a flip cover that gave rise to the clamshell shape of cell phones.

Top photo courtesy of New Line Cinema / Warner Bros. Bottom photo courtesy of The Verge.






The sun rising over Earth, as seen from a NASA space telescope, with part of the International Space Station blocking the light. The blue distortion around Earth is the atmosphere. Photo courtesy of NASA, whose probes bring us to new horizons.

