
6.2 Engineering Lead Portfolio


By Kush Bandi ’22 & The Giant Diencephalic BrainSTEM Robotics Team

Note from the Editors

Below, the Giant Diencephalic BrainSTEM Robotics Team describes the robot design they used in the FIRST Tech Challenge competition, in which they won 2nd place at nationals.

Final Design Iterations

When starting the process of integrating all of the subsystems, we decided on a modular approach: every subsystem is its own piece inside the robot's shell. This kept construction simple, since we didn't have to worry about subsystems attaching to each other. That said, as with anything that looks simple, a lot of effort was put in behind the scenes.

First, we have the drivetrain. Behind the final design lies a season of brainstorming. At the beginning of the season, we wanted to fit within the 13.7-inch space between the barriers and the wall; this drivetrain is 13 inches wide, allowing for quick, easy cycles from the warehouse to the shipping hub. We tested many wheel options for this drivetrain, ultimately deciding on four-inch mecanum wheels, which provide the best multi-directional mobility for the challenge. As shown in the image, the motors are placed in the back two-thirds of the robot, leaving an opening for the collector to sit in. Finally, the drivetrain was significantly reinforced with carbon fiber rods throughout the entire design. The result is a robust, compact, and easy-to-assemble design.

Next is the collector. The first aspect of this design that allows our robot to excel at collection is that it cannot collect more than one piece of freight at a time: the distance from the intake to the storage is set to carry only a single piece. To collect the freight, the system uses surgical tubing that rapidly spins inwards, propelling the freight inwards to be stored. When the time comes to transfer into the depositor, a powerful Long Robotics servo utilizes a spring to rotate the entire collector with little strain on the servo. When in its highest position, the gate is lifted and the freight is propelled into the depositor by the tubing.

Then, the depositor takes control. When the freight is transferred from the collector into the depositor, it stays inside until the lift is ready to work. When the system is raised, a servo attached to the back of the depositor closes a lid on the box, and another servo flips the entire depositor 180°. Next, a third servo extends a slide out seven inches using a linkage, the maximum distance the system is capable of moving. From this point, the flap that traps the freight in is opened, releasing it into the intended target.

In addition, the capping system is located on the depositor. A fourth and final servo is connected to a hook-shaped piece which holds our Team Shipping Element (TSE), a rectangular prism with netting on the top. Because the hook is compactly mounted to our depositor's flipping servo, flipping the depositor orients the TSE over the top of the shipping hub. Then, through a series of precise controls, we smoothly and accurately position the TSE above the shipping hub. Finally, the hook is released, and the TSE is placed perfectly on the hub.


Our lift system is imperative to quicker scoring cycles, and to make it as fast as possible, we implemented several strategies to make the entire subsystem markedly more efficient. First, we rigged the linear sliders in a cascading configuration, meaning that each stage moves twice as much as the previous one but requires more torque. To offset this torque, we use a constant-force spring. This spring neutralizes much of the weight of the lift, reducing its effective load from 13 lbs to only 4 lbs and allowing us to use much faster motors to raise a lift that would otherwise need a motor of higher torque. Next, a REV touch sensor mounted at the bottom of the lift lets us recognize when the lift is at its lowest position and re-localize the motors' encoders. This keeps our lifting motion extremely accurate even over the course of an entire match. Finally, the entire component is made of our signature FR4, an epoxy glass laminate, making the subsystem very rigid and consistent over countless robot runs.
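As a rough sanity check, the effect of the constant-force spring can be sketched numerically. Only the 13 lb and 4 lb figures come from the design above; the 1-inch spool radius is an assumed value for illustration.

```python
# Back-of-the-envelope check of the constant-force spring's effect on the
# torque the lift motor must produce. The 13 lb / 4 lb loads come from the
# text; the 1-inch winch spool radius is a hypothetical assumption.

LB_TO_N = 4.448           # pounds-force to newtons
SPOOL_RADIUS_M = 0.0254   # assumed 1-inch spool

def required_torque_nm(load_lbs: float) -> float:
    """Torque at the spool needed to hold the lift static (friction ignored)."""
    return load_lbs * LB_TO_N * SPOOL_RADIUS_M

without_spring = required_torque_nm(13.0)
with_spring = required_torque_nm(4.0)

print(f"Without spring: {without_spring:.2f} N*m")
print(f"With spring:    {with_spring:.2f} N*m")
print(f"Torque reduction: {1 - with_spring / without_spring:.0%}")
```

The roughly 69% drop in required holding torque is what lets a faster, lower-torque motor drive the lift.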

The turret then utilizes a 180-tooth gear to rotate the lift and depositor through up to 290° of rotation. One main challenge of turning this entire system was the point of rotation. Finding a turntable that had little play yet was also compact was a challenge, but we ultimately discovered an IGUS slew ring, which was donated to our team as part of our sponsorship with IGUS. Mounting the top plate and the gear to this ring allowed extremely little play in a small amount of space, which is essential for lifting the depositor to the highest level of the shipping hub. We also utilize a limit switch to orient the turret at the beginning of every match, allowing for extremely precise movement.
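The homing-and-aiming logic this describes can be sketched as follows. This is an illustrative Python sketch, not our actual control code; the encoder tick count and the 20-tooth pinion driving the 180-tooth turret gear are assumed values.

```python
# Hypothetical sketch of the turret homing and angle-to-ticks conversion.
# The motor's ticks-per-revolution and the 20-tooth pinion are assumptions;
# only the 180-tooth turret gear comes from the text.

TICKS_PER_MOTOR_REV = 537.7               # assumed motor encoder resolution
PINION_TEETH = 20                         # assumed driving gear
TURRET_TEETH = 180                        # from the design above
GEAR_RATIO = TURRET_TEETH / PINION_TEETH  # motor revs per turret rev

def ticks_for_turret_angle(degrees: float) -> int:
    """Encoder ticks needed to rotate the turret by `degrees` after homing."""
    motor_revs = (degrees / 360.0) * GEAR_RATIO
    return round(motor_revs * TICKS_PER_MOTOR_REV)

def home_turret(limit_switch_pressed, set_power, reset_encoder):
    """Creep toward the limit switch, then zero the encoder at that position."""
    while not limit_switch_pressed():
        set_power(-0.2)      # slow rotation toward the hard stop
    set_power(0.0)
    reset_encoder()          # this orientation becomes 0 degrees
```

With these assumed numbers, sweeping the full 290° range would correspond to `ticks_for_turret_angle(290)` encoder ticks from the homed position.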

Finally, our carousel system is utilized in both the autonomous and end game periods. To allow for the maximum amount of surface area possible, our robot has two-inch compliant wheels lined across the full front side of the robot. Each wheel has a gear above it, allowing for the transfer of power. There are also two layers of these wheels, compensating for possible variation in carousel height. The system is motor-driven, allowing for precise tuning to find the optimal speed to turn the carousel without knocking the ducks over.

6.3 Machine Learning and the Art of Persuasion: Creating a Digital Assistant for COVID-19 Vaccine-Hesitant Users

By Hannah Chang ’22

Objective

This proposal outlines the basic structure and principles of a "Digital Assistant" that not only responds to requests for vaccine-related information but also tries to persuade people to change their attitudes and behaviors, specifically to convince the vaccine-hesitant to receive the vaccine.

PART I: COVID-19 Digital Assistant Design Overview

What this Digital Assistant sets out to do is to provide factual information on the COVID-19 vaccines and persuade vaccine-hesitant users to reconsider not getting vaccinated. The Digital Assistant will be connected to medical websites to answer basic factual questions from users, including information for people with specific medical conditions. Since many skeptics base their hesitancy on mistrust, the Digital Assistant will also operate with an emphasis on transparency. A disclosure of the vaccine development process will be provided, as well as a scientific explanation of what happens to the body during a vaccination. Statistics on the percentage of the population vaccinated and commonly reported reactions to each type of vaccine will also be reported.

Representatives from local communities can volunteer to speak on their experiences taking the vaccine, as specific communities might have targeted concerns about the vaccine. Many African-Americans distrust the medical system because of the racism embedded in it, leading to suspicion about COVID vaccines [5]. Therefore, vaccinated people should share their experiences to build confidence in the safety and effectiveness of vaccines. A list of vaccinated public officials, including politicians and religious figures, will also be displayed. Members of Hispanic communities reported that there is not enough information about COVID-19 vaccination in Spanish [5], so to ensure that all communities receive valuable information, this Digital Assistant will support multiple languages. In addition to these basic functions, the Digital Assistant will also have three specific features, discussed below.

Feature 1: Providing Personal Stories

A powerful method of persuasion is uniting an idea with emotion. For example, an empirical study with experienced judges and attorneys showed that stories which evoked emotional responses actually lent more credibility to the legal claims being made; the empathy those stories created carried into the judges' thinking and decision making, ultimately affecting their rulings [4].


The government can affect public sentiment toward vaccination through an app that provides accounts of personal experiences with COVID-19, the vaccination process, and reactions to vaccination, both positive and negative, reported by people of various ages, races, occupations, locations, and political ideologies. This way, users can gain insight from people in their own communities who may have shared the same concerns they currently have. This feature can be presented in text, audio, or video format in a casual manner to maximize authenticity.

Feature 2: Resolving Misconceptions

With the current usage of social media platforms, people have been creating and sharing mass amounts of information, fake or real, about COVID-19 vaccines. Vaccine skeptics report that one reason for their hesitancy about getting vaccinated is that they are unable to identify which information is correct. Conspiratorial thinking is a major contributor to vaccine skeptics' hesitancy, as it can provide comfort and stand as "a way to get one's bearings during a rapid change in the culture or the economy, by providing narratives that bring order" [5].

One way to counter false information and misconceptions about COVID vaccination in online resources is with an AI Digital Assistant. When the user reads an article related to the COVID-19 vaccines, the Digital Assistant can scan the text and give a pop-up notification if it detects false information such as incorrect statistics or conspiracy theories; it can also provide information approved by governmental health agencies, with source references. Through the Digital Assistant, people, including vaccine skeptics, will be able to weigh their decisions based on factual information and will be less likely to be dissuaded from vaccination.

Machine learning classification methods will be employed to enable the Digital Assistant to distinguish between factual and fake information. I will first compile a dataset of public articles, posts, and chat threads from Google and popular social media platforms (e.g., Instagram, Facebook, and Reddit) that contain incorrect information about COVID-19 vaccinations. Unwanted variables such as URL, author, username, date posted, and category will be filtered out, and the format and structure of these articles will be adjusted to maintain consistency. I will then extract linguistic features (e.g., word sentiment, percentage of stop words, informal language, and certain keywords relating to well-known vaccine myths) using the Linguistic Inquiry and Word Count (LIWC2015) software. Up to 90 features will be extracted from each text, and each text will be classified into one of the categories of psychological impact. These input features will then be used to train machine-learning models. Each dataset will be divided into training and testing sets with a 70/30 split, and each set will have a similar distribution of articles, posts, and threads, shuffled to ensure a fair allocation of false and true information between the training and testing instances. Since these models are more complex in nature, I will reserve more data for training and cross-validation. To build the classifier, I will use an ensemble of methods including logistic regression, random forest (RF), and multilayer perceptron (MLP) models. Logistic regression suits the task because the text is classified from a wide feature set into two classes; since the features are high-dimensional and represent different categories (calculated from LIWC2015), I will also use RF and MLP models. Furthermore, RF tends to have a lower error rate than comparable models due to the low correlation among its trees. Each model will be trained multiple times with different sets of parameters using grid search to optimize the model and to prevent over-fitting or under-fitting the data.
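As a concrete illustration, this pipeline could be prototyped in scikit-learn roughly as follows. The synthetic data stands in for the LIWC2015 feature vectors, and the hyperparameter grids are placeholders rather than the final tuned values.

```python
# Minimal sketch of the described pipeline: 70/30 stratified split,
# grid-searched random forest, and a soft-voting ensemble of logistic
# regression, RF, and MLP. Synthetic data stands in for LIWC features.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for the extracted linguistic features (90 per document);
# label 1 = false information.
X, y = make_classification(n_samples=600, n_features=90, random_state=0)

# 70/30 split, stratified so false/true examples stay balanced.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

# Grid search over a few parameters to guard against over-/under-fitting.
rf = GridSearchCV(RandomForestClassifier(random_state=0),
                  {"n_estimators": [50, 100], "max_depth": [5, None]}, cv=3)

ensemble = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", rf),
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=0)),
], voting="soft")

ensemble.fit(X_train, y_train)
print(f"Test accuracy: {ensemble.score(X_test, y_test):.2f}")
```

On real data, the LIWC feature matrix and myth-keyword counts would replace `make_classification`, and the grids would be widened for a fuller search.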

Feature 3: Scary COVID Statistics

The usage of fear is not a new method of persuasion in public health; many medical advertisements emphasize the potential dangers individuals might face if they do not adopt the messages' health recommendations. This feature of the Digital Assistant focuses on the dangers of COVID for all groups of people, and on the vaccine's potential to reduce the rate of death. The user will provide information such as age, specific health conditions, and location to determine a personalized COVID risk score before and after vaccination. Two scores will be calculated: risk of contraction and risk of death.

Again, machine learning classification methods will be employed to enable the Digital Assistant to formulate the two risk calculations based on trends in public health data. The dataset will include medical records of COVID-positive patients with traits such as age, medical history, health conditions, and exposure to outbreak environments. For the development of the score and the ML models, patients will be classified according to disease severity: non-severe (patients who tested positive for COVID-19 but were neither admitted to the ICU nor died of any cause during their hospital stay), severe (patients who tested positive for COVID-19 and required ICU admission at any stage during the disease), and extremely severe (death of any cause during the hospital stay). Demographic data will be extracted from the records, including age at the time the COVID-19 test was conducted, sex, weight, height, body mass index (BMI), and specific health conditions, in particular substance use (nicotine, alcohol, drugs), cardiovascular diseases, pulmonary diseases, type II diabetes, and cancer. The data will then be randomly divided into training and test sets with an 80/20 split, stratified for severe and non-severe cases in each set. A total score will be calculated from all of these parameters for each patient in the training and test sets. For the training set, I will use a local regression fitting function (LOESS) to plot parameters against severity. The probability of a severe outcome can be determined by fitting the total multivariable score to the observed outcome using logistic regression. The area under the receiver operating characteristic curve (AUROC) can be used to evaluate these classification models and quantify, specifically, the predictive value of the score.

For building the classification models, I will use an ensemble combination of models including logistic regression, decision tree induction (DTI) using a variation of classification and regression trees (CART), random forest (RF), k-nearest neighbors (kNN), and multilayer perceptrons (MLP). Parameter values will be scaled to the range between 0 and 1, with the exception of DTI and RF, where the original parameter values will be used. All models will be trained using repeated k-fold cross-validation for model evaluation and revision.
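A minimal sketch of this evaluation setup, with simulated records in place of real patient data, might look like the following. It shows a subset of the listed models (logistic regression, kNN, RF); the choices of scaler, fold counts, and class balance are illustrative assumptions.

```python
# Sketch of the severity-classifier evaluation: features scaled to [0, 1]
# for distance/gradient-based models but left unscaled for trees, repeated
# stratified k-fold cross-validation, and AUROC as the headline metric.
# The patient features here are simulated stand-ins.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Simulated records: age, BMI, comorbidity indicators, etc.; label 1 = severe.
X, y = make_classification(n_samples=500, n_features=12,
                           weights=[0.8, 0.2], random_state=0)

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)

models = {
    # Scaled to [0, 1] where the model is distance- or gradient-based...
    "logistic": make_pipeline(MinMaxScaler(),
                              LogisticRegression(max_iter=1000)),
    "kNN": make_pipeline(MinMaxScaler(), KNeighborsClassifier()),
    # ...but the tree model uses the original parameter values.
    "RF": RandomForestClassifier(random_state=0),
}

for name, model in models.items():
    auroc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()
    print(f"{name:>8}: mean AUROC = {auroc:.2f}")
```

The DTI (CART) and MLP models would slot into the same dictionary, and the three-way severity labels would replace the binary labels once real records are available.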

Adaptive Trial Design: Determining the Most Effective Methods

A trial will be developed to find out which of the multiple persuasion tactics are effective in getting the public vaccinated. I will first recruit trial participants of all genders and various races, ages, occupations, and religions, then randomize the participants into a number of groups in which the participants may share a similar characteristic, e.g., race or religion, with other characteristics randomized. I will subject each group to a persuasion tactic and then determine the best and worst persuasive features based on users' responses over time. One round of the trial will last four weeks. At the end of each round, the participants will fill out a short survey asking if they are willing to get vaccinated, and a sentiment analysis will be conducted on the survey responses to determine if the persuasion tactic has moved the sentiment of the participants toward getting vaccinated. In the next round of the trial, a bigger portion of the participants will be assigned to the methods with higher effectiveness as determined in the previous round, and their sentiments will be measured again after four weeks. Less effective methods will be dropped over several iterations, and the particularly effective methods will emerge in the meantime. User responses can also be grouped by age, race, or political ideology to examine the effect of these characteristics on trends.
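The reallocation rule between rounds can be sketched as follows. The tactic names, effectiveness numbers, and drop threshold below are invented for illustration; in the actual trial they would come from the sentiment analysis of each round's surveys.

```python
# Toy sketch of the adaptive allocation rule: after each 4-week round, each
# surviving tactic receives a share of the next round's participants
# proportional to its measured effectiveness, and weak tactics are dropped.
# All numbers here are hypothetical.

def next_round_allocation(effectiveness, n_participants, drop_below=0.10):
    """Map tactic -> participant count for the next round."""
    survivors = {t: e for t, e in effectiveness.items() if e >= drop_below}
    total = sum(survivors.values())
    return {t: round(n_participants * e / total)
            for t, e in survivors.items()}

# Round-1 sentiment shift per tactic (hypothetical survey results).
round1 = {"personal_stories": 0.40, "myth_busting": 0.35,
          "fear_appeal": 0.20, "control": 0.05}

print(next_round_allocation(round1, n_participants=400))
```

Iterating this rule over several rounds concentrates participants on the strongest tactics, mirroring the adaptive randomization designs surveyed in [2].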

References

[1] David Kestenbaum. The Elephant in the Zoom. URL: www.thisamericanlife.org/736/the-herd/act-two-5.

[2] Jianchang Lin et al. A General Overview of Adaptive Randomization Design for Clinical Trials. URL: www.hilarispublisher.com/open-access/a-general-overview-of-adaptive-randomization-design-for-clinical-trials-2155-6180-1000294.pdf.

[3] Lauren Neeragard and Hannah Fingerhut. Vaccine Wariness Dips; Obstacles Remain. May 2021. URL: digital.olivesoftware.com/Olive/ODN/PhiladelphiaInquirer/shared/.

[4] James Sudakow. A Good Story Is Always Far More Persuasive Than Facts and Figures. Aug. 2017. URL: www.inc.com/james-sudakow/why-a-good-story-is-far-more-persuasive-than-facts.html.

[5] Sabrina Tavernise. Vaccine Skepticism Was Viewed as a Knowledge Problem. It's Actually About Gut Beliefs. Apr. 2021. URL: www.nytimes.com/2021/04/29/us/vaccine-skepticism-beliefs.html.

6.4 Condensed Design Proposal AE&D

By Dan Myers ’22 and Miranda Hopper ’22

On August 11th, 2021, near Orlando, FL, a woman named Shamaya Lynn was shot during a Zoom call in her home. Someone on the call called 911 and reported that they had seen a toddler before hearing a loud noise and witnessing Shamaya falling backwards out of her chair. Investigators concluded that Shamaya’s young daughter had gotten ahold of an unsecured handgun and discharged it, fatally wounding her [3].

On April 17th, 2021, in Baker, LA, an unsupervised three-year-old got ahold of their father’s newly purchased semi-automatic pistol (purchased for self-defense) while he was making lunch in the other room. The child was pronounced dead at the scene, having pulled the trigger, fatally shooting themself [1].

Teenagers aged 14-17 were the largest group affected by these unintentional shootings, followed by children aged five and under; seven in ten of these shootings occurred in the child's home [2]. In 2017 and 2020, there was a noticeable surge in unintentional child shootings, which aligns with the surge in the number of guns in the United States in 2017. Of the shootings where information on the gun used could be obtained, 85% of the incidents involved a handgun; rifles and shotguns made up 7%; and assault-style rifles contributed less than 1% [2].

So where does the problem lie? It's a complicated answer. Gun culture in America isn't going away anytime soon, or likely ever. What can we do to reduce the rate of incidents in a country so obsessed with its firearms? To attempt to come to a conclusion, you first have to look at what's already been done. According to the American Academy of Pediatrics (AAP), the safest home for a child is a home without guns. The AAP states that "the most effective way to prevent unintentional gun injuries, suicide, and homicide to children and adolescents, research shows, is the absence of guns from homes and communities" [4]. This, of course, isn't a realistic solution.

Some of the most common safety measures in households that do have guns are gun safes/lockboxes, gun trigger locks, and ammunition lockboxes. It is also recommended that guns are not just hidden but properly stored and locked while unloaded, and it is suggested that ammunition be stored separately. The AAP advises gun owners to keep the safety catch in place at all times and not to allow children to handle any weapons, regardless of whether the gun is unloaded or the safety is on [4]. Maintaining gun safety in all its facets is one task the individual home/gun owner can accomplish, but children tend to socialize in and around environments outside of their own home. As a result, their safety cannot be guaranteed, so the AAP recommends parents determine whether there are unlocked guns in a house or building before allowing their child to visit; more than a third of all unintentional shootings of children take place in the homes of their friends, neighbors, or relatives. Lastly, and arguably most importantly, the AAP strongly urges parents to educate their children about gun safety and inform them that guns are a serious danger if mishandled. Parents must remind their children that what they see in media such as movies is not reality, and that firearms are weapons with very real dangers.

While there is no valid reason not to own a gun safe, many people across the United States cite one particular objection to purchasing one: lost time in a self-defense situation. Generally speaking, gun owners do not want to be fiddling with a lock or keypad once, or even twice if they also have an ammunition safe, in a life-or-death situation. There is also the problem of cost. Neither a gun safe nor an ammunition safe is cheap, and gun owners of lower income may not feel the need, or have the means, to invest in either.

The bottom line is that children should not have to be in danger over their parent's or an adult's choice to own a personal firearm. The fact that there are statistics specifically on children being involved in accidents with firearms should speak for itself. The overarching hope is to reduce the incidents of young children, typically from six months old to roughly eight years old, unintentionally harming themselves or others. In the United States, roughly five percent of annual gun deaths are unintentional shootings by individuals under the age of 18, and roughly 91 percent of these victims are also under 18, making for a tragedy that is uniquely American [2]. The solution has to be functionally childproof, yet still fulfill and address the wants and concerns of the adult who owns the gun. It must be uninteresting and challenging enough that it's difficult for a child to unlock, but simple enough that an adult could quickly remove it if need be.

References

[1] The Advocate. Toddler gets ahold of gun, dies in accidental shooting while dad was making lunch. Apr. 2021. URL: https://www.theadvocate.com/baton_rouge/news/crime_police/article_6870e4aa-97c2-11eb-9942-5bc77d4fa12f.html.

[2] Everytown. Preventable Tragedies. Aug. 2021. URL: https://everytownresearch.org/report/notanaccident/.

[3] NBC News. Toddler shoots, kills mom during video call after finding gun, Florida police say. Aug. 2021. URL: https://www.nbcnews.com/news/us-news/toddler-shoots-kills-mom-during-video-call-after-finding-gun-n1276722.

[4] American Academy of Pediatrics. URL: https://www.aap.org/.

6.5 An Ethical Future for Tech

By Julia Stern ’22

The past decade has been fraught with cases of race-based algorithmic bias, lawsuits over reckless data collection [10], and job losses caused by automation [4]. Artificial intelligence has brought a new wave of uncertainty to the world of tech, and its rapid expansion will amplify these risks in coming years. There have been many efforts to combat the ethical risks of technological development, but few have been successful. The only way to create fair, safe, and equitable AI systems is to overhaul the values and practices of the tech world. A brighter future for tech must involve these key efforts.

Ethics-Sensitive Computing

Ethics should be a priority, not an afterthought—this belief will drive ethics-sensitive computing. It is a conceptual practice more than a concrete one, and it hinges on the widespread recognition of ethics in the tech world, both in formal and informal spheres.

Any computer science education lacking an ethical component is incomplete [7]. In middle-school and high-school computer science curricula, there must be an underlying emphasis on human-centered design. At every level of education, students must recognize that technological development does not exist in a vacuum: transformation in the digital world leads to change in the physical world.

Ethics education becomes most critical at the university level, especially for students who intend to work in technology and related sectors. Computer science programs are the optimal place to introduce ethics-sensitive computing, as students have yet to encounter the economic pressures of the tech world. It is a simple step forward: students must enroll in an ethics course, specifically targeting the ethics of computing, as part of their degree requirements, and a new generation of computer scientists, engineers, data scientists, and others will be familiar with ethics-sensitive computing before they enter the workforce. To a modest extent, this effort will encourage self-correction among the future leaders of tech.

For ethics-sensitive computing to work, however, having employees dedicated to the ethical concerns of technology is necessary. For smaller companies, this initiative could be an Ethics Specialist; for larger companies, it could be an Ethics Team or an entire department devoted to ethics. Similar to medical professionals, lawyers, or teachers, these individuals must be licensed, a process that involves rigorous initial preparation from an outside institution and yearly training.

The goal of an Ethics Specialist or Team is to actively promote ethical behavior rather than reprimand harmful behavior. Some of their responsibilities are as follows.

1. Ethics Specialists/Teams carefully review, test, and assess a new technology before its release, and they are able to make suggestions or raise concerns without pressure from other parts of the company. This also facilitates accountability in artificial intelligence: if ethical concerns about a new technology are ignored, then the culpable entity is clearer, namely the people who ignored them. If an Ethics Specialist or Team fails to predict ethical consequences, however, then accountability remains a tricky undertaking.

2. Ethics Specialists/Teams oversee the selection and cleaning of data, and after the completion of a new product or technology, they use standardized anti-bias metrics to assess its "fairness factor" and identify algorithmic flaws. Another component of ethics-sensitive computing is the identification and deconstruction of bias, a joint effort between tech employees and ethics experts. Simply put, "labels matter" [5]. Their biases are often hidden, impossible to detect when designing an algorithm but clear as day when the algorithm runs. Artificial intelligence is dependent on data collection, so ethics experts must assess data before it trains machines.

3. Ethics Specialists/Teams must reflect diversity and inclusion on multiple levels. This commitment means diversity of demographic features like race and gender, but it also requires a diversity of background, experience, and knowledge. It is important to note that ethics experts can come from any background; professionals from non-tech fields, such as academics, lawyers, or mathematicians, can perform this job, though a solid understanding of technology is always necessary. On a multi-person team, it is preferable to achieve a mix of different professionals. Thorough and legitimate diversity ensures the proper execution of ethics-sensitive computing.

4. Ethics Specialists/Teams promote the practice of ethics-sensitive computing. They are responsible for the continued education of company employees, and they ensure that ethics is a priority at every level of a company. They are proactive, not reactive. They are in tune with the community around them, and they think big picture, grasping the full impact of their work in different communities.

Education and supervision are two factors that will further ethics-sensitive computing. Bottom-up change is the first step towards equitable and human-centered tech.

Rethinking FAT

As expert Dr. Aarti Singh explained in a guest lecture, FAT stands for the three factors that ethics should always consider: fairness, accountability, and transparency. FAT [8] is a good starting point for the creation of an industry-wide ethics code, but to further protect consumers from hidden abuse, I propose FAAT.

The additional A stands for Autonomy. Autonomous consumers can act in accordance with their beliefs, desires, and morals, and they are free from the control of outside influences. The tech field must honor people's right to choose; citizens can choose to safeguard their data, to prefer the 'un-optimized, un-efficient' [7] option, to trust humans over machines, and so on, even if these decisions are technically "irrational."

Coders and corporations must acknowledge that they don’t know what’s best for individuals and that to believe they do is both dangerous and arrogant. Even the addictive, deceptive designs of social media networks undermine the autonomy of the individual. These practices are flagrant abuses of power. Human-centered technology strives to improve standards of living, not diminish them. Respecting the autonomy of consumers is essential for ethical computing, and it complements the values already set forth by FAT. FAAT should inspire the development of an industry standard of ethics.

An Industry Standard of Ethical Conduct

There have been previous attempts to create "honor codes" [2] for the tech industry, but they have largely failed for one reason: "self-regulation is not enough" [1]. This is the glaring reality of Big Tech: as long as ethical concerns are caught in the crossfire between economic and social interests, meaningful pressure will never come from the internal workings of Big Tech. But if pressure comes from outside sources, mainly consumers and institutions concerned with public well-being, then an industry-wide "honor code" could work. A legitimate, trustworthy, and neutral organization must establish an industry standard of ethics for the tech field.

We socially regulate other institutions—why not technology? We have expectations for the moral conduct of doctors, lawyers, and teachers—why not apply the same standards to the leaders of tech? If an ethics code reaches sufficient recognition and validity, it is likely that technology professionals will respect it without legal reinforcement. With an established benchmark for ethical conduct, regulation becomes more straightforward, as individuals can be suspended, fined, or fired if they breach the expectations for ethical tech.

Tech giants continue to evade accountability for their abuse of privacy, and as artificial intelligence expands tech’s reliance on data collection [3], the asymmetry that already exists in consumer-corporation power relations will grow. Consumers have to restructure their engagement with Big Tech. Social norms are a powerful tool, and we can use them to bolster the ethical expectations established by a formal code. An ethics code will only work if citizens recognize that deception, data mishandling, and privacy abuse are never acceptable [9]. As we march towards a data economy [6], it is necessary that consumers regain the digital power behind the elusive doors of Silicon Valley.

References

[1] J. Buolamwini. Announcing the Sunset of the Safe Face Pledge. Feb. 2021. URL: https://medium.com/@Joy.Buolamwini/announcing-the-sunset-of-the-safe-face-pledge-36e6ea9e0dc5.

[2] J. Buolamwini. The Algorithmic Justice League. URL: https://www.ajl.org/about.

[3] M. Burgess. What is the Internet of Things? WIRED explains. Feb. 2018. URL: https://www.wired.co.uk/article/internet-of-things-what-is-explained-iot.

[4] W. Knight. China Wants to Replace Millions of Workers with Robots. Dec. 2015. URL: https://www.technologyreview.com/2015/12/07/164672/china-wants-to-replace-millions-of-workers-with-robots/.

[5] R. Benjamin. Assessing risk, automating racism. 2019. URL: https://winchesterthurston.myschoolapp.com/ftpimages/1531/download/download_6420554.pdf.

[6] Gabe Scelta. Data Economy: Radical transformation or dystopia? 2019. URL: https://www.un.org/development/desa/dpad/wp-content/uploads/sites/45/publication/FTQ_1_Jan_2019.pdf.

[7] J. Shaw. Artificial Intelligence and Ethics. Jan. 2019. URL: https://www.harvardmagazine.com/2019/01/artificial-intelligence-limitations.

[8] Taken from Dr. Aarti Singh's lecture.

[9] C. Véliz. Privacy Matters Because It Empowers Us All. Sept. 2019. URL: https://aeon.co/essays/privacy-matters-because-it-empowers-us-all.

[10] Z. Wichter. 2 Days, 10 Hours, 600 Questions: What Happened When Mark Zuckerberg Went to Washington. Apr. 2018. URL: https://www.nytimes.com/2018/04/12/technology/mark-zuckerberg-testimony.html.

All outstanding work, in art as well as in science, results from immense zeal applied to a great idea.

– Santiago Ramón y Cajal
