The Wellesley Globalist: Volume V, Issue II "Instability"


Volume V, Issue II

INSTABILITY


Letter from the Editor:

Dear Globalist Readers,

The Spring issue of the Wellesley Globalist discusses a wide array of subjects, from Cuban immigration policy to cross-cultural perceptions of educational “gap years,” from the legacy of the Iran Nuclear Deal to narratives around the atom bomb in Japan. Though the topics initially seemed disparate, they began to form a cohesive whole, linked by a prevailing sense of instability. Thus, the name of our issue. During these uncertain times, it is our mission as a student publication to engage in the ongoing process of listening, learning, and amplifying the voices of those who stand to lose the most. We as a staff believe that considering diverse perspectives on international affairs is essential to growing as individuals, students, and global citizens; our work this year was an effort to recommit ourselves to this goal. On behalf of the Wellesley Globalist staff, I would like to thank all our writers and contributors whose hard work and dedication brought this issue to life. We hope you enjoy this issue of our publication and encourage you to contribute articles or photographs to our next issue. Please reach out to us if you have any questions or feedback.

Best,
Your Editor-in-Chief, Amanda Kraley ‘17

Editorial Staff:
Editor-in-Chief: Amanda Kraley
Managing Editors: Zarina Patwa and Sarah Shireen Moinuddeen
Associate Editors: Mallika Sarupria, Yashna Shivdasani, Christine Roberts, Tarushi Sinha, Yashna Jhaveri
Production Editor: Eliza McNair
Copy Editors: Laura Maclay and Anastacia Markoe
Layout Staff: Amanda Kraley, Zarina Patwa, Yashna Shivdasani, Christine Roberts


Table of Contents:

Wet Feet, Dry Feet Policy: Evolution and Change by Yashna Jhaveri

Gaps in Between: Cross-Cultural Perceptions of Gap Years by Isabel Yu

Liberalism and Surveillance: The Ethical Problem of the New York Police Department Muslim Surveillance Program by Callie Kim

President Obama’s Legacy and the Iran Nuclear Deal by Mallika Sarupria

The Atomic Bomb and Its Narratives by Makiko Miyazaki


Evolution of the Wet Feet, Dry Feet Policy by Yashna Jhaveri


The “wet feet, dry feet” policy was implemented in 1995 as an amendment to the open-door policy of Cuban immigration to the United States outlined in the Cuban Adjustment Act (CAA) of 1966. This policy marked a turning point in U.S. policy towards Cuba, and was the first attempt to normalize relations between the countries. President Obama’s decision in January 2017 to repeal the policy marked yet another turning point in this complex relationship. The reversal of the policy, seen as the final step in normalizing relations, puts an end to the special treatment of Cubans and brings policy towards Cuban immigrants in line with broader U.S. immigration policy. The Cuban government also welcomed the end of the policy, opening its borders to accept the return of its nationals. In this article, I will examine the evolution of and recent changes to the “wet feet, dry feet” policy with reference to the impact of changing administrations, interactions between the executive and the legislature, the role of the media, shifting public opinion, and the formation of interest groups. The CAA of 1966 allowed anyone who had fled Cuba and entered U.S. waters to remain in the U.S. The introduction of “wet feet, dry feet” in 1995 made the CAA more nuanced: those who fled Cuba and made it to U.S. soil (dry feet) could apply for residency a year later, while those intercepted in U.S. waters (wet feet) would be sent back to Cuba or a third country. Recently, President Obama reversed the policy, signaling the end of eased U.S. immigration for Cubans and replacing it with fairer, more equitable rules. A chronological analysis of Cuban-American immigration policy over these three crucial phases will allow us to see the transition from a relatively flexible policy, to a rigid policy, to the abolition of the policy altogether. The first formal change in Cuban immigration policy to the U.S. took the shape of the CAA.


The precedent for its passage in Congress came from a 1965 speech by President Lyndon B. Johnson, in which he advocated for some of the most liberal U.S. immigration policies in 40 years. Inviting Cubans who desired freedom was another step towards rallying more supporters of democracy, and specifically of U.S. democracy. He also referred to the social contract in the Declaration of Independence, which posits that individuals consent to surrender some freedoms to authority in exchange for protection of their rights. By this logic, the Castro government’s oppressive policies and extreme punishments warranted U.S. involvement to guarantee the safety and inclusion of Cubans. Johnson framed the Cuban issue from a viewpoint of American righteousness and a duty to protect citizens of oppressed regimes.

Photo By: Danny Hammontree

Moreover, his decision to go public with his appeal and directly address the people prompted them to support his vision, and possibly pushed them to pressure their congressmen to fight for Cuban rights. The passage of the Immigration and Nationality Act (INA) in the same year, which aimed to reunite the families of immigrants while attracting skilled labor to the U.S., alongside Johnson’s appeal, led to the CAA being passed by Congress in 1966. The CAA therefore found its motivations in the indirect influence that President Johnson, as leader of the executive branch, wielded over Congress. We also see President Johnson’s active work ethic, especially in foreign policy. Presidents usually have more leeway in foreign policy, as Aaron Wildavsky argued, because they are able to act more quickly, work with less interference, and face issues of greater consequence. The significant foreign policy power that Johnson held over the legislature with regard to Cuban immigration allowed the INA to pass largely uncontested by both parties in Congress, thereby paving the way for the passage of the CAA. In 1995 came the first major deviation from the CAA: the unprecedented “wet feet, dry feet” policy. This policy was the result of negotiations between the Clinton administration and the Cuban government in an attempt to normalize relations between the countries and avoid future refugee crises. One such crisis was the Mariel boatlift of 1980, which led to the deaths of 27 Cubans on board, forced involvement by the U.S. military, and increased the crime rate in Florida. The incident emerged in the aftermath of an attempt by Castro to export Cuba’s problems, namely job and housing shortages, to the U.S. Clinton used the precedent of the Mariel boatlift and similar crises to maneuver his executive branch into negotiations with the Cuban government to prevent similar crises in the future.


These negotiations were conducted in secrecy, with a clear exercise of the executive’s powers. This ability of the executive to act independently can be understood through the separation of powers, the division of government into distinct bodies that concentrate on their respective functions. The executive branch therefore exercised its power through negotiations. Negotiations were deemed necessary to respond to the pressing balsero issue and to tighten the valve on Cuban immigration, which had become increasingly chaotic and dangerous. The indefinite detainment of 30,000 balseros, or Cuban rafters, at Guantanamo Naval Base, with low chances of being granted admission to the U.S., allowed Clinton to leverage the balseros’ freedom and thus gain bargaining power. The pressure of the mass detention of balseros forced Castro to the negotiating table, and Clinton was then able to negotiate an agreement that would, in effect, allow the detained balseros to enter the U.S. through a quota system, while simultaneously preventing a further influx of the 11 million people who might have tried to flee Cuba for the U.S. Finally, the guarantee of repatriation for those Cubans caught at sea, and Cuba’s willingness to accept repatriated Cubans, concluded the negotiations. Keeping the negotiations secret until the end and limiting them to the executive branch allowed Clinton to resolve the pressing issue of detained immigrants while improving the Cuban immigration process. Concentrating responsibilities in the executive branch further eliminated the time lags that congressional action might have incurred. Continuing the argument that presidents prefer to work on, and have more success with, foreign policy issues, we see that the executive’s ability to work independently allowed President Clinton to realize his goals swiftly and without domestic interference.

In addition to understanding how Clinton exercised power, we need to look at the context of the negotiations. Castro’s willingness to negotiate, or even to be open to repatriation, might have been determined by the post-Cold War context, in which Cuba’s primary ally and the U.S.’s primary enemy, the USSR, no longer existed. This created a situation where the U.S. could deal with Cuba without directly or indirectly giving the impression of Cold War motives. Context allowed the issue to be framed as direct U.S. engagement with Cuba rather than covert action. The U.S. also realized that dynamics would change in a post-USSR world, and thus tried to envision a post-Cold War, post-Communist Cuba. The “wet feet, dry feet” policy facilitated direct communication and gave Cubans immigrant status rather than oppressed-victim status. This refreshing treatment of Cuban immigrants was viewed favorably by Castro and thus, compounded with the power of the executive to negotiate, played a significant role in opening up negotiation channels and creating policy. The most recent development in U.S.-Cuba relations comes with President Obama’s repeal of the “wet feet, dry feet” policy and the expansion of communication efforts between the two countries. Precursors of this reversal can be found in the contentious treatment of Cuban immigrants and in increased advocacy for democracy in Cuba. The contentious treatment of Cubans brings one name to mind: Elián González, a young child who became embroiled in an international custody dispute resulting from Cuban immigration policies. His case was widely reported by a media that turned his story into advocacy journalism, intentionally adopting a non-objective viewpoint to achieve a social or political purpose. One of the keystones of the case was a highly publicized photograph, taken by an Associated Press (AP) photographer, of the government raid to rescue Elián.


This image dominated the news for days; Elián’s frightened expression made the raid seem like a kidnapping, as opposed to a government-sanctioned operation. The Clinton administration responded by releasing a photo of the happy father and son, reunited in Cuba. Elián’s return to Cuba was sensationalized by news outlets, with Cubans receiving him as what Coffey, in his book Spinning the Law, describes as a “miracle child.” The AP photograph thus had the effect of stirring up anti-government protests, with protesters chanting “Libertad para Elián,” while it is widely acknowledged today that Elián was in fact glad to return home. By playing on themes of youth and parental rights, Elián’s story escaped the 24-hour news cycle, and his trial took place just as much in a formal court as in a court of public opinion facilitated by the media. Obama’s policy reversal has also been viewed as an expected, somewhat natural progression of Cuban immigration policy in the U.S. In the aftermath of the 1995 policy and the Elián incident, there has been a rise in pro-Cuban-democracy interest groups, such as the Center for a Free Cuba (1997) and the Cuba Democracy Caucus (2004), that are working towards a free, democratic Cuba, an interest that the U.S. shares. This has allowed the executive branch to defer some of its duties to proxy interest groups that share its interests.

Moreover, Fidel Castro’s death led to a decline in repressive leadership, diminishing the need to protect Cuban citizens and prompting the return to a fairer immigration policy. Finally, increased formal communication between the two countries and the end of the embargo have led to more democratic endeavors in Cuba, which will work to liberate the country. All these factors provided Obama the opportunity to end his term by repealing the policy and reducing involvement in Cuba, re-establishing formal relations and communication. For all the certainty that the end of Obama’s term brought for U.S.-Cuba policy, the future is still uncertain. Smith, a reporter for The Weekly Standard, believes that Obama intentionally reversed this policy at the end of his term in order to set a trap for Trump, forcing him to admit Cubans, immigrants of Latin American descent whose inclusion he has campaigned against. However, Trump has not directly commented on the “wet feet, dry feet” policy. His general stance towards strengthening immigration policies has been evident in his campaign and in his recent executive ban on refugees from seven countries. Therefore, in light of a strong media that acts as a watchdog over the government and holds officials accountable for their actions, it is unlikely that Trump will reinstate the policy, for it may undermine his presidency and promises.

“In light of a strong media that acts as a watchdog over the government and holds officials accountable for their actions, it is unlikely that Trump will reinstate the [wet feet, dry feet] policy, for it may undermine his presidency and promises.”



Gaps in Between: Cross-Cultural Perceptions of Gap Years by Isabel Yu


As senior year of high school rolls in, tension across campuses begins to rise. High schoolers stress about college applications, and graduating college seniors feel exponentially more pressure to have something lined up immediately after graduation. For students in most developed countries, the road beyond graduation is fairly straightforward: after high school comes college, and after college, you can either go to graduate school or start working. However, the concept of taking a gap year or two has become increasingly popular in recent years. During this time off, some choose to travel, some choose to work or intern, and some choose to volunteer. No matter what they choose to do, those who have taken gap years almost always come out of their time enriched and recharged to begin school again. Unlike the United States or Europe, not all societies and cultures are as familiar with, or as understanding of, what it means to “take a break.” In many Asian countries, time dedicated to non-career-related activities tends to be socially frowned upon. Few parents and schools encourage students to take time off to figure out who they are and what they want in life. In fact, these differing attitudes towards the concept of gap years can reflect how open-minded and flexible the thinking of a particular society is.

Societies that shy away from social change tend to operate under the assumption that there is a fixed schedule for how one should live one’s life, and in turn feel uncomfortable with the concept of a gap year. Societies that value innovation and uniqueness and place less value on an absolute standard, on the other hand, are more likely to accept that career timelines may look different for everyone and to embrace the intangible values of gap years. Educational trends in the United States epitomize the popularity of gap years among students today.


While there is little data on the number of high school students who have taken gap years, the American Gap Association recorded a decade high in 2015 in the amount of grant money given out, in gap year fair attendance, and in recorded interest in gap years. At the same time, professional post-graduate programs such as law and medical schools have witnessed a massive decrease in students entering straight out of undergraduate study. As of 2013, over 70 percent of students in the entering classes of both Harvard Law School and Harvard Medical School had at least one year of non-academic experience prior to their application, according to the Harvard Crimson. There could be many reasons for this surge; one may be the nature of the educational system here in the U.S. Unlike in other countries, most professional degrees here are post-graduate degrees. Students rarely have the opportunity to specialize and acquire technical skills as undergraduates.

Photo By: Christian Bucad

As a result, people have much longer to decide what they want to do for a career. Even in university, students do not have to choose a major until the end of their sophomore year. The buffer time students have before they must make concrete decisions about their careers makes way for them to experience a lot more before specializing. Taking a break between high school and college, or between college and graduate school, thus makes less of a difference because few people have truly specialized at that point. Another reason gap years may be so popular in the U.S. is that both schools and employers recognize the value of soft skills and are willing to provide training for hard skills. Liberal arts colleges, in essence, embody a style of education that emphasizes soft skills such as analytical ability, critical thinking, good writing, and presentation skills. These skills are not limited to only one field of study or major. Many graduate schools are also open to the idea of accepting students from outside the traditional field of study. For instance, biology students can easily apply to law school, and math majors are able to apply to master’s programs in computer science if they wish. Having a flexible educational system reassures students that regardless of what they choose to do with their time off, it will be a worthwhile experience and a worthwhile addition to their resumes. In addition to schools, American families and culture also tend to be more flexible. While there are still many families who believe that children must go to college, graduate, get a good job, get married and have children, the social atmosphere in the U.S. is rather unrestrained compared to that in many other parts of the world. Thus, not being ready for graduate school or college is an acceptable reason, in many parents’ eyes, for wanting to take time off.



Photo By: Richard Foo


Fewer parents and schools conclude that gap years are a waste of time, and in response, students become more and more willing to explore of their own accord. Europe, in fact, caught on to the trend of gap years much earlier than the U.S. did. One significant difference between the educational systems of these two regions is that in Europe, university is not universally as important as it is here. Many students choose to start working after high school, and many choose to enroll in specialized schools. A bachelor’s degree, therefore, is not as much of an advantage in the job market as it is here in the U.S. Similarly, as students in Europe choose their specialty straight out of high school, university rankings are a little more arbitrary, since different universities, colleges and specialized schools cater to students studying different subjects. Unlike in the U.S., entry to universities is non-selective, and all who complete a high school education can be admitted to college.

Nevertheless, the dropout rate of college students is still close to 50 percent in France, according to University World News. University in that part of the world is no longer directly correlated with success. Without the pressure of needing higher education to succeed, European high school and college students are freer to make alternative choices without the worry that comes with taking time off. This also helps cultivate an atmosphere that allows for deviations from the traditional path of success. Unlike Western societies, Asian, and especially East Asian, cultures think and work in a very different manner. Social consensus holds that there is a traditional path to success, and that one can only succeed through education. South Korea, for example, has the highest total enrollment in tertiary education in the world, according to the Organisation for Economic Co-operation and Development (OECD). Singapore, on the other hand, consistently tops OECD rankings for the quality, equity and efficiency of its schooling system.


Education, and especially higher education, is given such high importance that both state and private entities are willing to invest millions in their students. Korean students spend 220 days a year in school, compared with 180 in the U.S., and spend up to US$18 billion a year on private education. Singapore spends an average of US$650 million a year on tuition fees for a population of only five million. Most of this money is invested in getting high school students into college.

“In a society where parental influence is substantial, many students choose their majors and careers in accordance to their parents’ will. If not, collectivistic mentality also pushes students to pursue what society deems as worthy.”

When college, to a certain extent, is glorified on a pedestal, students have little room to negotiate the terms of their future. People value social success so highly that students do not have the time or the resources to spend exploring their non-career-related interests. In a society where parental influence is substantial, many students choose their majors and careers in accordance with their parents’ will. If not, a collectivistic mentality also pushes students to pursue what society deems worthy. In addition, there is less flexibility between different career paths in Asia. Jobs often hire only candidates who have relevant degrees and experience. It is then very difficult for students to recover from a period of their educational journey that is not relevant to their career. There is also severe prejudice against students who are not immediately employed upon graduation. College students would rather take semesters off and delay graduation than be unemployed by the time they leave school. One of the many reasons for this is that when Korean companies look at an applicant and see a blank between graduation and the time of application, they often conclude that the applicant was simply not good enough to be employed straight out of college. As a result, the average time it takes a student to complete college in South Korea is about five to seven years. Thus, there is simply no room for gap years to become a viable option in between periods of schooling for students in this region. Gap years, in essence, are neither good nor bad. Taking time off may benefit some but have little impact on others. Some may make good use of their time off, and others may do nothing at all. However, regardless of whether these decisions are “beneficial,” it should be left up to the individual to make them. While Asian countries have some of the most well-developed educational systems in the world, rigid social structures and the fear of judgment and adventure have led to the discouragement of any behavior outside of social norms. On the contrary, the concept of gap years flourishes in Western societies that allow for change, difference and individuality. In this respect, perhaps societies must first learn how to embrace deviations from the norm in order to take steps towards gaining educational, social and civil freedom.



Liberalism and Surveillance: The Ethical Problem of the New York Police Department Muslim Surveillance Program by Callie Kim


In this article, I argue that the discontinued Muslim Surveillance Program in New York City was morally problematic, as it caused psychological harm and violated Muslims’ basic civil liberties. Section II gives an overview of the program, explains its rationale, and clarifies different assumptions that are commonly conflated in the discussion of surveillance. Section III explores the utilitarian stance on this issue. Section IV presents my counterarguments. Section V concludes.

Content of the Muslim Surveillance Program

The government’s domestic spying activities have progressed to intrusive levels, primarily due to an increased fear of terrorism after the attacks of Sept. 11, 2001. The Muslim Surveillance Program, operated by the New York City Police Department’s (NYPD) Intelligence Division from 2002 to 2014, was a product of such fear. According to a “Factsheet of the NYPD Muslim Surveillance Program” published by the American Civil Liberties Union (ACLU), the Department engaged in racial and religious profiling and in the surveillance of Muslim religious heads, community leaders, student associations, organizations, businesses and individuals in New York City.

The surveillance took place in every mosque within 100 miles of New York City and extended to New Jersey, Pennsylvania and other states. In particular, police designated entire mosques as suspected “terrorism enterprises,” collected the license plate numbers of every car in mosque parking lots, videotaped worshippers entering and leaving, and had informants carry hidden microphones. The effect was that many Muslims shied away from participating in religious activities, refrained from expressing their political views and even altered their personal appearance. The rationale for this surveillance is best shown in a report called “Radicalization in the West: The Homegrown Threat,” published by the Intelligence Division in 2007. The report claims to identify a “radicalization process” by which individuals turn into terrorists. The “process,” involving Pre-Radicalization, Self-Identification, Indoctrination and Jihadization, is so broad that anyone who identifies as Muslim or engages in Islamic religious practices would be subject to suspicion. Anticipating growing radicalization within the Muslim communities of New York City, the police therefore looked for “hot spots” of radicalization that might give them early warning of potential terrorist attacks. In operation, they focused on 28 “ancestries of interest,” nearly all of them Muslim. In sum, the measures that the Muslim Surveillance Program employed reaffirm the ACLU’s stance that the surveillance of Muslim communities was based on race and religion.


The moral debate over whether the program was justified began when the Associated Press, a multinational nonprofit news agency, published documents describing the program in 2011, and police subsequently acknowledged that the information they collected never generated a lead. To frame the moral debate about surveillance based on race and religion, we have to recognize two assumptions about the productivity of such surveillance in preventing terrorist attacks. First, we assume that there is a strong correlation between membership in certain racial and religious groups and the tendency to commit certain crimes. Second, given this tendency, we posit that police can prevent terrorist attacks if they pervasively and differentially surveil the people of those groups. Thus, we assume that surveillance prevents more terrorist attacks than do other measures that share the same support and expenditures.

However, the moral problem of surveillance arises when measures that appear morally problematic from another perspective (i.e., racial and religious equality) contribute to the provision of national security. Two issues are commonly conflated in the discussion of the Muslim Surveillance Program. First, Islam is being associated with terrorism. Second, the police use one’s race, physical appearance (such as the hijab), and religious practices (such as salat) to “assume” one’s Islamic beliefs. Those assumptions in turn contribute to one’s presumed connection with terrorism. Many discussions have addressed both issues separately, but pay little or no attention to the correlation between them. The first issue concerns the association between Islam and terrorism. This association was further strengthened in the public mind by the September 11 terrorist attacks, when al-Qaeda, an Islamic terrorist group, declared a holy war against the U.S. in the name of Islam. This association was then explicitly shown in the context of the Muslim Surveillance Program, in which police singled out only Muslims for pervasive surveillance, and not institutions or individuals belonging to any other religious faith.

Photo of Linda Sarsour by: Occupy Faith




This selection based on religion shows that the association between Islam and terrorism is not merely an ideology, but a social reality. However, is this association factually and morally right? According to the “Country Reports on Terrorism 2011,” published by the U.S. National Counterterrorism Center, Muslims suffered between 82 and 97 percent of terrorism-related fatalities over the previous five years. This shows that Muslims themselves bear the brunt of terrorism. In addition, Brian Michael Jenkins, a senior advisor at the RAND Corporation (Research and Development), argues that the term “terrorism” is so broad that it includes attacks by mentally unstable individuals who embrace jihadist ideology only to rationalize their aggression. He also points out that jihadist terrorists have killed fewer than 100 people in the United States since Sept. 11, 2001: an average of six or seven jihadist-inspired murders a year, against an annual average of 14,000 to 15,000 homicides, a far better outcome than many people had feared in 2001. Thus, statistically speaking, the association between Muslims and terrorism is insignificant. Given this false association between Islam and terrorism, some might ask why the police still implemented a surveillance program that disregarded the disadvantaged status of religious minorities. One response is that the surveillance program was morally problematic simply because religious profiling is a violation of civil liberties that “trumps” security concerns. However, this response does not fully unpack the complexity of the surveillance program. To explain this complexity, we ought to consider two points. First, and more important, the surveillance program concerns a public good, namely security. Second, situations in which surveillance takes place involve a huge number of people, and police must make fast decisions about whom to interrogate in order to identify “hot spots” where terrorist plans might be formulated or implemented.

Most of the time, additional information and time would be needed for the police to evaluate and act.

Profiling: A Utilitarian Approach

I have found a parallel between religious profiling and racial profiling, as both types of profiling rely on morally arbitrary factors, such as race. Risse and Zeckhauser, in “Racial Profiling,” claim that the harm caused by profiling per se is largely a product of underlying racism. In other words, acts of profiling are harmful because they make light of some people’s unjustly disadvantaged social status. In short, acts of profiling express the underlying injustice of racism. The notion of a “focal point,” developed by Risse and Zeckhauser, further illustrates their claim that the harm of acts of profiling is principally expressive.


They define a “focal point” as a practice or event that comes to symbolize structural disadvantage. Specifically, the focal point becomes associated with the harm attached to such disadvantage, and that harm plausibly accounts for the major share of the harm associated with the practice. In sum, according to them, if a practice is a focal point, its harm is principally expressive. In the context of the Muslim Surveillance Program, they would argue that surveillance becomes a focal point in a racist society, as the use of race, physical traits, and religious practice for investigation triggers and reinforces the underlying racism. In this respect, the harm from the program is principally expressive.

Counterargument

I oppose Risse and Zeckhauser’s utilitarian approach to understanding profiling for two reasons.

Photo by: Carly Comartin

First, when they discuss the harm, they treat surveillance merely as a method, one that does not itself contribute to the formation of racial and religious inequalities. That is, they underestimate how much harm the surveillance program might contribute by reinforcing racial and religious inequalities. Second, the utilitarian approach might permit violating racial and religious minorities’ civil liberties if the result of the surveillance program is greater security. My first concern responds directly to the type of harm that Risse and Zeckhauser describe as principally expressive. I argue that this type of harm is, in fact, more than principally expressive. In particular, their argument holds that racial profiling is not itself a form of racism, so it contributes no further harms. For them, it is acceptable not to consider the motives behind profiling, the manner in which profiling occurs, and the consequences of profiling, as though these were magically clean, innocent, and unscathed. However, in reality, it is extremely implausible that there is no association between racism and racial profiling, including the motives, manners, and consequences mentioned above. In fact, I think racial profiling is a legacy of racism, since it is implausible that racism has no role in explaining the choice of racial profiling over other ways of responding to racial disparities in crime. Thus, the harm caused by racial profiling per se is not just principally expressive, as it is derived from background racism. In the same way, the harm that emerged from the Muslim Surveillance Program was not just principally expressive, as the program employed the use of race, physical appearance, and religious practice for investigation. I argue that such a usage overlooks racial and religious injustice as a crucial force in pushing the formation of surveillance



Obama’s Legacy and the Iran Nuclear Deal by Mallika Sarupria


The Iran Nuclear Deal was signed in July 2015 between Iran and a group of world powers, the P5+1 (United States, United Kingdom, Russia, France, China and Germany). This was a defining moment in American foreign policy: after years of negotiations, the United States had brokered a deal that prevented Iran from building nuclear weapons. However, the implementation of the deal was hindered by Congress’s Iran Nuclear Agreement Review Act. Thus, after the deal was signed, Obama relied not only on his power of persuasion but also on mass media to convince Congress to support the deal. From the very beginning of his presidency, Obama had been a strong advocate for a deal with Iran. In his National Security Strategy, Obama clearly outlined the importance of a world without nuclear weapons and specifically stated that the United States must present Iran with a clear choice and thereby prevent it from developing a nuclear weapon. As president, Obama had greater power over foreign policy, which allowed him to negotiate the deal without congressional interference. Moreover, Obama’s approach to this policy was adaptive, especially with respect to the changing situations and opinions around him. This was exemplified by his decision to conclude the deal as an executive agreement rather than a treaty.

This gave him the authority to sign the deal without requiring a two-thirds majority in the Senate and thus limited the Senate’s power to influence the deal. Knowing that a Republican Congress would vote against the deal and that senators would represent the individual interests of their respective constituents, Obama used this adaptive measure to prioritize the country’s interests. However, the measure especially infuriated Republican senators, who signed Tom Cotton’s open letter to Iranian leaders in an attempt to dissuade Iran from negotiating with the Obama administration. The letter aimed to hinder Obama’s ability to control foreign policy. Furthermore, the Iran Nuclear Agreement Review Act of 2015 was introduced by Congress to limit Obama’s ability to execute the deal. Because of it, in order to implement the deal, Obama needed Congress to pass a resolution accepting the deal within 60 days. Thus, after the signing of the deal, Obama set out to win Congress’s approval. Rather than have to veto a congressional resolution rejecting the deal, Obama persistently sought to maintain and increase congressional support through different methods of persuasion and bargaining. Obama gave multiple speeches in which he argued that rejecting the deal would reduce America’s legitimacy as a diplomatic world leader and that signing it was the only way to negotiate with Iran without the use of force.


Furthermore, he utilized his own administrative structure, delegating authority to Secretary of State John Kerry and Defense Secretary Ashton Carter, who led public briefings and private classified briefings to educate lawmakers about the terms of the deal. Additionally, along with holding multiple congressional meetings and speaking to undecided Democrats to convince members of Congress of the deal’s importance, Joe Biden and Obama also met with interest groups to ensure that they could reach and influence the public. Obama also extended his power of persuasion to the American public through the media. Complementary political news making is common, especially because journalists require access to official sources while politicians like Obama want their opinions heard so that they can garner public support for their policies. In this case, Obama’s deputy national security advisor Ben Rhodes worked to create an echo chamber in American media, using journalists to validate the deal by creating an appealing narrative.

“This narrative promoted the idea that the Iran negotiations had begun when Hassan Rouhani, a moderate leader, had defeated the hardline faction and was steering Iran towards a diplomatic approach.”

This narrative promoted the idea that the Iran deal negotiations had begun when Hassan Rouhani, a moderate leader, defeated the hardline faction in the election and began steering Iran towards a diplomatic approach. However, the narrative was misleading, because the most important parts of the negotiations with Iran had begun before Rouhani’s victory, when the uncompromising leader Ayatollah Ali Khamenei was in power. Along with creating this narrative, Rhodes collaborated with interest groups like Ploughshares to increase news coverage of the deal. According to its records, Ploughshares contributed to this effort by donating $100,000 to National Public Radio so that it would emphasize topics related to the deal. This shows how the media engaged in issue framing by creating a narrative and also increased news coverage of the deal, which consequently created an echo chamber. Moreover, by relying on official White House sources, the information provided by the media tended to be biased and influenced by Obama’s agenda. Thus, the framing and bias worked to Obama’s advantage. It developed the notion that the American government was being decisive by arranging a deal at a time when a moderate Iranian government was willing to negotiate and support American interests. This diverted attention from the policy alternatives proposed by opponents of the deal and instead made it seem as though the Obama administration had brokered the best deal possible. Obama’s persuasion, combined with pro-deal interest groups’ lobbying efforts, successfully influenced members of Congress to support the deal.



Counter-efforts by Republicans failed: Senate Majority Leader Mitch McConnell’s call for a procedural motion to move to a final vote and break the Democratic filibuster fell short by two votes, as senators voted 58-42. As a result, Obama was eventually able to implement the deal. Lastly, it is also important to consider President Trump’s take on the policy in order to understand how it is developing now that Obama has left office. During the election, Trump criticized the deal and even claimed that he would “rip it up.” Right after he became President, the Trump administration informed Congress that Iran was complying with the terms of the deal. However, despite this compliance, the administration recently decided to review the deal to determine whether America should continue upholding it or break it, since Iran is considered a “state sponsor” of terrorism.

While Trump has sustained the deal so far, this sudden change in his policy could be due to the influence of his Secretary of State, Rex Tillerson, who has publicly voiced his disapproval of the deal. The fact that Trump has maintained the deal until now indicates that he might actually see it as the best possible deal for the moment. Moreover, it also portrays how most of the claims candidates make on the campaign trail do not materialize once they become President, as they begin to realize the complexity of these issues.

“It portrays how most of the claims candidates make on the campaign trail do not materialize when they become President as they begin to realize the complexity of these issues.”


Photo by: Boaz Guttman


The Atomic Bomb and Its Complex Narratives by Makiko Miyazaki

It was an embrace that transcended 71 years. On May 27, 2016, President Obama paid a visit to Hiroshima, Japan to commemorate the dropping of the atomic bomb in August 1945. There, Obama exchanged a poignant embrace with Mr. Shigeaki Mori, a Hibakusha (survivor of the bomb) who dedicated his life to finding the families of American prisoners of war (POWs) who had perished in Hiroshima. The embrace between the U.S. president and the Hibakusha who shared a sense of humanity with American POWs symbolized the strength of the U.S.-Japan alliance that arose from the ashes of war. At the same time, remembrance of this historically controversial instrument of war also highlighted differences between American and Japanese narratives about the atomic bomb. These narratives, fragmented and sometimes competing, speak to the subtlety and complexity of the truth. This article seeks to explore these narratives, not to champion a single narrative as correct but to raise questions and assess the extent to which the narratives’ facts can be separated from their biases and perspectives.

The American Narrative

A common narrative in the U.S. justifies the atomic bomb as the instrument that ended WWII. A speech by President Truman on August 9, 1945, declaring the U.S.’s virtual victory in Europe and the Pacific, summarizes this narrative:

“We have used [the atomic bomb] in order to shorten the agony of war, in order to save the lives of thousands and thousands of young Americans.” The narrative seems astonishingly logical. Japan was still launching suicidal military attacks in August 1945. The U.S. dropped atomic bombs on Hiroshima and Nagasaki on August 6 and 9, respectively, and Japan announced its surrender on August 15. The chronology suggests that the atomic bomb critically destroyed the capabilities and morale of the Japanese people, eliciting a surrender that ended WWII. It is no wonder, therefore, that a majority of Americans hold onto this narrative to this day. In a Gallup poll taken in 1945, 85% of Americans approved of the decision to use the atomic bomb. In 2015, the poll showed that 56% of Americans still believed the atomic bomb was justified.

The Japanese Narrative

Unsurprisingly, Japan follows a different narrative. According to the Gallup poll, 64% of Japanese did not consider the bomb to be justified in 1945, and an increased 79% held this view in 2015. An obvious reason is the unimaginable horror that the bomb released, but there are other reasons that point to how Japan would have surrendered regardless of the bomb. In the minds of the Japanese, the question was not if Japan would surrender, but when.




Since most people accepted the need to surrender, or at least to negotiate the terms of peace, the Japanese people feel that the atomic bomb was unwarranted. Below are some prominent narratives that highlight this view.

1) Japan was already critically weakened before the atomic bomb: By August 1945, Japan realized that it was losing the war. The U.S.’s incessant bombing throughout 1945 had critically destroyed Japan’s wartime production capacities. According to the U.S. Strategic Bombing Survey’s Summary Report (Pacific War) of 1946, the bombing cut production by 70% in Tokyo alone and inflicted similar damage on other major centers of production. In this context, then-premier Kantaro Suzuki recognized that American airstrikes would eventually destroy Japan. As he later reflected, “merely on the basis of B-29’s [American bombers] alone I was convinced that Japan should sue for peace.” Indeed, conditions were bleak in Japan. Approximately 30% of the entire urban population of Japan lost their homes to the airstrikes, according to the aforementioned Summary Report. The airstrikes also crippled the transportation systems and pushed civilians to the brink of starvation (some, like my grandmother, were surviving on vegetable roots). It was impossible for the government to continue the war from material and logistical perspectives. Moreover, the Japanese military was also critically debilitated. The once glorious Imperial Japanese Navy and Imperial Japanese Army Air Service had been destroyed, and the planes and other resources that remained were deteriorating. Japan was still sending kamikaze suicide pilots to attack U.S. forces, but given the deteriorating resources, the tactic was by this point approaching fanaticism rather than coherent strategy. Without clear plans and effective instruments to carry out attacks, Japan had no prospect of winning from a military standpoint either.

The atomic bomb was not necessary to push Japan to negotiate for peace. As former Prime Minister Fumimaro Konoye reflected after the war, “[f]undamentally, the thing that brought about the determination to make peace was the prolonged bombing by the B-29’s.”

2) The Soviet Union invaded Japan in August 1945: The air strikes were not the only factor that motivated Japan’s surrender. On August 9, the day of the Nagasaki bombing, the Soviet Union invaded Japanese-controlled Manchuria in clear breach of the Japan-Soviet Non-Aggression Pact. The Soviet Union was allied with the U.S. In response to repeated requests by the U.S., the Soviet Union had promised at the 1943 Tehran Conference to invade Japan once Germany surrendered.


The Soviet Union had promised an invasion long before the U.S. dropped the atomic bomb, yet it chose to invade in August 1945. The timing of the invasion suggests a strategic calculation, in which the Soviet Union sensed that Japan would not be able to stop it. This cements the notion that Japan was too weak to win WWII by August 1945. Indeed, Japan could not stop the invasion. The invasion and its mass casualties pressured top Japanese government officials to seek a quick end to the war to avoid further damage. Although the exact extent to which the Soviet invasion, as compared with the atomic bomb, elicited Japan’s surrender is debated, it is clear that the atomic bomb was not the only cause of the end of WWII.

Photo by David Calhoun

3) Japan had asked for a negotiation of peace as early as April 1945: Regardless of Japan’s inability to defeat the U.S. towards the end of the war, it cannot be denied that Japan was seeking to arrange the terms of surrender months before the U.S. dropped the atomic bomb. In April 1945, Japan made three attempts to communicate with the U.S. and Great Britain through neutral Sweden and Portugal. Japan wanted to “ascertain what peace terms the United States and Britain had in mind,” in the words of then-acting Foreign Minister Mamoru Shigemitsu. Japan was willing to establish peace, as long as the U.S. would not force an unconditional surrender on it. But the U.S. refused to settle for a conditional surrender. Then-U.S. Secretary of State Edward Stettinius Jr. forbade the U.S. Ambassador to Sweden from negotiating, telling him to “show no interest or take any initiative in pursuit of the matter.” This tension over the nature of the surrender, unconditional or conditional, caused a months-long gridlock in Japan’s internal dialogue for peace. The main difference between the two options was the future of Japan’s Emperor. An unconditional surrender could end the war quickly, but it could also lead to the Emperor’s abdication or even the introduction of a republican government. A conditional surrender could retain the Emperor and was considered essential by those who felt they had to defend the imperial government system. There was a fundamental conflict of interest that neither the U.S. nor Japan wanted to negotiate. The U.S. wanted an unconditional surrender to strip Japan of the political system and the military and economic prowess that might enable it to be an aggressor again. Japan could not accept the abdication of the Emperor because he was considered divine and was the embodiment of the nation.



Photo By: Giyu (Velvia)


An unconditional surrender would have stripped Japan of its most fundamental political, religious, and cultural center, as well as its sense of national identity. Some critics who considered the Emperor synonymous with Japan even worried about the end of Japan as a nation. If the U.S. had agreed to let Japan keep its Emperor from the start, the war could have come to an end much sooner than it did, since Japan was willing to accept the other conditions of surrender. But the U.S. was resistant. For the next few months, disagreement about the role of the Emperor persisted between Japan and the U.S., and unconditional surrender was still the official position at the Potsdam Conference in July, where the U.S. officially announced the terms of surrender it wanted.

In the end, Japan accepted an unconditional surrender, but it retained the Emperor as a ceremonial figure under the 1947 post-war constitution that the U.S. helped establish. By then, however, practically four months had passed since Japan had started seeking peace. During those months, several destructive battles took place, including the Battle of Okinawa. According to Ted Tsukiyama of the U.S. Military Intelligence Service, the Battle of Okinawa alone led to 72,000 American casualties, not to mention the loss of 200,000 Okinawan civilians and Japanese soldiers. Granted, the war might not have ended before the Battle of Okinawa because Germany had not yet surrendered, and it is questionable whether Japan would have agreed to a surrender without the fall of Germany. However, the war could more easily have ended in July if the U.S. had agreed to let Japan retain the Emperor at the Potsdam Conference.


In this context, the claim that the atomic bomb saved American lives seems ironic. If the U.S. had let Japan surrender earlier, the war could have ended earlier, and many American lives could have been saved.

Why Such a Difference in Narratives?

Japan’s narrative that the atomic bomb was unnecessary from material, military, and political perspectives offers a stark contrast to the common American narrative that justifies the atomic bomb. Why is there such a difference? Do Japan and the U.S. not share the same WWII history, the facts from which they formulate their narratives? One of the reasons for the difference is the intentional cover-up of these facts by the Truman administration. As several top officials remarked, there was notable doubt in the U.S. at the time as to whether the atomic bomb was needed. Admiral Leahy, Chief of Staff to Presidents Roosevelt and Truman, believed that “the use of the barbarous weapon at Hiroshima and Nagasaki was of no material assistance in our war against Japan.” General Dwight Eisenhower echoed this sentiment, stating: “The Japanese were ready to surrender and it wasn’t necessary to hit them with that awful thing…” Despite such voices, Truman disclosed no such disagreements and instilled in the public a view that justified the atomic bomb. He misled the public about the context in which he dropped the atomic bomb. For example, in the aforementioned speech on August 9, 1945, Truman declared that Hiroshima was a “military base” that the U.S. targeted “to avoid, insofar as possible, the killing of civilians.” Although Hiroshima did have a military base, the atomic bomb was dropped not on the base but on a heavily populated area, killing 140,000 civilians.

It is quite astounding that such a misleading description of Hiroshima could persist, but it did, and in part because of this, the perception that the atomic bomb was justified has remained ever since. Although the cover-up of some facts and dissenting opinions is a fundamental weakness of the American narrative, Japan’s narratives are also flawed. For example, the fact that the Japanese military-led government continued to push for war despite public weariness does give some justification for the U.S. to seek a method of ending the war forcibly. In addition, critics claim that Japan’s nationalistic government overemphasizes Japan’s suffering from the atomic bomb to deemphasize its own imperialist aggression. It is clear that biases and perspectives help shape both countries’ narratives. Both narratives have strengths and weaknesses, and that is why they must be considered together for a more objective understanding of what led up to the dropping of the atomic bomb.

Conclusion

Today, neither the U.S. nor Japan blames the other for the past. As the embrace between Obama and Mr. Mori demonstrated, the U.S. and Japan have reconciled their histories and have forged one of the strongest alliances in the world. What they focus on now is the future: the U.S. and Japan seek to continue cooperating on difficult issues, including the management of nuclear weapons across the globe. To continue close cooperation, it is more essential than ever to meld together different narratives and forge a more objective understanding of the past. With such mutual understanding, the U.S. and Japan can strengthen their alliance and strive for a world free of the suffering caused by nuclear weapons. Our future hinges on such an alliance; otherwise, history will judge us harshly.



